[jira] [Closed] (IMPALA-8878) Impala Doc: Query Profile Exported to JSON

2019-09-20 Thread Alex Rodoni (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rodoni closed IMPALA-8878.
---
Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> Impala Doc: Query Profile Exported to JSON
> --
>
> Key: IMPALA-8878
> URL: https://issues.apache.org/jira/browse/IMPALA-8878
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Docs
>Reporter: Alex Rodoni
>Assignee: Alex Rodoni
>Priority: Major
>  Labels: future_release_doc, in_34
> Fix For: Impala 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)



[jira] [Commented] (IMPALA-8962) FETCH_ROWS_TIMEOUT_MS should apply before rows are available

2019-09-20 Thread Tim Armstrong (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934809#comment-16934809
 ] 

Tim Armstrong commented on IMPALA-8962:
---

I think it's unlikely that it's safe to remove BlockOnWait() - I think we need 
that currently to get to the point where the Coordinator is instantiated. I'm 
not sure if the HS2 protocol is clear on *when* in the query state machine it 
is valid to call fetch, but there's potentially a long time that elapses 
between the Exec() call and when the Coordinator and PlanRootSink are 
instantiated. E.g. this includes time in admission control.
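The behavior being requested can be sketched with a toy polling wait in plain Python (the `rows_available` callable is a hypothetical stand-in for the server-side check that the Coordinator/PlanRootSink have produced rows; none of this is actual Impala code):

```python
import time

def block_on_wait_with_timeout(rows_available, timeout_ms, poll_ms=10):
    """Sketch of what IMPALA-8962 asks for: stop waiting for the *first*
    row batch once timeout_ms elapses, instead of blocking indefinitely.
    `rows_available` is a hypothetical callable standing in for the
    server-side check that rows are ready to fetch."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while True:
        if rows_available():
            return True   # rows ready: proceed to FetchRows
        if time.monotonic() >= deadline:
            return False  # timed out before the first batch appeared
        time.sleep(poll_ms / 1000.0)
```

A client that sees False can return an empty batch to the caller and retry the fetch later, rather than hanging while, for example, admission control holds the query.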

> FETCH_ROWS_TIMEOUT_MS should apply before rows are available
> 
>
> Key: IMPALA-8962
> URL: https://issues.apache.org/jira/browse/IMPALA-8962
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> IMPALA-7312 added a fetch timeout controlled by the query option 
> {{FETCH_ROWS_TIMEOUT_MS}}. The issue is that the timeout only applies after 
> the *first* batch of rows is available, because both Beeswax and HS2 clients 
> call {{request_state->BlockOnWait}} inside {{ImpalaServer::FetchInternal}}. 
> The call to {{BlockOnWait}} blocks until rows are ready to be consumed via 
> {{ClientRequestState::FetchRows}}, so clients can still end up blocking 
> indefinitely waiting for the first row batch to appear.




-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8878) Impala Doc: Query Profile Exported to JSON

2019-09-20 Thread Alex Rodoni (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934800#comment-16934800
 ] 

Alex Rodoni commented on IMPALA-8878:
-

https://gerrit.cloudera.org/#/c/14276/

> Impala Doc: Query Profile Exported to JSON
> --
>
> Key: IMPALA-8878
> URL: https://issues.apache.org/jira/browse/IMPALA-8878
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Docs
>Reporter: Alex Rodoni
>Assignee: Alex Rodoni
>Priority: Major
>  Labels: future_release_doc, in_34
>







[jira] [Work started] (IMPALA-8878) Impala Doc: Query Profile Exported to JSON

2019-09-20 Thread Alex Rodoni (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-8878 started by Alex Rodoni.
---
> Impala Doc: Query Profile Exported to JSON
> --
>
> Key: IMPALA-8878
> URL: https://issues.apache.org/jira/browse/IMPALA-8878
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Docs
>Reporter: Alex Rodoni
>Assignee: Alex Rodoni
>Priority: Major
>  Labels: future_release_doc, in_34
>







[jira] [Resolved] (IMPALA-8263) Planner failed to flip build/probe side of join

2019-09-20 Thread Tim Armstrong (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong resolved IMPALA-8263.
---
Resolution: Not A Bug

The row size of the left side is larger, so this is actually correct (Impala 
mostly makes these decisions based on total byte size rather than row count).
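The byte-size reasoning can be checked against the estimates in the plan fragment quoted below, as a back-of-the-envelope calculation (not planner code):

```python
# Rough input sizes from the plan's estimates: the planner compares bytes
# (row size x cardinality), not row counts, when choosing the build side.
left_bytes = 129 * 39_660     # join 10 output: row-size=129B, ~39.66K rows
right_bytes = 10 * 150_000    # customer scan: row-size=10B, 150K rows

# The scan side is far smaller in bytes, so keeping it on the hash-table
# build side is the cheaper choice even though it has more rows.
assert right_bytes < left_bytes
```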

> Planner failed to flip build/probe side of join
> ---
>
> Key: IMPALA-8263
> URL: https://issues.apache.org/jira/browse/IMPALA-8263
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 3.1.0
>Reporter: Paul Rogers
>Priority: Major
>
> TPC-H query 8 was reviewed after applying the changes proposed for 
> DRILL-8014. (See {{tpch-all.test}}.) The revised plan has better cardinality 
> numbers, but contains this odd structure:
> {noformat}
> 11:HASH JOIN [INNER JOIN]
> |  hash predicates: o_custkey = c_custkey
> |  row-size=139B cardinality=39.66K
> |
> |--04:SCAN HDFS [tpch.customer]
> | row-size=10B cardinality=150.00K
> |
> 10:HASH JOIN [INNER JOIN]
> |  hash predicates: o_orderkey = l_orderkey
> |  row-size=129B cardinality=39.66K
> {noformat}
> As I understand it, the planner should flip the left and right sides of an 
> inner join if the right side (the 04 scan) has a larger cardinality than the 
> left (join 10) side. That flip did not happen in this case, causing the join 
> to build a hash table about 4 times larger than necessary.
> Perhaps there is some other constraint. Investigate to determine if the 
> behavior is correct (and if so why), or the source of incorrect behavior.






[jira] [Assigned] (IMPALA-8647) The cardinality associated with a SelectNode is 0 even if its child node has a non-zero cardinality

2019-09-20 Thread Tim Armstrong (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong reassigned IMPALA-8647:
-

Assignee: Tim Armstrong

> The cardinality associated with a SelectNode is 0 even if its child node has 
> a non-zero cardinality
> ---
>
> Key: IMPALA-8647
> URL: https://issues.apache.org/jira/browse/IMPALA-8647
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Fang-Yu Rao
>Assignee: Tim Armstrong
>Priority: Minor
>
> Consider the following nested SQL statement.
> {code:java}
> EXPLAIN SELECT *
> FROM functional_parquet.alltypessmall WHERE 1 IN
> (SELECT int_col FROM functional_parquet.alltypestiny LIMIT 1);
> {code}
> It seems that the cardinality of the SelectNode should be 1 instead of 0. 
> Specifically, if we had executed
> {code:java}
> compute stats functional_parquet.alltypestiny{code}
> before issuing this SQL statement, the returned 
> cardinality of this SelectNode would be 1 instead of 0. It is not clear 
> whether this is a bug. It looks like the cardinality of a SelectNode depends on whether 
> there is stats information associated with its child node. The cardinality of 
> a SelectNode would still be 0 even if its child node (ExchangeNode in this 
> case) has a non-zero cardinality.






[jira] [Commented] (IMPALA-3926) Reconsider use of LD_LIBRARY_PATH for toolchain libraries

2019-09-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934772#comment-16934772
 ] 

ASF subversion and git services commented on IMPALA-3926:
-

Commit 9c367968d00be3e1880c07494dd82ecf077e7ec0 in impala's branch 
refs/heads/master from Tim Armstrong
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=9c36796 ]

IMPALA-3926: part 1: bump toolchain for rpath fixes

This bumps the toolchain to a version that sets up relative
rpaths correctly, removing the need to set LD_LIBRARY_PATH
when calling binaries or using shared objects from the toolchain.

rpaths embedded in the binaries and shared objects take precedence
over LD_LIBRARY_PATH, so the new binaries will link against the
shared libraries from the toolchain, as expected.

This does not attempt to clean up LD_LIBRARY_PATH yet -
that would not work for developers who have older versions
of the binaries on their system. We will do that later,
but give devs a window to switch over to the new toolchain.

Change-Id: I2abe513ebf4f459aa3e0c65c77ce7be8b8ca1221
Reviewed-on: http://gerrit.cloudera.org:8080/14274
Tested-by: Impala Public Jenkins 
Reviewed-by: Joe McDonnell 


> Reconsider use of LD_LIBRARY_PATH for toolchain libraries
> -
>
> Key: IMPALA-3926
> URL: https://issues.apache.org/jira/browse/IMPALA-3926
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 2.6.0
>Reporter: Matthew Jacobs
>Assignee: Tim Armstrong
>Priority: Major
>  Labels: build, toolchain
>
> Right now, impala-config.sh puts a lot of libraries in LD_LIBRARY_PATH, but 
> this can be a problem for binaries that aren't from our builds or explicitly 
> built against these specific libraries. One solution is to move any tools we 
> need into the toolchain and build against these libraries. While this may be 
> a reasonable thing to do (i.e. moving all tools we need into the toolchain), 
> we should consider if setting LD_LIBRARY_PATH for the whole Impala 
> environment is really necessary and the right thing to do (e.g. [some people 
> say using LD_LIBRARY_PATH is 
> bad|http://xahlee.info/UnixResource_dir/_/ldpath.html]).
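One per-invocation workaround for the problem described, binaries outside our builds accidentally picking up toolchain libraries, is to scrub the variable from the child environment. This is an illustrative sketch, not project tooling:

```python
import os
import subprocess
import sys

def run_without_toolchain_ldpath(cmd):
    """Run a command with LD_LIBRARY_PATH removed from its environment,
    so a system binary does not link against toolchain shared libraries
    that happen to be on the path (illustrative workaround only)."""
    env = {k: v for k, v in os.environ.items() if k != "LD_LIBRARY_PATH"}
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Example: run a Python one-liner in the scrubbed environment.
result = run_without_toolchain_ldpath([sys.executable, "-c", "print('ok')"])
```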






[jira] [Resolved] (IMPALA-8858) Add metrics to improve observability of idle executor groups

2019-09-20 Thread Bikramjeet Vig (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikramjeet Vig resolved IMPALA-8858.

Fix Version/s: Impala 3.4.0
   Resolution: Fixed

>  Add metrics to improve observability of idle executor groups
> -
>
> Key: IMPALA-8858
> URL: https://issues.apache.org/jira/browse/IMPALA-8858
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Bikramjeet Vig
>Priority: Major
> Fix For: Impala 3.4.0
>
>
> The metrics provided in [IMPALA-8806] are useful but there is not a way to 
> see how many executor groups are idle. 
> Please add a metric which measures the number of executor groups that are 
> both healthy and idle.
> It is OK for this to be 0 for a short time, for example when an executor 
> group is created.
> This should be part of the new "cluster-membership" metric group.
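A minimal sketch of the value such a metric would report, assuming a hypothetical in-memory view of executor groups (the class and function names here are invented for illustration; the real metric belongs to Impala's cluster-membership code):

```python
from dataclasses import dataclass

@dataclass
class ExecutorGroup:
    # Hypothetical stand-in for an entry in cluster-membership state.
    healthy: bool
    num_running_queries: int

def healthy_idle_group_count(groups):
    """The value the requested metric would report: executor groups that
    are both healthy and currently running no queries."""
    return sum(1 for g in groups if g.healthy and g.num_running_queries == 0)

groups = [ExecutorGroup(True, 0), ExecutorGroup(True, 3), ExecutorGroup(False, 0)]
assert healthy_idle_group_count(groups) == 1  # only the first group qualifies
```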







[jira] [Assigned] (IMPALA-4050) Support starting webserver specified by hostname

2019-09-20 Thread Thomas Tauber-Marshall (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-4050:
--

Assignee: Thomas Tauber-Marshall  (was: hewenting)

> Support starting webserver specified by hostname
> 
>
> Key: IMPALA-4050
> URL: https://issues.apache.org/jira/browse/IMPALA-4050
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend
>Affects Versions: Impala 2.7.0
> Environment: LSB Version: 
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> Distributor ID:   CentOS
> Description:  CentOS release 6.7 (Final)
> Release:  6.7
> Codename: Final
>Reporter: hewenting
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: impala, webserver
>
> Starting the Impala service fails when the webserver_interface argument is 
> set to a hostname.
> My command: ./bin/start-impala-cluster.py -s 1 --impalad_args 
> '-webserver_interface={color:red}nobida141{color}'.
> Log displayed on the terminal:
> {noformat}
> MainThread: Found 1 impalad/1 statestored/1 catalogd process(es)
> MainThread: Getting num_known_live_backends from nobida141:25000
> MainThread: Debug webpage not yet available.
> MainThread: Debug webpage not yet available.
> ...
> MainThread: Debug webpage did not become available in expected time.
> MainThread: Waiting for num_known_live_backends=1. Current value: None
> Error starting cluster: num_known_live_backends did not reach expected value 
> in time
> {noformat}
> Log from impalad.INFO:
> {noformat}
> I0831 13:49:11.234997 419171 webserver.cc:365] Webserver: set_ports_option: 
> nobida141:25000: invalid port spec. Expecting list of: [IP_ADDRESS:]PORT[s|r]
> I0831 13:49:11.334241 419171 status.cc:114] Webserver: Could not start on 
> address nobida141:25000
> @  0x11b4e29  impala::Status::Status()
> @  0x15fef7f  impala::Webserver::Start()
> @  0x1348f8b  impala::ExecEnv::StartServices()
> @  0x1534a59  ImpaladMain()
> @  0x115ec40  main
> @   0x3f8981ed5d  (unknown)
> @  0x115ea4d  (unknown)
> E0831 13:49:11.334287 419171 impalad-main.cc:87] Impalad services did not 
> start correctly, exiting.  Error: Webserver: Could not start on address 
> nobida141:25000
> {noformat}
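The log above shows the old port-spec parser accepted only `[IP_ADDRESS:]PORT`, so one workaround before the fix was to resolve the hostname to an IP yourself. A hedged sketch (`webserver_bind_spec` is an invented helper, not part of Impala):

```python
import socket

def webserver_bind_spec(interface, port):
    """Turn a hostname into the [IP_ADDRESS:]PORT form the old webserver
    port-spec parser accepted, by resolving it to an IPv4 address first."""
    ip = socket.gethostbyname(interface)   # e.g. 'localhost' -> '127.0.0.1'
    return f"{ip}:{port}"
```

The resolved IP can then be passed in place of the hostname when starting the daemons.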






[jira] [Resolved] (IMPALA-4050) Support starting webserver specified by hostname

2019-09-20 Thread Thomas Tauber-Marshall (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-4050.

Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> Support starting webserver specified by hostname
> 
>
> Key: IMPALA-4050
> URL: https://issues.apache.org/jira/browse/IMPALA-4050
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend
>Affects Versions: Impala 2.7.0
> Environment: LSB Version: 
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> Distributor ID:   CentOS
> Description:  CentOS release 6.7 (Final)
> Release:  6.7
> Codename: Final
>Reporter: hewenting
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: impala, webserver
> Fix For: Impala 3.4.0
>
>
> Starting the Impala service fails when the webserver_interface argument is 
> set to a hostname.
> My command: ./bin/start-impala-cluster.py -s 1 --impalad_args 
> '-webserver_interface={color:red}nobida141{color}'.
> Log displayed on the terminal:
> {noformat}
> MainThread: Found 1 impalad/1 statestored/1 catalogd process(es)
> MainThread: Getting num_known_live_backends from nobida141:25000
> MainThread: Debug webpage not yet available.
> MainThread: Debug webpage not yet available.
> ...
> MainThread: Debug webpage did not become available in expected time.
> MainThread: Waiting for num_known_live_backends=1. Current value: None
> Error starting cluster: num_known_live_backends did not reach expected value 
> in time
> {noformat}
> Log from impalad.INFO:
> {noformat}
> I0831 13:49:11.234997 419171 webserver.cc:365] Webserver: set_ports_option: 
> nobida141:25000: invalid port spec. Expecting list of: [IP_ADDRESS:]PORT[s|r]
> I0831 13:49:11.334241 419171 status.cc:114] Webserver: Could not start on 
> address nobida141:25000
> @  0x11b4e29  impala::Status::Status()
> @  0x15fef7f  impala::Webserver::Start()
> @  0x1348f8b  impala::ExecEnv::StartServices()
> @  0x1534a59  ImpaladMain()
> @  0x115ec40  main
> @   0x3f8981ed5d  (unknown)
> @  0x115ea4d  (unknown)
> E0831 13:49:11.334287 419171 impalad-main.cc:87] Impalad services did not 
> start correctly, exiting.  Error: Webserver: Could not start on address 
> nobida141:25000
> {noformat}







[jira] [Assigned] (IMPALA-4057) Start webserver with interface"127.0.0.1" failed.

2019-09-20 Thread Thomas Tauber-Marshall (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-4057:
--

Assignee: Thomas Tauber-Marshall  (was: hewenting)

> Start webserver with interface"127.0.0.1" failed.
> -
>
> Key: IMPALA-4057
> URL: https://issues.apache.org/jira/browse/IMPALA-4057
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 2.7.0
> Environment: bash-4.1$ lsb_release -a
> LSB Version:  
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> Distributor ID:   CentOS
> Description:  CentOS release 6.7 (Final)
> Release:  6.7
> Codename: Final
> bash-4.1$
>Reporter: hewenting
>Assignee: Thomas Tauber-Marshall
>Priority: Trivial
>  Labels: impala, webserver
> Fix For: Impala 3.4.0
>
>
> Starting Impala with the option -webserver_interface=127.0.0.1 failed.
> Log displayed on the terminal:
> {noformat}
> bash-4.1$ ./bin/start-impala-cluster.py -s 1 --impalad_args 
> "-webserver_interface=127.0.0.1"
> Starting State Store logging to 
> /home/impala/incubator-impala/logs/cluster/statestored.INFO
> Starting Catalog Service logging to 
> /home/impala/incubator-impala/logs/cluster/catalogd.INFO
> Starting Impala Daemon logging to 
> /home/impala/incubator-impala/logs/cluster/impalad.INFO
> MainThread: Found 1 impalad/1 statestored/1 catalogd process(es)
> MainThread: Getting num_known_live_backends from nobida141:25000
> MainThread: Debug webpage not yet available.
> ...
> MainThread: Debug webpage not yet available.
> MainThread: Debug webpage did not become available in expected time.
> MainThread: Waiting for num_known_live_backends=1. Current value: None
> Error starting cluster: num_known_live_backends did not reach expected value 
> in time
> {noformat}






[jira] [Resolved] (IMPALA-4057) Start webserver with interface"127.0.0.1" failed.

2019-09-20 Thread Thomas Tauber-Marshall (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-4057.

Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> Start webserver with interface"127.0.0.1" failed.
> -
>
> Key: IMPALA-4057
> URL: https://issues.apache.org/jira/browse/IMPALA-4057
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 2.7.0
> Environment: bash-4.1$ lsb_release -a
> LSB Version:  
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> Distributor ID:   CentOS
> Description:  CentOS release 6.7 (Final)
> Release:  6.7
> Codename: Final
> bash-4.1$
>Reporter: hewenting
>Assignee: hewenting
>Priority: Trivial
>  Labels: impala, webserver
> Fix For: Impala 3.4.0
>
>
> Starting Impala with the option -webserver_interface=127.0.0.1 failed.
> Log displayed on the terminal:
> {noformat}
> bash-4.1$ ./bin/start-impala-cluster.py -s 1 --impalad_args 
> "-webserver_interface=127.0.0.1"
> Starting State Store logging to 
> /home/impala/incubator-impala/logs/cluster/statestored.INFO
> Starting Catalog Service logging to 
> /home/impala/incubator-impala/logs/cluster/catalogd.INFO
> Starting Impala Daemon logging to 
> /home/impala/incubator-impala/logs/cluster/impalad.INFO
> MainThread: Found 1 impalad/1 statestored/1 catalogd process(es)
> MainThread: Getting num_known_live_backends from nobida141:25000
> MainThread: Debug webpage not yet available.
> ...
> MainThread: Debug webpage not yet available.
> MainThread: Debug webpage did not become available in expected time.
> MainThread: Waiting for num_known_live_backends=1. Current value: None
> Error starting cluster: num_known_live_backends did not reach expected value 
> in time
> {noformat}







[jira] [Commented] (IMPALA-4057) Start webserver with interface"127.0.0.1" failed.

2019-09-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934749#comment-16934749
 ] 

ASF subversion and git services commented on IMPALA-4057:
-

Commit 16e0d550c28a59510fb152d1d2ad32ac9dac741c in impala's branch 
refs/heads/master from Thomas Tauber-Marshall
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=16e0d55 ]

IMPALA-4057, IMPALA-4050: Fix --webserver_interface

This patch fixes two issues with --webserver_interface:
- When --webserver_interface was passed to start-impala-cluster.py with a
  value that's different from --hostname, minicluster startup would
  appear to fail as liveness is determined by checking for the webui's
  availability at the address specified for --hostname.
- The value of --webserver_interface was applied correctly for the
  catalogd and statestored but not for impalads, due to the way
  ExecEnv constructed the Webserver.
- It is now possible to specify a hostname for webserver_interface
  instead of an IP. The webserver will resolve the hostname.

This patch also upgrades our version of psutil to the latest for the
function 'net_if_addrs'. This requires a few changes to our use of
psutil, mostly adding '()' to call functions that previously were
variables.

Testing:
- Added a custom cluster test that finds all available interfaces,
  binds the webserver to one of them, and checks that it's only
  available over that interface.

Change-Id: Ic7e75908426756d73f13a0fa3cfc21fc31da164c
Reviewed-on: http://gerrit.cloudera.org:8080/14266
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
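The liveness check the commit message refers to can be sketched with the standard library. This is a simplified illustration: `webui_available` is an invented name, and the real check lives in start-impala-cluster.py:

```python
import http.server
import threading
import urllib.error
import urllib.request

def webui_available(host, port, timeout=1.0):
    """Liveness probe in the spirit of the minicluster's check: is an
    HTTP server answering at this interface and port?"""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

# Demo: a throwaway server bound to 127.0.0.1 only, on an ephemeral port.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
alive = webui_available("127.0.0.1", port)
server.shutdown()
```

Probing the wrong interface (or a dead server) would return False, which is how the startup script decides the debug webpage "did not become available".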


> Start webserver with interface"127.0.0.1" failed.
> -
>
> Key: IMPALA-4057
> URL: https://issues.apache.org/jira/browse/IMPALA-4057
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 2.7.0
> Environment: bash-4.1$ lsb_release -a
> LSB Version:  
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> Distributor ID:   CentOS
> Description:  CentOS release 6.7 (Final)
> Release:  6.7
> Codename: Final
> bash-4.1$
>Reporter: hewenting
>Assignee: hewenting
>Priority: Trivial
>  Labels: impala, webserver
>
> Starting Impala with the option -webserver_interface=127.0.0.1 failed.
> Log displayed on the terminal:
> {noformat}
> bash-4.1$ ./bin/start-impala-cluster.py -s 1 --impalad_args 
> "-webserver_interface=127.0.0.1"
> Starting State Store logging to 
> /home/impala/incubator-impala/logs/cluster/statestored.INFO
> Starting Catalog Service logging to 
> /home/impala/incubator-impala/logs/cluster/catalogd.INFO
> Starting Impala Daemon logging to 
> /home/impala/incubator-impala/logs/cluster/impalad.INFO
> MainThread: Found 1 impalad/1 statestored/1 catalogd process(es)
> MainThread: Getting num_known_live_backends from nobida141:25000
> MainThread: Debug webpage not yet available.
> ...
> MainThread: Debug webpage not yet available.
> MainThread: Debug webpage did not become available in expected time.
> MainThread: Waiting for num_known_live_backends=1. Current value: None
> Error starting cluster: num_known_live_backends did not reach expected value 
> in time
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-4050) Support starting webserver specified by hostname

2019-09-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934750#comment-16934750
 ] 

ASF subversion and git services commented on IMPALA-4050:
-

Commit 16e0d550c28a59510fb152d1d2ad32ac9dac741c in impala's branch 
refs/heads/master from Thomas Tauber-Marshall
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=16e0d55 ]

IMPALA-4057, IMPALA-4050: Fix --webserver_interface

This patch fixes two issues with --webserver_interface:
- When --webserver_interface was passed to start-impala-cluster.py with a
  value that's different from --hostname, minicluster startup would
  appear to fail as liveness is determined by checking for the webui's
  availability at the address specified for --hostname.
- The value of --webserver_interface was applied correctly for the
  catalogd and statestored but not for impalads, due to the way
  ExecEnv constructed the Webserver.
- It is now possible to specify a hostname for webserver_interface
  instead of an IP. The webserver will resolve the hostname.

This patch also upgrades our version of psutil to the latest for the
function 'net_if_addrs'. This requires a few change to our use of
psutil, mostly adding '()' to call functions that previously were
variables.

Testing:
- Added a custom cluster test that finds all available interfaces,
  binds the webserver to one of them, and checks that it's only
  available over that interface.

Change-Id: Ic7e75908426756d73f13a0fa3cfc21fc31da164c
Reviewed-on: http://gerrit.cloudera.org:8080/14266
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
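
The hostname resolution this patch adds can be sketched roughly as below. This is a hedged, minimal sketch: `ResolveBindAddress` is a hypothetical helper name, not Impala's actual code, and it simplifies to IPv4 only.

```cpp
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <cstring>
#include <string>

// Resolve a hostname (or pass through a literal IP) to the IPv4 address
// string the webserver can bind to. Returns an empty string on failure.
std::string ResolveBindAddress(const std::string& iface) {
  addrinfo hints;
  std::memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_INET;        // IPv4 only, as a simplification
  hints.ai_socktype = SOCK_STREAM;

  addrinfo* result = nullptr;
  if (getaddrinfo(iface.c_str(), nullptr, &hints, &result) != 0) return "";

  // Take the first result and format its address as a dotted quad.
  char buf[INET_ADDRSTRLEN];
  auto* addr = reinterpret_cast<sockaddr_in*>(result->ai_addr);
  const char* s = inet_ntop(AF_INET, &addr->sin_addr, buf, sizeof(buf));
  std::string ip = (s != nullptr) ? s : "";
  freeaddrinfo(result);
  return ip;
}
```

Binding to the resolved IP rather than the raw string avoids the "invalid port spec" failure seen in the original report, since the embedded webserver only accepts `[IP_ADDRESS:]PORT` specs.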


> Support starting webserver specified by hostname
> 
>
> Key: IMPALA-4050
> URL: https://issues.apache.org/jira/browse/IMPALA-4050
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend
>Affects Versions: Impala 2.7.0
> Environment: LSB Version: 
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> Distributor ID:   CentOS
> Description:  CentOS release 6.7 (Final)
> Release:  6.7
> Codename: Final
>Reporter: hewenting
>Assignee: hewenting
>Priority: Major
>  Labels: impala, webserver
>
> Starting the Impala service failed when the impalad argument 
> webserver_interface was set to a hostname.
> my command: ./bin/start-impala-cluster.py -s 1 --impalad_args 
> '-webserver_interface={color:red}nobida141{color}'.
> log displayed on terminal:
> {noformat}
> MainThread: Found 1 impalad/1 statestored/1 catalogd process(es)
> MainThread: Getting num_known_live_backends from nobida141:25000
> MainThread: Debug webpage not yet available.
> MainThread: Debug webpage not yet available.
> ...
> MainThread: Debug webpage did not become available in expected time.
> MainThread: Waiting for num_known_live_backends=1. Current value: None
> Error starting cluster: num_known_live_backends did not reach expected value 
> in time
> {noformat}
> log on file Impalad.INFO:
> {noformat}
> I0831 13:49:11.234997 419171 webserver.cc:365] Webserver: set_ports_option: 
> nobida141:25000: invalid port spec. Expecting list of: [IP_ADDRESS:]PORT[s|r]
> I0831 13:49:11.334241 419171 status.cc:114] Webserver: Could not start on 
> address nobida141:25000
> @  0x11b4e29  impala::Status::Status()
> @  0x15fef7f  impala::Webserver::Start()
> @  0x1348f8b  impala::ExecEnv::StartServices()
> @  0x1534a59  ImpaladMain()
> @  0x115ec40  main
> @   0x3f8981ed5d  (unknown)
> @  0x115ea4d  (unknown)
> E0831 13:49:11.334287 419171 impalad-main.cc:87] Impalad services did not 
> start correctly, exiting.  Error: Webserver: Could not start on address 
> nobida141:25000
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8858) Add metrics to improve observability of idle executor groups

2019-09-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934748#comment-16934748
 ] 

ASF subversion and git services commented on IMPALA-8858:
-

Commit d99bc14936ff4891a8d932e9d6ff63f96d156e7f in impala's branch 
refs/heads/master from Bikramjeet Vig
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=d99bc14 ]

IMPALA-8858: Add metrics tracking num queries running on executor groups

With this patch, all executor groups with at least one executor
will have a metric added that displays the number of queries
(admitted by the local coordinator) running on them. The metric
is removed only when the group has no executors in it. It gets updated
when either the cluster membership changes or a query gets admitted or
released by the admission controller. Also adds the ability to delete
metrics from a metric group after registration.

Testing:
- Added a custom cluster test and a BE metric test.
- Had to modify some metric tests that relied on ordering of metrics by
their name.

Change-Id: I58cde8699c33af8b87273437e9d8bf6371a34539
Reviewed-on: http://gerrit.cloudera.org:8080/14103
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


>  Add metrics to improve observability of idle executor groups
> -
>
> Key: IMPALA-8858
> URL: https://issues.apache.org/jira/browse/IMPALA-8858
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Bikramjeet Vig
>Priority: Major
>
> The metrics provided in [IMPALA-8806] are useful but there is not a way to 
> see how many executor groups are idle. 
> Please add a metric which measures the number of executor groups that are 
> both healthy and idle.
> It is OK for this to be 0 for a short time, for example when an executor 
> group is created.
> This should be part of the new "cluster-membership" metric group.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-6994) Avoid reloading a table's HMS data for file-only operations

2019-09-20 Thread Tim Armstrong (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong reassigned IMPALA-6994:
-

Assignee: (was: Pranay Singh)

> Avoid reloading a table's HMS data for file-only operations
> ---
>
> Key: IMPALA-6994
> URL: https://issues.apache.org/jira/browse/IMPALA-6994
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Catalog
>Affects Versions: Impala 2.12.0
>Reporter: Balazs Jeszenszky
>Priority: Major
>
> Reloading file metadata for HDFS tables (e.g. as a final step in an 'insert') 
> is done via
> https://github.com/apache/impala/blob/branch-2.12.0/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java#L628
> , which calls
> https://github.com/apache/impala/blob/branch-2.12.0/fe/src/main/java/org/apache/impala/catalog/HdfsTable.java#L1243
> HdfsTable.load has no option to only load file metadata. HMS metadata will 
> also be reloaded every time, which is an unnecessary overhead (and potential 
> point of failure) when adding files to existing locations.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8962) FETCH_ROWS_TIMEOUT_MS should apply before rows are available

2019-09-20 Thread Sahil Takiar (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934724#comment-16934724
 ] 

Sahil Takiar commented on IMPALA-8962:
--

Another option would be to make {{FETCH_ROWS_TIMEOUT_MS}} apply across the call 
to {{ClientRequestState::BlockOnWait}} and {{PlanRootSink::GetNext}}, although 
that might be a bit messier to implement.

> FETCH_ROWS_TIMEOUT_MS should apply before rows are available
> 
>
> Key: IMPALA-8962
> URL: https://issues.apache.org/jira/browse/IMPALA-8962
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> IMPALA-7312 added a fetch timeout controlled by the query option 
> {{FETCH_ROWS_TIMEOUT_MS}}. The issue is that the timeout only applies after 
> the *first* batch of rows is available: both Beeswax and 
> HS2 clients call {{request_state->BlockOnWait}} inside 
> {{ImpalaServer::FetchInternal}}, and {{BlockOnWait}} blocks until 
> rows are ready to be consumed via {{ClientRequestState::FetchRows}}.
> So clients can still end up blocking indefinitely waiting for the first row 
> batch to appear.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8962) FETCH_ROWS_TIMEOUT_MS should apply before rows are available

2019-09-20 Thread Sahil Takiar (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934703#comment-16934703
 ] 

Sahil Takiar commented on IMPALA-8962:
--

Not sure of the best way to do this, some notes:
 * Not sure if it is safe to remove the {{BlockOnWait}} call entirely and just 
let the {{PlanRootSink}} block
 * Modify {{BlockOnWait}} to take a timeout, and use the same value as 
{{FETCH_ROWS_TIMEOUT_MS}}
 * Same as above, but use a dedicated config like 
{{FETCH_WAIT_FINISHED_TIMEOUT_MS}} (the amount of time fetch requests wait for 
the query to transition to the 'FINISHED' state before returning)
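
A timeout-taking {{BlockOnWait}} (the second option above) can be sketched with a condition variable. This is an illustrative sketch only; the class and member names are hypothetical, not Impala's actual signatures.

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Sketch of a BlockOnWait-style call that gives up after 'timeout_ms'
// instead of blocking indefinitely for the first row batch.
class RequestState {
 public:
  // Returns true if rows became available before the timeout elapsed.
  bool BlockOnWait(int64_t timeout_ms) {
    std::unique_lock<std::mutex> lock(mu_);
    return cv_.wait_for(lock, std::chrono::milliseconds(timeout_ms),
                        [this] { return rows_available_; });
  }

  // Called by the producer side once the first row batch is ready.
  void NotifyRowsAvailable() {
    {
      std::lock_guard<std::mutex> lock(mu_);
      rows_available_ = true;
    }
    cv_.notify_all();
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  bool rows_available_ = false;
};
```

With this shape, the fetch path could pass in the remaining {{FETCH_ROWS_TIMEOUT_MS}} budget (or a dedicated option's value) and return an empty batch to the client when the wait returns false.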

> FETCH_ROWS_TIMEOUT_MS should apply before rows are available
> 
>
> Key: IMPALA-8962
> URL: https://issues.apache.org/jira/browse/IMPALA-8962
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> IMPALA-7312 added a fetch timeout controlled by the query option 
> {{FETCH_ROWS_TIMEOUT_MS}}. The issue is that the timeout only applies after 
> the *first* batch of rows is available: both Beeswax and 
> HS2 clients call {{request_state->BlockOnWait}} inside 
> {{ImpalaServer::FetchInternal}}, and {{BlockOnWait}} blocks until 
> rows are ready to be consumed via {{ClientRequestState::FetchRows}}.
> So clients can still end up blocking indefinitely waiting for the first row 
> batch to appear.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8962) FETCH_ROWS_TIMEOUT_MS should apply before rows are available

2019-09-20 Thread Sahil Takiar (Jira)
Sahil Takiar created IMPALA-8962:


 Summary: FETCH_ROWS_TIMEOUT_MS should apply before rows are 
available
 Key: IMPALA-8962
 URL: https://issues.apache.org/jira/browse/IMPALA-8962
 Project: IMPALA
  Issue Type: Bug
  Components: Clients
Reporter: Sahil Takiar
Assignee: Sahil Takiar


IMPALA-7312 added a fetch timeout controlled by the query option 
{{FETCH_ROWS_TIMEOUT_MS}}. The issue is that the timeout only applies after the 
*first* batch of rows is available: both Beeswax and HS2 
clients call {{request_state->BlockOnWait}} inside 
{{ImpalaServer::FetchInternal}}, and {{BlockOnWait}} blocks until rows 
are ready to be consumed via {{ClientRequestState::FetchRows}}.

So clients can still end up blocking indefinitely waiting for the first row 
batch to appear.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org




[jira] [Commented] (IMPALA-8916) Fix auto-refresh/manual refresh interaction on webui

2019-09-20 Thread Tim Armstrong (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934685#comment-16934685
 ] 

Tim Armstrong commented on IMPALA-8916:
---

I use firefox so I can probably take a look at some point

> Fix auto-refresh/manual refresh interaction on webui
> 
>
> Key: IMPALA-8916
> URL: https://issues.apache.org/jira/browse/IMPALA-8916
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.4.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Tim Armstrong
>Priority: Major
>
> While clicking around the webui making sure everything works for the Knox 
> integration, I discovered a bug where manually refreshing a page when 
> auto-refresh is turned off leaves the auto-refresh checkbox unchecked but 
> turns auto-refresh back on, as we don't check the value of the checkbox on 
> page load but always just start with auto-refresh on.
> When this happens, it actually makes it very difficult to turn auto-refresh 
> back off, since if you check and then uncheck the box, it will start and then 
> stop a new refresh interval while leaving the interval that was started on 
> page load running. The workaround would be to check the box, manually refresh 
> the page again, and then uncheck it (or navigate away from the page and then 
> go back).
> The fix is probably to check the value of the checkbox on page load and 
> disable auto-refresh if it's unchecked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-8916) Fix auto-refresh/manual refresh interaction on webui

2019-09-20 Thread Tim Armstrong (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong reassigned IMPALA-8916:
-

Assignee: Tim Armstrong

> Fix auto-refresh/manual refresh interaction on webui
> 
>
> Key: IMPALA-8916
> URL: https://issues.apache.org/jira/browse/IMPALA-8916
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.4.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Tim Armstrong
>Priority: Major
>
> While clicking around the webui making sure everything works for the Knox 
> integration, I discovered a bug where manually refreshing a page when 
> auto-refresh is turned off leaves the auto-refresh checkbox unchecked but 
> turns auto-refresh back on, as we don't check the value of the checkbox on 
> page load but always just start with auto-refresh on.
> When this happens, it actually makes it very difficult to turn auto-refresh 
> back off, since if you check and then uncheck the box, it will start and then 
> stop a new refresh interval while leaving the interval that was started on 
> page load running. The workaround would be to check the box, manually refresh 
> the page again, and then uncheck it (or navigate away from the page and then 
> go back).
> The fix is probably to check the value of the checkbox on page load and 
> disable auto-refresh if it's unchecked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-8916) Fix auto-refresh/manual refresh interaction on webui

2019-09-20 Thread Thomas Tauber-Marshall (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-8916:
--

Assignee: (was: Thomas Tauber-Marshall)

> Fix auto-refresh/manual refresh interaction on webui
> 
>
> Key: IMPALA-8916
> URL: https://issues.apache.org/jira/browse/IMPALA-8916
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.4.0
>Reporter: Thomas Tauber-Marshall
>Priority: Major
>
> While clicking around the webui making sure everything works for the Knox 
> integration, I discovered a bug where manually refreshing a page when 
> auto-refresh is turned off leaves the auto-refresh checkbox unchecked but 
> turns auto-refresh back on, as we don't check the value of the checkbox on 
> page load but always just start with auto-refresh on.
> When this happens, it actually makes it very difficult to turn auto-refresh 
> back off, since if you check and then uncheck the box, it will start and then 
> stop a new refresh interval while leaving the interval that was started on 
> page load running. The workaround would be to check the box, manually refresh 
> the page again, and then uncheck it (or navigate away from the page and then 
> go back).
> The fix is probably to check the value of the checkbox on page load and 
> disable auto-refresh if it's unchecked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8916) Fix auto-refresh/manual refresh interaction on webui

2019-09-20 Thread Thomas Tauber-Marshall (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934656#comment-16934656
 ] 

Thomas Tauber-Marshall commented on IMPALA-8916:


https://gerrit.cloudera.org/#/c/14174/

> Fix auto-refresh/manual refresh interaction on webui
> 
>
> Key: IMPALA-8916
> URL: https://issues.apache.org/jira/browse/IMPALA-8916
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.4.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>
> While clicking around the webui making sure everything works for the Knox 
> integration, I discovered a bug where manually refreshing a page when 
> auto-refresh is turned off leaves the auto-refresh checkbox unchecked but 
> turns auto-refresh back on, as we don't check the value of the checkbox on 
> page load but always just start with auto-refresh on.
> When this happens, it actually makes it very difficult to turn auto-refresh 
> back off, since if you check and then uncheck the box, it will start and then 
> stop a new refresh interval while leaving the interval that was started on 
> page load running. The workaround would be to check the box, manually refresh 
> the page again, and then uncheck it (or navigate away from the page and then 
> go back).
> The fix is probably to check the value of the checkbox on page load and 
> disable auto-refresh if it's unchecked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-7087) Impala is unable to read Parquet decimal columns with lower precision/scale than table metadata

2019-09-20 Thread Sahil Takiar (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934637#comment-16934637
 ] 

Sahil Takiar commented on IMPALA-7087:
--

This was partially implemented here: [https://gerrit.cloudera.org/#/c/12163/]
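
For the common case where only the storage width differs (e.g. the file stores the decimal in 8 bytes but the table schema expects 11, as in the error quoted in this issue), the on-the-fly conversion amounts to sign-extending the big-endian two's-complement value. The sketch below is a hedged illustration under that assumption, not Impala's actual Parquet reader code; it presumes the scale already matches and only precision/width differs.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Sign-extend a big-endian two's-complement FIXED_LEN_BYTE_ARRAY decimal
// from 'src_len' bytes to the wider 'dst_len' bytes the table expects.
std::vector<uint8_t> WidenDecimal(const uint8_t* src, size_t src_len,
                                  size_t dst_len) {
  std::vector<uint8_t> out(dst_len);
  // The high bit of the leading byte gives the sign in two's complement,
  // so pad with 0xFF for negative values and 0x00 for non-negative ones.
  uint8_t fill = (src[0] & 0x80) ? 0xFF : 0x00;
  size_t pad = dst_len - src_len;
  std::memset(out.data(), fill, pad);
  std::memcpy(out.data() + pad, src, src_len);
  return out;
}
```

This widening is lossless, which matches the issue's goal of allowing reads whenever the file's precision/scale can be converted to the table's without loss.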

> Impala is unable to read Parquet decimal columns with lower precision/scale 
> than table metadata
> ---
>
> Key: IMPALA-7087
> URL: https://issues.apache.org/jira/browse/IMPALA-7087
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Tim Armstrong
>Assignee: Sahil Takiar
>Priority: Major
>  Labels: decimal, parquet
>
> This is similar to IMPALA-2515, except relates to a different precision/scale 
> in the file metadata rather than just a mismatch in the bytes used to store 
> the data. In a lot of cases we should be able to convert the decimal type on 
> the fly to the higher-precision type.
> {noformat}
> ERROR: File '/hdfs/path/00_0_x_2' column 'alterd_decimal' has an invalid 
> type length. Expecting: 11 len in file: 8
> {noformat}
> It would be convenient to allow reading parquet files where the 
> precision/scale in the file can be converted to the precision/scale in the 
> table metadata without loss of precision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-8761) Configuration validation introduced in IMPALA-8559 can be improved

2019-09-20 Thread Anurag Mantripragada (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Mantripragada closed IMPALA-8761.

Resolution: Fixed

> Configuration validation introduced in IMPALA-8559 can be improved
> --
>
> Key: IMPALA-8761
> URL: https://issues.apache.org/jira/browse/IMPALA-8761
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Assignee: Anurag Mantripragada
>Priority: Major
>
> The issue with configuration validation in IMPALA-8559 is that it validates 
> one configuration at a time and fails as soon as there is a validation error. 
> Since there is more than one configuration key to validate, the user may have 
> to restart HMS again and again if multiple configuration changes are needed. 
> This is not a great user experience. A simple improvement would be to perform 
> all the configuration validations together and then present the results 
> together in case of failures, so that the user can make all the required 
> changes in one go.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org




[jira] [Commented] (IMPALA-8958) OutOfMemoryError : Compressed class space

2019-09-20 Thread Tim Armstrong (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934549#comment-16934549
 ] 

Tim Armstrong commented on IMPALA-8958:
---

[~hqbhoho] I don't think we have enough details about your UDF or workload to 
even attempt to reproduce. You might need to try some JVM tuning - there appear 
to be some options that affect the space reserved for classes - 
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/considerations.html
 - MaxMetaspaceSize and CompressedClassSpaceSize

It could possibly be another symptom of IMPALA-7668, but hard to know.

> OutOfMemoryError : Compressed class space
> -
>
> Key: IMPALA-8958
> URL: https://issues.apache.org/jira/browse/IMPALA-8958
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 2.11.0
>Reporter: hqbhoho
>Priority: Major
> Attachments: f39df03636aab574edf387906cfd5f8.png
>
>
> When I use Impala with a Hive UDF:
> impala-shell -k -i worker01: I can use the UDF;
> impala-shell -k -i worker02: I can't use the UDF, and an
> OutOfMemoryError : Compressed class space occurs.
> After I restart the impalad on worker02, it works again,
> but after a while I can't use the UDF on worker02 anymore.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8831) Modify ALTER TABLE to handle insert-only transactional tables

2019-09-20 Thread Csaba Ringhofer (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934386#comment-16934386
 ] 

Csaba Ringhofer commented on IMPALA-8831:
-

I think that the main modification will be to start a transaction and lock the 
table (in most cases with an exclusive lock).
Hive decides the type of lock needed depending on the ALTER TABLE's type here: 
https://github.infra.cloudera.com/CDH/hive/blob/cd21beb361785db8a8eda40be56087e06c3554c1/ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java#L198

The only commands that need to call a different API for transactional tables are 
the (manual) stats updates. They need a writeId + a validWriteId list in the 
API call.

> Modify ALTER TABLE to handle insert-only transactional tables
> -
>
> Key: IMPALA-8831
> URL: https://issues.apache.org/jira/browse/IMPALA-8831
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 3.3.0
>Reporter: Gabor Kaszab
>Priority: Critical
>  Labels: ACID, impala-acid
>
> I haven't investigated this topic much, so the first part of this 
> Jira is to research which ALTER TABLE commands are affected.
> Once we have a list of those, the second part is to modify them to handle 
> transactional tables properly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8951) Support ALTER TABLE for ACID tables

2019-09-20 Thread Csaba Ringhofer (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Ringhofer resolved IMPALA-8951.
-
Resolution: Duplicate

> Support ALTER TABLE for ACID tables
> ---
>
> Key: IMPALA-8951
> URL: https://issues.apache.org/jira/browse/IMPALA-8951
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Frontend
>Affects Versions: Impala 3.3.0
>Reporter: Csaba Ringhofer
>Priority: Critical
>  Labels: impala-acid
>
> ALTER TABLE commands are not allowed for ACID tables at the moment. I think 
> that nothing new has to be done for most of the commands, while some (e.g. 
> DROP PARTITION) may need to take an exclusive lock.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org




[jira] [Updated] (IMPALA-8831) Modify ALTER TABLE to handle insert-only transactional tables

2019-09-20 Thread Gabor Kaszab (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Kaszab updated IMPALA-8831:
-
Labels: ACID impala-acid  (was: ACID)

> Modify ALTER TABLE to handle insert-only transactional tables
> -
>
> Key: IMPALA-8831
> URL: https://issues.apache.org/jira/browse/IMPALA-8831
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 3.3.0
>Reporter: Gabor Kaszab
>Priority: Critical
>  Labels: ACID, impala-acid
>
> I haven't investigated this topic much, so the first part of this 
> Jira is to research which ALTER TABLE commands are affected.
> Once we have a list of those, the second part is to modify them to handle 
> transactional tables properly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8951) Support ALTER TABLE for ACID tables

2019-09-20 Thread Gabor Kaszab (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934363#comment-16934363
 ] 

Gabor Kaszab commented on IMPALA-8951:
--

This is a duplicate of https://issues.apache.org/jira/browse/IMPALA-8831

> Support ALTER TABLE for ACID tables
> ---
>
> Key: IMPALA-8951
> URL: https://issues.apache.org/jira/browse/IMPALA-8951
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Frontend
>Affects Versions: Impala 3.3.0
>Reporter: Csaba Ringhofer
>Priority: Critical
>  Labels: impala-acid
>
> ALTER TABLE commands are not allowed for ACID tables at the moment. I think 
> that nothing new has to be done for most of the commands, while some (e.g. 
> DROP PARTITION) may need to take an exclusive lock.






[jira] [Created] (IMPALA-8961) test_lineage_output fails in S3 builds due to connection loss towards HBase

2019-09-20 Thread Jira
Zoltán Borók-Nagy created IMPALA-8961:
-

 Summary: test_lineage_output fails in S3 builds due to connection 
loss towards HBase
 Key: IMPALA-8961
 URL: https://issues.apache.org/jira/browse/IMPALA-8961
 Project: IMPALA
  Issue Type: Bug
Reporter: Zoltán Borók-Nagy


Error details
{noformat}
ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION:   MESSAGE: RuntimeException: couldn't 
retrieve HBase table (functional_hbase.alltypes) info: callTimeout=120, 
callDuration=1228255: 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /hbase row 'functional_hbase.alltypes' on table 'hbase:meta' 
at null CAUSED BY: SocketTimeoutException: callTimeout=120, 
callDuration=1228255: 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /hbase row 'functional_hbase.alltypes' on table 'hbase:meta' 
at null CAUSED BY: IOException: 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /hbase CAUSED BY: ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /hbase{noformat}
Stack trace
{noformat}
custom_cluster/test_lineage.py:114: in test_lineage_output
self.run_test_case('QueryTest/lineage', vector)
common/impala_test_suite.py:582: in run_test_case
result = exec_fn(query, user=test_section.get('USER', '').strip() or None)
common/impala_test_suite.py:517: in __exec_in_impala
result = self.__execute_query(target_impalad_client, query, user=user)
common/impala_test_suite.py:853: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:205: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:187: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:362: in __execute_query
handle = self.execute_query_async(query_string, user=user)
beeswax/impala_beeswax.py:356: in execute_query_async
handle = self.__do_rpc(lambda: self.imp_service.query(query,))
beeswax/impala_beeswax.py:519: in __do_rpc
raise ImpalaBeeswaxException(self.__build_error_message(b), b)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
EINNER EXCEPTION: 
EMESSAGE: RuntimeException: couldn't retrieve HBase table 
(functional_hbase.alltypes) info:
E   callTimeout=120, callDuration=1228255: 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /hbase row 'functional_hbase.alltypes' on table 'hbase:meta' 
at null
E   CAUSED BY: SocketTimeoutException: callTimeout=120, 
callDuration=1228255: 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /hbase row 'functional_hbase.alltypes' on table 'hbase:meta' 
at null
E   CAUSED BY: IOException: 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /hbase
E   CAUSED BY: ConnectionLossException: KeeperErrorCode = ConnectionLoss for 
/hbase{noformat}
 





[jira] [Created] (IMPALA-8960) test_drop_if_exists fails on S3 due to incomplete URI

2019-09-20 Thread Jira
Zoltán Borók-Nagy created IMPALA-8960:
-

 Summary: test_drop_if_exists fails on S3 due to incomplete URI
 Key: IMPALA-8960
 URL: https://issues.apache.org/jira/browse/IMPALA-8960
 Project: IMPALA
  Issue Type: Bug
Reporter: Zoltán Borók-Nagy


Error Message
{noformat}
ImpalaBeeswaxException: ImpalaBeeswaxException: INNER EXCEPTION:  MESSAGE: AnalysisException: Incomplete HDFS 
URI, no host: hdfs:///test-warehouse/libTestUdfs.so CAUSED BY: IOException: 
Incomplete HDFS URI, no host: hdfs:///test-warehouse/libTestUdfs.so{noformat}
Stacktrace
{noformat}
Error Message
ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION:   MESSAGE: AnalysisException: Incomplete 
HDFS URI, no host: hdfs:///test-warehouse/libTestUdfs.so CAUSED BY: 
IOException: Incomplete HDFS URI, no host: 
hdfs:///test-warehouse/libTestUdfs.so
Stacktrace
authorization/test_owner_privileges.py:137: in test_drop_if_exists
self._setup_drop_if_exist_test(unique_database, test_db)
authorization/test_owner_privileges.py:172: in _setup_drop_if_exist_test
self.execute_query("grant all on uri 
'hdfs:///test-warehouse/libTestUdfs.so' to"
common/impala_test_suite.py:751: in wrapper
return function(*args, **kwargs)
common/impala_test_suite.py:782: in execute_query
return self.__execute_query(self.client, query, query_options)
common/impala_test_suite.py:853: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:205: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:187: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:362: in __execute_query
handle = self.execute_query_async(query_string, user=user)
beeswax/impala_beeswax.py:356: in execute_query_async
handle = self.__do_rpc(lambda: self.imp_service.query(query,))
beeswax/impala_beeswax.py:519: in __do_rpc
raise ImpalaBeeswaxException(self.__build_error_message(b), b)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
EINNER EXCEPTION: 
EMESSAGE: AnalysisException: Incomplete HDFS URI, no host: 
hdfs:///test-warehouse/libTestUdfs.so
E   CAUSED BY: IOException: Incomplete HDFS URI, no host: 
hdfs:///test-warehouse/libTestUdfs.so{noformat}
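The root cause is visible in the URI itself: `hdfs:///test-warehouse/libTestUdfs.so` has an empty authority (host) component, which the HDFS client rejects. A small sketch with Python's standard `urllib.parse` illustrates the difference (the `namenode:8020` host is a made-up example, not the cluster's real address):

```python
from urllib.parse import urlparse

# A triple-slash HDFS URI has an empty netloc (no host), which triggers
# "Incomplete HDFS URI, no host" in the Hadoop client.
bad = urlparse("hdfs:///test-warehouse/libTestUdfs.so")
good = urlparse("hdfs://namenode:8020/test-warehouse/libTestUdfs.so")

assert bad.scheme == "hdfs"
assert bad.netloc == ""            # no host -> rejected by the client
assert good.netloc == "namenode:8020"
```

On S3 builds there is no HDFS default filesystem to fall back on, so the host-less URI in the test fails where it might pass on an HDFS minicluster.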








[jira] [Created] (IMPALA-8959) test_union failed with wrong results on S3

2019-09-20 Thread Jira
Zoltán Borók-Nagy created IMPALA-8959:
-

 Summary: test_union failed with wrong results on S3
 Key: IMPALA-8959
 URL: https://issues.apache.org/jira/browse/IMPALA-8959
 Project: IMPALA
  Issue Type: Bug
Reporter: Zoltán Borók-Nagy


Error details
{noformat}
query_test/test_queries.py:77: in test_union 
self.run_test_case('QueryTest/union', vector) common/impala_test_suite.py:611: 
in run_test_case self.__verify_results_and_errors(vector, test_section, 
result, use_db) common/impala_test_suite.py:448: in __verify_results_and_errors 
replace_filenames_with_placeholder) common/test_result_verifier.py:456: in 
verify_raw_results VERIFIER_MAP[verifier](expected, actual) 
common/test_result_verifier.py:278: in verify_query_result_is_equal assert 
expected_results == actual_results E   assert Comparing QueryTestResults 
(expected vs actual): E 0,true,0,0,0,0,0,0,'01/01/09','0',2009-01-01 
00:00:00,2009,1 != None E 0,true,0,0,0,0,0,0,'01/01/09','0',2009-01-01 
00:00:00,2009,1 != None E 
1,false,1,1,1,10,1.10023841858,10.1,'01/01/09','1',2009-01-01 
00:01:00,2009,1 != None E 
1,false,1,1,1,10,1.10023841858,10.1,'01/01/09','1',2009-01-01 
00:01:00,2009,1 != None E 2,true,0,0,0,0,0,0,'02/01/09','0',2009-02-01 
00:00:00,2009,2 != None E 2,true,0,0,0,0,0,0,'02/01/09','0',2009-02-01 
00:00:00,2009,2 != None E 
3,false,1,1,1,10,1.10023841858,10.1,'02/01/09','1',2009-02-01 
00:01:00,2009,2 != None E 
3,false,1,1,1,10,1.10023841858,10.1,'02/01/09','1',2009-02-01 
00:01:00,2009,2 != None E 4,true,0,0,0,0,0,0,'03/01/09','0',2009-03-01 
00:00:00,2009,3 != None E 
5,false,1,1,1,10,1.10023841858,10.1,'03/01/09','1',2009-03-01 
00:01:00,2009,3 != None E Number of rows returned (expected vs actual): 10 
!= 0{noformat}
Stack trace
{noformat}
query_test/test_queries.py:77: in test_union
self.run_test_case('QueryTest/union', vector)
common/impala_test_suite.py:611: in run_test_case
self.__verify_results_and_errors(vector, test_section, result, use_db)
common/impala_test_suite.py:448: in __verify_results_and_errors
replace_filenames_with_placeholder)
common/test_result_verifier.py:456: in verify_raw_results
VERIFIER_MAP[verifier](expected, actual)
common/test_result_verifier.py:278: in verify_query_result_is_equal
assert expected_results == actual_results
E   assert Comparing QueryTestResults (expected vs actual):
E 0,true,0,0,0,0,0,0,'01/01/09','0',2009-01-01 00:00:00,2009,1 != None
E 0,true,0,0,0,0,0,0,'01/01/09','0',2009-01-01 00:00:00,2009,1 != None
E 1,false,1,1,1,10,1.10023841858,10.1,'01/01/09','1',2009-01-01 
00:01:00,2009,1 != None
E 1,false,1,1,1,10,1.10023841858,10.1,'01/01/09','1',2009-01-01 
00:01:00,2009,1 != None
E 2,true,0,0,0,0,0,0,'02/01/09','0',2009-02-01 00:00:00,2009,2 != None
E 2,true,0,0,0,0,0,0,'02/01/09','0',2009-02-01 00:00:00,2009,2 != None
E 3,false,1,1,1,10,1.10023841858,10.1,'02/01/09','1',2009-02-01 
00:01:00,2009,2 != None
E 3,false,1,1,1,10,1.10023841858,10.1,'02/01/09','1',2009-02-01 
00:01:00,2009,2 != None
E 4,true,0,0,0,0,0,0,'03/01/09','0',2009-03-01 00:00:00,2009,3 != None
E 5,false,1,1,1,10,1.10023841858,10.1,'03/01/09','1',2009-03-01 
00:01:00,2009,3 != None
E Number of rows returned (expected vs actual): 10 != 0{noformat}








[jira] [Created] (IMPALA-8958) OutOfMemoryError : Compressed class space

2019-09-20 Thread hqbhoho (Jira)
hqbhoho created IMPALA-8958:
---

 Summary: OutOfMemoryError : Compressed class space
 Key: IMPALA-8958
 URL: https://issues.apache.org/jira/browse/IMPALA-8958
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 2.11.0
Reporter: hqbhoho
 Attachments: f39df03636aab574edf387906cfd5f8.png

When I use Impala with a Hive UDF:

impala-shell -k -i worker01: I can use the UDF.

impala-shell -k -i worker02: I can't use the UDF, and "OutOfMemoryError: Compressed class space" occurs.

After I restart the impalad on worker02, it works again.

But after a while, the UDF fails on worker02 again.
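Compressed class space is a fixed-size region of JVM metaspace; repeatedly loading UDF classes without unloading them can exhaust it, matching the "works after restart, fails again later" pattern. As a workaround sketch only (the value is illustrative, not a verified Impala recommendation), the region can be enlarged for the embedded JVM inside impalad via a standard HotSpot flag:

```shell
# Illustrative only: raise the compressed class space limit above its
# 1 GB default. JAVA_TOOL_OPTIONS is honored by any HotSpot JVM at startup.
export JAVA_TOOL_OPTIONS="-XX:CompressedClassSpaceSize=2g"
```

This delays rather than fixes a class-loader leak; the underlying leak in UDF class loading would still need diagnosis.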





