[jira] [Assigned] (IMPALA-9095) Alter table events generated by renames are not renaming the table to a different DB.

2019-10-25 Thread Anurag Mantripragada (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Mantripragada reassigned IMPALA-9095:


Assignee: Anurag Mantripragada

> Alter table events generated by renames are not renaming the table to a 
> different DB.
> -
>
> Key: IMPALA-9095
> URL: https://issues.apache.org/jira/browse/IMPALA-9095
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Anurag Mantripragada
>Assignee: Anurag Mantripragada
>Priority: Critical
>
> Alter table renames were recently refactored. This introduced a bug where a 
> rename to a different database is not applied correctly.
> Steps to reproduce:
> From Hive:
> {code:java}
> create database bug1;
> create table bug1.foo (id int);
> create database bug2;
> alter table bug1.foo rename to bug2.foo;{code}
>  
> From Impala:
> {code:java}
> use bug2;
> show tables;{code}
>  
> foo is expected to show up in bug2, but it doesn't.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-9097) Some backend tests fail if the Hive Metastore is not running

2019-10-25 Thread Joe McDonnell (Jira)
Joe McDonnell created IMPALA-9097:
-

 Summary: Some backend tests fail if the Hive Metastore is not 
running
 Key: IMPALA-9097
 URL: https://issues.apache.org/jira/browse/IMPALA-9097
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Affects Versions: Impala 3.4.0
Reporter: Joe McDonnell


In our docker-based tests (i.e. docker/test_with_docker.py), we run the backend 
tests without starting a minicluster. This is now failing due to a new 
dependency on the Hive Metastore. This applies to a bunch of tests, which all 
fail with this error:
{noformat}
F0917 00:37:47.849447  7660 frontend.cc:134] IllegalStateException: 
java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
CAUSED BY: RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
CAUSED BY: InvocationTargetException: null
CAUSED BY: MetaException: Could not connect to meta store using any of the URIs 
provided. Most recent failure: org.apache.thrift.transport.TTransportException: 
java.net.ConnectException: Connection refused (Connection refused)
 at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
 at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:545)
 at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:303)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773)
 at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
 at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
 at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
 at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94)
 at org.apache.impala.catalog.MetaStoreClientPool$MetaStoreClient.<init>(MetaStoreClientPool.java:99)
 at org.apache.impala.catalog.MetaStoreClientPool$MetaStoreClient.<init>(MetaStoreClientPool.java:78)
 at org.apache.impala.catalog.MetaStoreClientPool.initClients(MetaStoreClientPool.java:174)
 at org.apache.impala.catalog.MetaStoreClientPool.<init>(MetaStoreClientPool.java:163)
 at org.apache.impala.catalog.MetaStoreClientPool.<init>(MetaStoreClientPool.java:155)
 at org.apache.impala.service.Frontend.<init>(Frontend.java:301)
 at org.apache.impala.service.Frontend.<init>(Frontend.java:270)
 at org.apache.impala.service.JniFrontend.<init>(JniFrontend.java:141){noformat}
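Since these tests abort when no metastore is reachable, one mitigation is a fast connectivity pre-check so they can fail (or be skipped) with a clear message instead of a JNI abort. A minimal sketch, assuming the Hive Metastore's default Thrift port 9083; the helper name is hypothetical, not part of the Impala test harness:

```python
import socket

def hms_reachable(host="localhost", port=9083, timeout_s=1.0):
    """Fast TCP probe for the Hive Metastore's Thrift endpoint."""
    # 9083 is the metastore's default Thrift port; an unreachable or
    # unresolvable host is reported as False rather than raising.
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False
```

A test runner could call this once up front and skip the metastore-dependent binaries when it returns False.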
The tests that currently fail in this configuration are:

 
{noformat}
2019-10-25 14:21:36.059346 The following tests FAILED:
2019-10-25 14:21:36.059378    3 - llvm-codegen-test (Child aborted)
2019-10-25 14:21:36.059404    8 - hash-table-test (Failed)
2019-10-25 14:21:36.059436   11 - row-batch-list-test (Child aborted)
2019-10-25 14:21:36.059468   19 - hdfs-parquet-scanner-test (Failed)
2019-10-25 14:21:36.059491   20 - expr-test (Failed)
2019-10-25 14:21:36.059521   21 - expr-codegen-test (Child aborted)
2019-10-25 14:21:36.059551   28 - data-stream-test (Child aborted)
2019-10-25 14:21:36.059586   39 - buffered-tuple-stream-test (Child aborted)
2019-10-25 14:21:36.059613   41 - tmp-file-mgr-test (Failed)
2019-10-25 14:21:36.059644   42 - row-batch-serialize-test (Failed)
2019-10-25 14:21:36.059673   43 - row-batch-test (Child aborted)
2019-10-25 14:21:36.059710   44 - collection-value-builder-test (Child aborted)
2019-10-25 14:21:36.059741   45 - runtime-state-test (Child aborted)
2019-10-25 14:21:36.059774   46 - buffer-allocator-test (Child aborted)
2019-10-25 14:21:36.059804   47 - buffer-pool-test (Child aborted)
2019-10-25 14:21:36.059829   48 - free-list-test (Failed)
2019-10-25 14:21:36.059863   49 - reservation-tracker-test (Child aborted)
2019-10-25 14:21:36.059890   50 - suballocator-test (Failed)
2019-10-25 14:21:36.059917   51 - disk-io-mgr-test (Failed)
2019-10-25 14:21:36.059946   52 - data-cache-test (Child aborted)
2019-10-25 14:21:36.059977   53 - admission-controller-test (Failed)
2019-10-25 14:21:36.060015   59 - session-expiry-test (Child aborted)
2019-10-25 14:21:36.060043   67 - rpc-mgr-test (Child aborted)
2019-10-25 14:21:36.060076   68 - rpc-mgr-kerberized-test (Child aborted)
2019-10-25 

[jira] [Assigned] (IMPALA-9095) Alter table events generated by renames are not renaming the table to a different DB.

2019-10-25 Thread Anurag Mantripragada (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Mantripragada reassigned IMPALA-9095:


Assignee: (was: Anurag Mantripragada)

> Alter table events generated by renames are not renaming the table to a 
> different DB.
> -
>
> Key: IMPALA-9095
> URL: https://issues.apache.org/jira/browse/IMPALA-9095
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Anurag Mantripragada
>Priority: Critical
>
> Alter table renames were recently refactored. This introduced a bug where a 
> rename to a different database is not applied correctly.
> Steps to reproduce:
> From Hive:
> {code:java}
> create database bug1;
> create table bug1.foo (id int);
> create database bug2;
> alter table bug1.foo rename to bug2.foo;{code}
>  
> From Impala:
> {code:java}
> use bug2;
> show tables;{code}
>  
> foo is expected to show up in bug2, but it doesn't.






[jira] [Commented] (IMPALA-9071) When metastore.warehouse.dir != metastore.warehouse.external.dir, Impala writes to the wrong location for external tables

2019-10-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960201#comment-16960201
 ] 

ASF subversion and git services commented on IMPALA-9071:
-

Commit 0f70ade0d78a9eb3afafdfd1a3e36cc8a5563cb4 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=0f70ade ]

IMPALA-9071: Fix wrong table path of transaction table created by CTAS

The previous patch for IMPALA-9071 assumed that all tables created by a
CTAS statement are non-transactional. This is wrong, since a CTAS
statement can also specify tblproperties and can therefore create a
transactional table.

This patch fixes the hard-coded external check. Instead, we check
whether the table is transactional; if not, the table will be translated
to an external table by the HMS.

Tests:
 - Add coverage for creating transactional tables by CTAS.

Change-Id: I4b585216e33e4f7962b19ae2351165288691eaf2
Reviewed-on: http://gerrit.cloudera.org:8080/14546
Reviewed-by: Joe McDonnell 
Reviewed-by: Zoltan Borok-Nagy 
Tested-by: Impala Public Jenkins 
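The decision the commit describes, keying on whether the table is transactional rather than on a hard-coded external flag, can be sketched as follows. This is a simplified Python model of the logic, not Impala's actual Java frontend code, and the function names are illustrative:

```python
def is_transactional(tbl_properties):
    # HMS marks ACID tables with 'transactional'='true' in tblproperties;
    # a CTAS statement can set this, so it must be consulted.
    return tbl_properties.get("transactional", "").lower() == "true"

def will_hms_translate_to_external(tbl_properties):
    # Per the fix: only non-transactional managed tables get translated
    # to external tables (and hence use the external warehouse dir).
    return not is_transactional(tbl_properties)
```

With this, the table path is derived from the transactional check instead of assuming every CTAS result is external.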


> When metastore.warehouse.dir != metastore.warehouse.external.dir, Impala 
> writes to the wrong location for external tables
> -
>
> Key: IMPALA-9071
> URL: https://issues.apache.org/jira/browse/IMPALA-9071
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 3.4.0
>Reporter: Joe McDonnell
>Assignee: Quanlong Huang
>Priority: Blocker
>  Labels: broken-build
>
> Hive introduced a translation layer that can convert a normal table to an 
> external table. When doing so without a specified location, the translated 
> external table uses metastore.warehouse.external.dir as the location rather 
> than metastore.warehouse.dir. Impala does not know about this distinction, so 
> it writes to the location it thinks the table should be (under 
> metastore.warehouse.dir). This means I can do the following:
> {noformat}
> [localhost:21000] joetest> select count(*) from functional.alltypes;
> Query: select count(*) from functional.alltypes
> Query submitted at: 2019-10-19 13:08:24 (Coordinator: 
> http://joemcdonnell:25000)
> Query progress can be monitored at: 
> http://joemcdonnell:25000/query_plan?query_id=68434b05e2badd50:a18a2e30
> +--+
> | count(*) |
> +--+
> | 7300 |
> +--+
> Fetched 1 row(s) in 0.14s
> [localhost:21000] joetest> create table testtable as select * from 
> functional.alltypes;
> Query: create table testtable as select * from functional.alltypes
> Query submitted at: 2019-10-19 13:08:36 (Coordinator: 
> http://joemcdonnell:25000)
> Query progress can be monitored at: 
> http://joemcdonnell:25000/query_plan?query_id=794b92fb68f36ab0:910d0364
> +--+
> | summary  |
> +--+
> | Inserted 7300 row(s) |
> +--+
> Fetched 1 row(s) in 0.50s
> [localhost:21000] joetest> select count(*) from testtable;
> Query: select count(*) from testtable
> Query submitted at: 2019-10-19 13:08:43 (Coordinator: 
> http://joemcdonnell:25000)
> Query progress can be monitored at: 
> http://joemcdonnell:25000/query_plan?query_id=66423abf016e65af:83624609
> +--+
> | count(*) |
> +--+
> | 0|
> +--+
> Fetched 1 row(s) in 0.13s
> {noformat}
> We inserted 7300 rows, but we can't select them back because they were 
> written to the wrong location.






[jira] [Commented] (IMPALA-9022) test_query_profile_storage_load_time_filesystem is flaky

2019-10-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960199#comment-16960199
 ] 

ASF subversion and git services commented on IMPALA-9022:
-

Commit 8e08a2a889b7212f74045e40e3e175d689bb042d in impala's branch 
refs/heads/master from Yongzhi Chen
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=8e08a2a ]

IMPALA-9022: Fixed test_query_profile_storage_load_time_filesystem

Skip the part of the test that can be affected by the random behavior
of Catalog V2. The main purpose of the test is to verify that storage
load time appears in the query profile when metadata loading happens,
which is not affected by this change.

Change-Id: I6ee1afec6f2b706bc28b270aad731a138662490a
Reviewed-on: http://gerrit.cloudera.org:8080/14387
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
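The fix described above, skipping the nondeterministic check under Catalog V2, might look like this in test form. This is a sketch; the function and flag names are assumptions, not the actual test code:

```python
def check_storage_load_time_absent(runtime_profile, catalog_v2_enabled):
    # Under Catalog V2 (local catalog), CatalogFetch.StorageLoad.Time can
    # appear in the profile nondeterministically, so the absence check is
    # only meaningful against the V1 catalog.
    if catalog_v2_enabled:
        return "skipped"
    assert "StorageLoad.Time" not in runtime_profile
    return "checked"
```

The positive check (storage load time present when metadata loading happens) stays in place for both catalog modes.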


> test_query_profile_storage_load_time_filesystem is flaky
> 
>
> Key: IMPALA-9022
> URL: https://issues.apache.org/jira/browse/IMPALA-9022
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.4.0
>Reporter: Tim Armstrong
>Assignee: Yongzhi Chen
>Priority: Critical
>  Labels: broken-build, flaky
>
> This test failed a precommit run of an unrelated change for me - 
> https://jenkins.impala.io/job/ubuntu-16.04-dockerised-tests/1374/
> {noformat}
> query_test.test_observability.TestObservability.test_query_profile_storage_load_time_filesystem
>  (from pytest)
> Failing for the past 1 build (Since Failed#1374 )
> Took 19 sec.
> add description
> Error Message
> query_test/test_observability.py:682: in 
> test_query_profile_storage_load_time_filesystem cluster_properties) 
> query_test/test_observability.py:714: in 
> __check_query_profile_storage_load_time assert storageLoadTime not in 
> runtime_profile E   assert 'StorageLoad.Time' not in 'Query 
> (id=f74d7af...eTime: 43.999ms\n' E 'StorageLoad.Time' is contained here: 
> E   alogFetch.StorageLoad.Time: 0 E  - 
> CatalogFetch.TableNames.Hits: 2 E  - 
> CatalogFetch.TableNames.Requests: 2 E  - 
> CatalogFetch.TableNames.Time: 0 E  - CatalogFetch.Tables.Misses: 
> 1 E  - CatalogFetch.Tables.Requests: 1 E  - 
> CatalogFetch.Tables.Time: 0 E ImpalaServer: E Detailed 
> information truncated (569 more lines), use "-vv" to show
> Stacktrace
> query_test/test_observability.py:682: in 
> test_query_profile_storage_load_time_filesystem
> cluster_properties)
> query_test/test_observability.py:714: in 
> __check_query_profile_storage_load_time
> assert storageLoadTime not in runtime_profile
> E   assert 'StorageLoad.Time' not in 'Query (id=f74d7af...eTime: 43.999ms\n'
> E 'StorageLoad.Time' is contained here:
> E   alogFetch.StorageLoad.Time: 0
> E  - CatalogFetch.TableNames.Hits: 2
> E  - CatalogFetch.TableNames.Requests: 2
> E  - CatalogFetch.TableNames.Time: 0
> E  - CatalogFetch.Tables.Misses: 1
> E  - CatalogFetch.Tables.Requests: 1
> E  - CatalogFetch.Tables.Time: 0
> E ImpalaServer:
> E Detailed information truncated (569 more lines), use "-vv" to show
> Standard Error
> SET 
> client_identifier=query_test/test_observability.py::TestObservability::()::test_query_profile_storage_load_time_filesystem;
> SET sync_ddl=False;
> -- executing against localhost:21000
> DROP DATABASE IF EXISTS 
> `test_query_profile_storage_load_time_filesystem_dd99cc8f` CASCADE;
> -- 2019-10-08 01:48:34,019 INFO MainThread: Started query 
> 1441f1ad0a1eb1b4:8cb8cbb3
> SET 
> client_identifier=query_test/test_observability.py::TestObservability::()::test_query_profile_storage_load_time_filesystem;
> SET sync_ddl=False;
> -- executing against localhost:21000
> CREATE DATABASE `test_query_profile_storage_load_time_filesystem_dd99cc8f`;
> -- 2019-10-08 01:48:34,171 INFO MainThread: Started query 
> 7f438feb0213dffc:d2f40d7d
> -- 2019-10-08 01:48:34,177 INFO MainThread: Created database 
> "test_query_profile_storage_load_time_filesystem_dd99cc8f" for test ID 
> "query_test/test_observability.py::TestObservability::()::test_query_profile_storage_load_time_filesystem"
> -- executing against localhost:21000
> create table 
> test_query_profile_storage_load_time_filesystem_dd99cc8f.ld_prof(col1 int);
> -- 2019-10-08 01:48:34,663 INFO MainThread: Started query 
> 2943e6995d9f404d:1a375911
> -- executing against localhost:21000
> invalidate metadata 
> test_query_profile_storage_load_time_filesystem_dd99cc8f.ld_prof;
> -- 2019-10-08 01:48:34,700 INFO MainThread: Started query 
> fe480acd2fa0f4fe:31d01b6c
> -- executing against 

[jira] [Commented] (IMPALA-9065) Fix cancellation of RuntimeFilter::WaitForArrival()

2019-10-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960200#comment-16960200
 ] 

ASF subversion and git services commented on IMPALA-9065:
-

Commit 9100a98273aa840dc6781c446757b97db50c8b47 in impala's branch 
refs/heads/master from Tim Armstrong
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=9100a98 ]

IMPALA-9065: don't block indefinitely for filters

This patch ensures that query cancellation will
promptly wake up any threads blocked waiting for runtime
filters to arrive. Before this patch, threads would
wait for up to RUNTIME_FILTER_WAIT_TIME_MS after the
query was cancelled.

Testing:
* Add a cancellation test with a high runtime filter wait time
  that reproduces the threads getting stuck. This test
  failed reliably without the code changes.
* Also update metric verification to check that no fragments
  are left running when tests are finished.
* Ran exhaustive tests.
* Ran a 1 query TPC-H Kudu and TPC-DS Parquet stress test on
  a minicluster with 3 impalads.

Change-Id: I0a70e4451c2b48c97f854246e90b71f6e5d67710
Reviewed-on: http://gerrit.cloudera.org:8080/14499
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
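The idea in the patch, waking blocked waiters on cancellation instead of letting them sleep out RUNTIME_FILTER_WAIT_TIME_MS, can be sketched with a condition variable. This is a simplified Python model of the technique, not Impala's C++ implementation:

```python
import threading

class FilterWait:
    """Blocks until a runtime filter arrives, the wait times out, or the
    fragment instance is cancelled; cancellation wakes waiters promptly."""

    def __init__(self):
        self._cv = threading.Condition()
        self._arrived = False
        self._cancelled = False

    def wait_for_arrival(self, timeout_s):
        # Returns True only if the filter actually arrived. A cancelled
        # query returns False immediately instead of sleeping out timeout_s.
        with self._cv:
            self._cv.wait_for(
                lambda: self._arrived or self._cancelled, timeout=timeout_s)
            return self._arrived

    def signal_arrival(self):
        with self._cv:
            self._arrived = True
            self._cv.notify_all()

    def cancel(self):
        # Called on query cancellation: wake every blocked waiter.
        with self._cv:
            self._cancelled = True
            self._cv.notify_all()
```

The key point is that cancellation shares the same condition variable as arrival, so waiters never block past the cancel.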


> Fix cancellation of RuntimeFilter::WaitForArrival()
> ---
>
> Key: IMPALA-9065
> URL: https://issues.apache.org/jira/browse/IMPALA-9065
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Distributed Exec
>Affects Versions: Impala 3.3.0
>Reporter: Tim Armstrong
>Assignee: Tim Armstrong
>Priority: Major
>
> Proper cancellation wasn't ever implemented for this code path, so if the 
> wait time is set high, threads can get blocked indefinitely even if the 
> coordinator cancelled the query.
> I don't think it's hard to do the right thing -  signal the filter and wake 
> up the thread when the finstance is cancelled.






[jira] [Commented] (IMPALA-9071) When metastore.warehouse.dir != metastore.warehouse.external.dir, Impala writes to the wrong location for external tables

2019-10-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960202#comment-16960202
 ] 

ASF subversion and git services commented on IMPALA-9071:
-

Commit 0f70ade0d78a9eb3afafdfd1a3e36cc8a5563cb4 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=0f70ade ]

IMPALA-9071: Fix wrong table path of transaction table created by CTAS

The previous patch for IMPALA-9071 assumed that all tables created by a
CTAS statement are non-transactional. This is wrong, since a CTAS
statement can also specify tblproperties and can therefore create a
transactional table.

This patch fixes the hard-coded external check. Instead, we check
whether the table is transactional; if not, the table will be translated
to an external table by the HMS.

Tests:
 - Add coverage for creating transactional tables by CTAS.

Change-Id: I4b585216e33e4f7962b19ae2351165288691eaf2
Reviewed-on: http://gerrit.cloudera.org:8080/14546
Reviewed-by: Joe McDonnell 
Reviewed-by: Zoltan Borok-Nagy 
Tested-by: Impala Public Jenkins 


> When metastore.warehouse.dir != metastore.warehouse.external.dir, Impala 
> writes to the wrong location for external tables
> -
>
> Key: IMPALA-9071
> URL: https://issues.apache.org/jira/browse/IMPALA-9071
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 3.4.0
>Reporter: Joe McDonnell
>Assignee: Quanlong Huang
>Priority: Blocker
>  Labels: broken-build
>
> Hive introduced a translation layer that can convert a normal table to an 
> external table. When doing so without a specified location, the translated 
> external table uses metastore.warehouse.external.dir as the location rather 
> than metastore.warehouse.dir. Impala does not know about this distinction, so 
> it writes to the location it thinks the table should be (under 
> metastore.warehouse.dir). This means I can do the following:
> {noformat}
> [localhost:21000] joetest> select count(*) from functional.alltypes;
> Query: select count(*) from functional.alltypes
> Query submitted at: 2019-10-19 13:08:24 (Coordinator: 
> http://joemcdonnell:25000)
> Query progress can be monitored at: 
> http://joemcdonnell:25000/query_plan?query_id=68434b05e2badd50:a18a2e30
> +--+
> | count(*) |
> +--+
> | 7300 |
> +--+
> Fetched 1 row(s) in 0.14s
> [localhost:21000] joetest> create table testtable as select * from 
> functional.alltypes;
> Query: create table testtable as select * from functional.alltypes
> Query submitted at: 2019-10-19 13:08:36 (Coordinator: 
> http://joemcdonnell:25000)
> Query progress can be monitored at: 
> http://joemcdonnell:25000/query_plan?query_id=794b92fb68f36ab0:910d0364
> +--+
> | summary  |
> +--+
> | Inserted 7300 row(s) |
> +--+
> Fetched 1 row(s) in 0.50s
> [localhost:21000] joetest> select count(*) from testtable;
> Query: select count(*) from testtable
> Query submitted at: 2019-10-19 13:08:43 (Coordinator: 
> http://joemcdonnell:25000)
> Query progress can be monitored at: 
> http://joemcdonnell:25000/query_plan?query_id=66423abf016e65af:83624609
> +--+
> | count(*) |
> +--+
> | 0|
> +--+
> Fetched 1 row(s) in 0.13s
> {noformat}
> We inserted 7300 rows, but we can't select them back because they were 
> written to the wrong location.






[jira] [Created] (IMPALA-9096) Create external table ddls should send column lineages.

2019-10-25 Thread Anurag Mantripragada (Jira)
Anurag Mantripragada created IMPALA-9096:


 Summary: Create external table ddls should send column lineages.
 Key: IMPALA-9096
 URL: https://issues.apache.org/jira/browse/IMPALA-9096
 Project: IMPALA
  Issue Type: Improvement
Reporter: Anurag Mantripragada


Create external table DDLs with specified columns should create column lineages 
for tools like Atlas to consume.

 

For example:

create EXTERNAL TABLE IF NOT EXISTS friday_ext6
(STUD_ID int,
DEPT_ID int,
NAME string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

Currently we send a lineage like:
{code:java}
 {
 "queryText":"create EXTERNAL TABLE IF NOT EXISTS friday_ext5 (STUD_ID int, 
DEPT_ID int, NAME string ) ROW FORMAT DELIMITED FIELDS TERMINATED BY ‘,’ 
STORED AS TEXTFILE LOCATION 
‘/warehouse/tablespace/external/hive/testdb.db/friday_ext5’",
 "queryId":"4b471ac0ca2b0f93:029db79c",
 "hash":"867fae20bc6c8254c05774cc923a99fa",
 "user":"admin",
 "timestamp":1572028716,
 "endTime":1572028716,
 "edges":[],
 "vertices":[],
 
"tableLocation":"hdfs://sid-cdp-2-1.gce.cloudera.com:8020/warehouse/tablespace/external/hive/testdb.db/friday_ext"
}
 {code}
Atlas needs the fully qualified table name to create lineage.
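A sketch of what a consumable table vertex might look like once the fully qualified name is included. The field names here are illustrative, not Impala's actual lineage schema:

```python
def make_table_vertex(db_name, table_name):
    # Atlas matches lineage entities by fully qualified "db.table" names,
    # so the vertex carries that rather than a bare table name.
    return {"vertexType": "TABLE", "vertexId": f"{db_name}.{table_name}"}
```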






[jira] [Updated] (IMPALA-9095) Alter table events generated by renames are not renaming the table to a different DB.

2019-10-25 Thread Anurag Mantripragada (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Mantripragada updated IMPALA-9095:
-
Description: 
Alter table renames were recently refactored. This introduced a bug where a rename 
to a different database is not applied correctly.

Steps to reproduce:

From Hive:
{code:java}
create database bug1;

create table bug1.foo (id int);

create database bug2;

alter table bug1.foo rename to bug2.foo;{code}
 

From Impala:
{code:java}
use bug2;

show tables;{code}
 

foo is expected to show up in bug2, but it doesn't.

  was:
Alter table renames were recently refactored. This introduced a bug where a rename 
to a different database is not applied correctly.

 

 


> Alter table events generated by renames are not renaming the table to a 
> different DB.
> -
>
> Key: IMPALA-9095
> URL: https://issues.apache.org/jira/browse/IMPALA-9095
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Anurag Mantripragada
>Assignee: Anurag Mantripragada
>Priority: Critical
>
> Alter table renames were recently refactored. This introduced a bug where a 
> rename to a different database is not applied correctly.
> Steps to reproduce:
> From Hive:
> {code:java}
> create database bug1;
> create table bug1.foo (id int);
> create database bug2;
> alter table bug1.foo rename to bug2.foo;{code}
>  
> From Impala:
> {code:java}
> use bug2;
> show tables;{code}
>  
> foo is expected to show up in bug2, but it doesn't.






[jira] [Updated] (IMPALA-9058) S3 tests failing with FileNotFoundException getVersionMarkerItem on ../VERSION

2019-10-25 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated IMPALA-9058:
---
Issue Type: Bug  (was: Test)

> S3 tests failing with FileNotFoundException getVersionMarkerItem on ../VERSION
> --
>
> Key: IMPALA-9058
> URL: https://issues.apache.org/jira/browse/IMPALA-9058
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Critical
>
> I've seen this happen several times now, S3 tests intermittently fail with an 
> error such as:
> {code:java}
> Query aborted:InternalException: Error adding partitions E   CAUSED BY: 
> MetaException: java.io.IOException: Got exception: 
> java.io.FileNotFoundException getVersionMarkerItem on ../VERSION: 
> com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested 
> resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ResourceNotFoundException; Request ID: 
> 8T9IS939MDI7ASOB0IJCC34J3NVV4KQNSO5AEMVJF66Q9ASUAAJG) {code}






[jira] [Updated] (IMPALA-9095) Alter table events generated by renames are not renaming the table to a different DB.

2019-10-25 Thread Anurag Mantripragada (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Mantripragada updated IMPALA-9095:
-
Priority: Blocker  (was: Critical)

> Alter table events generated by renames are not renaming the table to a 
> different DB.
> -
>
> Key: IMPALA-9095
> URL: https://issues.apache.org/jira/browse/IMPALA-9095
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Anurag Mantripragada
>Assignee: Anurag Mantripragada
>Priority: Blocker
>
> Alter table renames were recently refactored. This introduced a bug where a 
> rename to a different database is not applied correctly.
>  
>  






[jira] [Updated] (IMPALA-9095) Alter table events generated by renames are not renaming the table to a different DB.

2019-10-25 Thread Anurag Mantripragada (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Mantripragada updated IMPALA-9095:
-
Priority: Critical  (was: Blocker)

> Alter table events generated by renames are not renaming the table to a 
> different DB.
> -
>
> Key: IMPALA-9095
> URL: https://issues.apache.org/jira/browse/IMPALA-9095
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Anurag Mantripragada
>Assignee: Anurag Mantripragada
>Priority: Critical
>
> Alter table renames were recently refactored. This introduced a bug where a 
> rename to a different database is not applied correctly.
>  
>  






[jira] [Created] (IMPALA-9095) Alter table events generated by renames are not renaming the table to a different DB.

2019-10-25 Thread Anurag Mantripragada (Jira)
Anurag Mantripragada created IMPALA-9095:


 Summary: Alter table events generated by renames are not renaming 
the table to a different DB.
 Key: IMPALA-9095
 URL: https://issues.apache.org/jira/browse/IMPALA-9095
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Reporter: Anurag Mantripragada
Assignee: Anurag Mantripragada


Alter table renames were recently refactored. This introduced a bug where a rename 
to a different database is not applied correctly.

 

 






[jira] [Commented] (IMPALA-9063) Allow building Impala against LLVM with -DLLVM_ENABLE_TERMINFO=ON

2019-10-25 Thread Tim Armstrong (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959988#comment-16959988
 ] 

Tim Armstrong commented on IMPALA-9063:
---

We don't have the source of that snapshot version published, but there is 
source from CDH releases here: https://github.com/cloudera/hive/tree/cdh6.3.1

> Allow building Impala against LLVM with -DLLVM_ENABLE_TERMINFO=ON
> -
>
> Key: IMPALA-9063
> URL: https://issues.apache.org/jira/browse/IMPALA-9063
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Donghui Xu
>Priority: Minor
>
> I failed to link impalad because of LLVM. I had compiled and installed 
> llvm-5.0.1.
> The error message is as follows:
> usr/local/lib/libLLVMSupport.a(Process.cpp.o):Process.cpp:function 
> llvm::sys::Process::FileDescriptorHasColors(int): error: undefined reference 
> to 'setupterm'
> /usr/local/lib/libLLVMSupport.a(Process.cpp.o):Process.cpp:function 
> llvm::sys::Process::FileDescriptorHasColors(int): error: undefined reference 
> to 'tigetnum'
> /usr/local/lib/libLLVMSupport.a(Process.cpp.o):Process.cpp:function 
> llvm::sys::Process::FileDescriptorHasColors(int): error: undefined reference 
> to 'set_curterm'
> /usr/local/lib/libLLVMSupport.a(Process.cpp.o):Process.cpp:function 
> llvm::sys::Process::FileDescriptorHasColors(int): error: undefined reference 
> to 'del_curterm'
> /media/B/impala/apache/toolchain/openldap-2.4.47/lib/libldap.a(os-ip.o):os-ip.c:function
>  ldap_int_poll: warning: `sys_nerr' is deprecated; use `strerror' or 
> `strerror_r' instead
> /media/B/impala/apache/toolchain/openldap-2.4.47/lib/libldap.a(os-ip.o):os-ip.c:function
>  ldap_int_poll: warning: `sys_errlist' is deprecated; use `strerror' or 
> `strerror_r' instead
> collect2: error: ld returned 1 exit status
> be/src/service/CMakeFiles/impalad.dir/build.make:208: recipe for target 
> 'be/build/release/service/impalad' failed
> make[3]: *** [be/build/release/service/impalad] Error 1
> CMakeFiles/Makefile2:7075: recipe for target 
> 'be/src/service/CMakeFiles/impalad.dir/all' failed
> make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
> CMakeFiles/Makefile2:7087: recipe for target 
> 'be/src/service/CMakeFiles/impalad.dir/rule' failed
> make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
> Makefile:: recipe for target 'impalad' failed
> make: *** [impalad] Error 2



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-9094) Update test_hms_integration.py test_compute_stats_get_to_hive to account for separate Hive/Impala statistics

2019-10-25 Thread Joe McDonnell (Jira)
Joe McDonnell created IMPALA-9094:
-

 Summary: Update test_hms_integration.py 
test_compute_stats_get_to_hive to account for separate Hive/Impala statistics
 Key: IMPALA-9094
 URL: https://issues.apache.org/jira/browse/IMPALA-9094
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Affects Versions: Impala 3.4.0
Reporter: Joe McDonnell


With newer Hive versions, Impala and Hive stats are kept separately and won't 
overwrite each other. test_hms_integration.py test_compute_stats_get_to_hive 
expects that Hive stats change when Impala does compute stats. 
test_compute_stats_get_to_impala expects that Impala stats change when Hive 
does compute stats. These tests need to be revised. Here are the example test 
failures:
{noformat}
metadata/test_hms_integration.py:486: in test_compute_stats_get_to_hive
assert hive_stats != self.hive_column_stats(table_name, 'x')
E   assert {'# col_name': 'data_type', 'col_name': 'data_type', 'x': 'int'} != 
{'# col_name': 'data_type', 'col_name': 'data_type', 'x': 'int'}
E+  where {'# col_name': 'data_type', 'col_name': 'data_type', 'x': 'int'} 
= >('zbberubbydyldirc.fkqzvzekyqsjnflk', 'x')
E+where > = 
.hive_column_stats{noformat}
If my theory is right, we should flip the test to make sure that Impala compute 
stats doesn't impact Hive and vice versa.
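The flipped check could look roughly like this (a minimal sketch with hypothetical before/after snapshots; the real test would obtain them via its existing hive_column_stats helper and the Impala equivalent):

```python
def stats_kept_separate(hive_before, hive_after, impala_before, impala_after):
    """With separate Hive/Impala statistics, computing stats in one engine
    should leave the other engine's stats untouched."""
    return hive_after == hive_before and impala_after != impala_before

# Hypothetical snapshots: Impala ran COMPUTE STATS; Hive stats are unchanged.
hive_before = {"x": {"num_nulls": None}}
hive_after = {"x": {"num_nulls": None}}
impala_before = {"x": {"num_distinct": -1}}
impala_after = {"x": {"num_distinct": 7}}

print(stats_kept_separate(hive_before, hive_after, impala_before, impala_after))
# True
```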



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-9093) Fix ACID upgrade tests to account for HIVE-22158 (table translation)

2019-10-25 Thread Joe McDonnell (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959972#comment-16959972
 ] 

Joe McDonnell commented on IMPALA-9093:
---

Test failures that would hit this are:
{noformat}
query_test/test_acid.py test_acid_basic
query_test/test_acid.py test_acid_compaction
query_test/test_acid.py test_acid_partitioned{noformat}

> Fix ACID upgrade tests to account for HIVE-22158 (table translation)
> 
>
> Key: IMPALA-9093
> URL: https://issues.apache.org/jira/browse/IMPALA-9093
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 3.4.0
>Reporter: Joe McDonnell
>Priority: Blocker
>
> There are tests that create a normal managed table and upgrade that table to 
> a transactional table. For example, in test_acid.py, we run acid.test, which 
> has:
> {noformat}
> create table upgraded_table (x int);
> insert into upgraded_table values (1);
> # Upgrade the table to insert-only acid when there are already values in 
> it.
> alter table upgraded_table set tblproperties
>  ('transactional' = 'true', 'transactional_properties' = 'insert_only');
> insert into upgraded_table values (2);
> insert into upgraded_table values (3);{noformat}
> With HIVE-22158, the create table is now translated to an external table, and 
> this now fails with:
> {noformat}
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> EINNER EXCEPTION: 
> EMESSAGE: ImpalaRuntimeException: Error making 'alter_table' RPC to Hive 
> Metastore: 
> E   CAUSED BY: MetaException: test_acid_basic_5d04240b.upgraded_table cannot 
> be declared transactional because it's an external table{noformat}
> If external tables can't be upgraded and all managed tables are now external, 
> then this test case is invalid and can be removed. We should make sure that 
> this is how it is supposed to work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-9093) Fix ACID upgrade tests to account for HIVE-22158 (table translation)

2019-10-25 Thread Joe McDonnell (Jira)
Joe McDonnell created IMPALA-9093:
-

 Summary: Fix ACID upgrade tests to account for HIVE-22158 (table 
translation)
 Key: IMPALA-9093
 URL: https://issues.apache.org/jira/browse/IMPALA-9093
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Affects Versions: Impala 3.4.0
Reporter: Joe McDonnell


There are tests that create a normal managed table and upgrade that table to a 
transactional table. For example, in test_acid.py, we run acid.test, which has:
{noformat}
create table upgraded_table (x int);
insert into upgraded_table values (1);
# Upgrade the table to insert-only acid when there are already values in it.
alter table upgraded_table set tblproperties
 ('transactional' = 'true', 'transactional_properties' = 'insert_only');
insert into upgraded_table values (2);
insert into upgraded_table values (3);{noformat}
With HIVE-22158, the create table is now translated to an external table, and 
this now fails with:
{noformat}
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
EINNER EXCEPTION: 
EMESSAGE: ImpalaRuntimeException: Error making 'alter_table' RPC to Hive 
Metastore: 
E   CAUSED BY: MetaException: test_acid_basic_5d04240b.upgraded_table cannot be 
declared transactional because it's an external table{noformat}
If external tables can't be upgraded and all managed tables are now external, 
then this test case is invalid and can be removed. We should make sure that 
this is how it is supposed to work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-9092) Fix "show create table" tests on USE_CDP_HIVE=true to account for HIVE-22158

2019-10-25 Thread Joe McDonnell (Jira)
Joe McDonnell created IMPALA-9092:
-

 Summary: Fix "show create table" tests on USE_CDP_HIVE=true to 
account for HIVE-22158
 Key: IMPALA-9092
 URL: https://issues.apache.org/jira/browse/IMPALA-9092
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Affects Versions: Impala 3.4.0
Reporter: Joe McDonnell


Hive changed behavior with HIVE-22158 so that only transactional tables are 
considered managed and all others are considered external. This means that a 
regular "create table" now results in an external table with the table 
properties 'TRANSLATED_TO_EXTERNAL'='TRUE' and 'external.table.purge'='TRUE'. This breaks 
our tests that rely on "show create table", because the table is newly external 
and has extra table properties. For example:
{noformat}
query_test/test_kudu.py:842: in test_primary_key_and_distribution
db=cursor.conn.db_name, kudu_addr=KUDU_MASTER_HOSTS))
query_test/test_kudu.py:824: in assert_show_create_equals
assert cursor.fetchall()[0][0] == \
E   assert "CREATE EXTER...='localhost')" == "CREATE TABLE ...='localhost')"
E - CREATE EXTERNAL TABLE testshowcreatetable_15312_ggn1hk.nvbpxfuxze
E ?-
E + CREATE TABLE testshowcreatetable_15312_ggn1hk.nvbpxfuxze (
E ? ++
E +   c INT NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION,
E +   PRIMARY KEY (c)
E + )
E + PARTITION BY HASH (c) PARTITIONS 3
E   STORED AS KUDU
E - TBLPROPERTIES ('TRANSLATED_TO_EXTERNAL'='TRUE', 
'external.table.purge'='TRUE', 'kudu.master_addresses'='localhost')
E + TBLPROPERTIES ('kudu.master_addresses'='localhost'){noformat}
We need to decide on the right behavior for "show create table" and update the 
tests. 

For Kudu tables, tables with TRANSLATED_TO_EXTERNAL=true and 
external.table.purge=TRUE should be equivalent to a non-external Kudu table, 
and we can just detect this case and generate the same SQL as before.
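That detection could be sketched roughly as follows (a hypothetical normalization helper, not the actual Impala code; the real fix would live in Impala's SHOW CREATE TABLE path or in the test harness):

```python
def normalize_show_create(sql):
    """Treat a translated-to-external Kudu table as equivalent to the managed
    form: drop the EXTERNAL keyword and the translation-related properties."""
    if "'TRANSLATED_TO_EXTERNAL'='TRUE'" not in sql:
        return sql
    sql = sql.replace("CREATE EXTERNAL TABLE", "CREATE TABLE", 1)
    for prop in ("'TRANSLATED_TO_EXTERNAL'='TRUE', ",
                 "'external.table.purge'='TRUE', "):
        sql = sql.replace(prop, "")
    return sql

before = ("CREATE EXTERNAL TABLE t STORED AS KUDU "
          "TBLPROPERTIES ('TRANSLATED_TO_EXTERNAL'='TRUE', "
          "'external.table.purge'='TRUE', 'kudu.master_addresses'='localhost')")
print(normalize_show_create(before))
# CREATE TABLE t STORED AS KUDU TBLPROPERTIES ('kudu.master_addresses'='localhost')
```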

Other cases may need new logic. It also makes sense to address, in this JIRA, 
other tests that fail because of the MANAGED vs EXTERNAL distinction or extra 
table properties. Here is a list of tests that seem to have this problem:
{noformat}
metadata/test_ddl.py TestDdlStatements.test_create_alter_tbl_properties
metadata/test_show_create_table.py *
query_test/test_kudu.py TestShowCreateTable*
org.apache.impala.catalog.CatalogTest.testCreateTableMetadata
org.apache.impala.catalog.local.LocalCatalogTest.testKuduTable{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8995) TestAdmissionController.test_statestore_outage seems flaky

2019-10-25 Thread Bikramjeet Vig (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikramjeet Vig resolved IMPALA-8995.

Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> TestAdmissionController.test_statestore_outage seems flaky
> --
>
> Key: IMPALA-8995
> URL: https://issues.apache.org/jira/browse/IMPALA-8995
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.4.0
>Reporter: Alice Fan
>Assignee: Bikramjeet Vig
>Priority: Blocker
>  Labels: broken-build, flaky
> Fix For: Impala 3.4.0
>
>
> It appears the test failed because an expected query admission result didn't 
> happen during the statestore outage test. 
> {code:java}
> Error Message
> AssertionError: Query (id=a644e4b355da6009:434111e5): DEBUG MODE 
> WARNING: Query profile created while running a DEBUG build of Impala. Use 
> RELEASE builds to measure query performance. Summary:   Session ID: 
> 3e4e788067a3a4a3:cf244f82bb27e895   Session Type: BEESWAX   Start 
> Time: 2019-10-01 01:12:50.075066000   End Time:Query Type: QUERY  
>  Query State: EXCEPTION   Query Status: Admission for query exceeded 
> timeout 6ms in pool default-pool. Queued reason: queue is not empty (size 
> 2); queued queries are executed first. Warning: admission control information 
> from statestore is stale: 1s258ms since last update was received.   
> Impala Version: impalad version 3.4.0-SNAPSHOT DEBUG (build 
> b1d0659fe69bc43508735de17d7a3b4626b7138a)   User: jenkins   Connected 
> User: jenkins   Delegated User:Network Address: 127.0.0.1:43703   
> Default Db: default   Sql Statement: select sleep(100)   
> Coordinator: 
> shared-centos64-ec2-m2-4xlarge-ondemand-0691.vpc.cloudera.com:22000   
> Query Options (set by configuration): 
> TIMEZONE=America/Los_Angeles,CLIENT_IDENTIFIER=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_statestore_outage
>Query Options (set by configuration and planner): 
> NUM_NODES=1,NUM_SCANNER_THREADS=1,RUNTIME_FILTER_MODE=0,MT_DOP=0,TIMEZONE=America/Los_Angeles,CLIENT_IDENTIFIER=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_statestore_outage
>Plan:   Max Per-Host Resource Reservation: 
> Memory=0B Threads=1   Per-Host Resource Estimates: Memory=10MB   Codegen 
> disabled by planner   Analyzed query: SELECT sleep(CAST(100 AS INT))  
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1   |  Per-Host 
> Resources: mem-estimate=0B mem-reservation=0B thread-reservation=1   
> PLAN-ROOT SINK   |  output exprs: sleep(100)   |  mem-estimate=0B 
> mem-reservation=0B thread-reservation=0   |   00:UNION  
> constant-operands=1  mem-estimate=0B mem-reservation=0B 
> thread-reservation=0  tuple-ids=0 row-size=1B cardinality=1  in 
> pipelines:   Estimated Per-Host Mem: 10485760   
> Request Pool: default-pool   Per Host Min Memory Reservation: 
> shared-centos64-ec2-m2-4xlarge-ondemand-0691.vpc.cloudera.com:22000(0)   
> Per Host Number of Fragment Instances: 
> shared-centos64-ec2-m2-4xlarge-ondemand-0691.vpc.cloudera.com:22000(1)   
> Admission result: Timed out (queued)   Initial admission queue reason: 
> waited 6 ms, reason: queue is not empty (size 2); queued queries are 
> executed first. Warning: admission control information from statestore is 
> stale: 1s258ms since last update was received.   Latest admission queue 
> reason: number of running queries 1 is at or over limit 1 (configured 
> statically) Warning: admission control information from statestore is stale: 
> 3s348ms since last update was received..   Query Compilation: 3.468ms 
>  - Metadata of all 0 tables cached: 615.388us (615.388us)  - 
> Analysis finished: 1.299ms (683.787us)  - Authorization finished 
> (noop): 1.377ms (78.537us)  - Value transfer graph computed: 1.723ms 
> (345.718us)  - Single node plan created: 1.825ms (101.771us)  
> - Distributed plan created: 1.889ms (64.414us)  - Planning finished: 
> 3.468ms (1.579ms)   Query Timeline: 1m  - Query submitted: 
> 0.000ns (0.000ns)  - Planning finished: 4.000ms (4.000ms)  - 
> Submit for admission: 5.000ms (1.000ms)  - Queued: 6.000ms (1.000ms)  
> - Completed admission: 1m (59s999ms)  - Rows available: 1m 
> (0.000ns)- ComputeScanRangeAssignmentTimer: 0.000ns   Frontend:   
>   ImpalaServer:- ClientFetchWaitTimer: 0.000ns- 
> NumRowsFetched: 0 (0)- NumRowsFetchedFromCache: 0 (0)- 
> 

[jira] [Commented] (IMPALA-9022) test_query_profile_storage_load_time_filesystem is flaky

2019-10-25 Thread Yongzhi Chen (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959943#comment-16959943
 ] 

Yongzhi Chen commented on IMPALA-9022:
--

Review of the test fix is here:
https://gerrit.cloudera.org/#/c/14387/
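For context, the failure below comes from a substring check: asserting that 'StorageLoad.Time' is absent from the profile also matches the unrelated counter 'CatalogFetch.StorageLoad.Time'. One plausible way to avoid such false matches (a sketch, not necessarily what the review above does) is to match whole counter names:

```python
import re

def profile_has_counter(profile, counter):
    """Match a whole counter name, so 'StorageLoad.Time' does not
    accidentally match 'CatalogFetch.StorageLoad.Time'."""
    pattern = r"(?<![\w.])" + re.escape(counter) + r"(?![\w.])"
    return re.search(pattern, profile) is not None

profile = "  - CatalogFetch.StorageLoad.Time: 0\n"
print("StorageLoad.Time" in profile)                     # True  (substring: false positive)
print(profile_has_counter(profile, "StorageLoad.Time"))  # False (whole name: correct)
```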

> test_query_profile_storage_load_time_filesystem is flaky
> 
>
> Key: IMPALA-9022
> URL: https://issues.apache.org/jira/browse/IMPALA-9022
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.4.0
>Reporter: Tim Armstrong
>Assignee: Yongzhi Chen
>Priority: Critical
>  Labels: broken-build, flaky
>
> This test failed a precommit run of an unrelated change for me - 
> https://jenkins.impala.io/job/ubuntu-16.04-dockerised-tests/1374/
> {noformat}
> query_test.test_observability.TestObservability.test_query_profile_storage_load_time_filesystem
>  (from pytest)
> Failing for the past 1 build (Since Failed#1374 )
> Took 19 sec.
> add description
> Error Message
> query_test/test_observability.py:682: in 
> test_query_profile_storage_load_time_filesystem cluster_properties) 
> query_test/test_observability.py:714: in 
> __check_query_profile_storage_load_time assert storageLoadTime not in 
> runtime_profile E   assert 'StorageLoad.Time' not in 'Query 
> (id=f74d7af...eTime: 43.999ms\n' E 'StorageLoad.Time' is contained here: 
> E   alogFetch.StorageLoad.Time: 0 E  - 
> CatalogFetch.TableNames.Hits: 2 E  - 
> CatalogFetch.TableNames.Requests: 2 E  - 
> CatalogFetch.TableNames.Time: 0 E  - CatalogFetch.Tables.Misses: 
> 1 E  - CatalogFetch.Tables.Requests: 1 E  - 
> CatalogFetch.Tables.Time: 0 E ImpalaServer: E Detailed 
> information truncated (569 more lines), use "-vv" to show
> Stacktrace
> query_test/test_observability.py:682: in 
> test_query_profile_storage_load_time_filesystem
> cluster_properties)
> query_test/test_observability.py:714: in 
> __check_query_profile_storage_load_time
> assert storageLoadTime not in runtime_profile
> E   assert 'StorageLoad.Time' not in 'Query (id=f74d7af...eTime: 43.999ms\n'
> E 'StorageLoad.Time' is contained here:
> E   alogFetch.StorageLoad.Time: 0
> E  - CatalogFetch.TableNames.Hits: 2
> E  - CatalogFetch.TableNames.Requests: 2
> E  - CatalogFetch.TableNames.Time: 0
> E  - CatalogFetch.Tables.Misses: 1
> E  - CatalogFetch.Tables.Requests: 1
> E  - CatalogFetch.Tables.Time: 0
> E ImpalaServer:
> E Detailed information truncated (569 more lines), use "-vv" to show
> Standard Error
> SET 
> client_identifier=query_test/test_observability.py::TestObservability::()::test_query_profile_storage_load_time_filesystem;
> SET sync_ddl=False;
> -- executing against localhost:21000
> DROP DATABASE IF EXISTS 
> `test_query_profile_storage_load_time_filesystem_dd99cc8f` CASCADE;
> -- 2019-10-08 01:48:34,019 INFO MainThread: Started query 
> 1441f1ad0a1eb1b4:8cb8cbb3
> SET 
> client_identifier=query_test/test_observability.py::TestObservability::()::test_query_profile_storage_load_time_filesystem;
> SET sync_ddl=False;
> -- executing against localhost:21000
> CREATE DATABASE `test_query_profile_storage_load_time_filesystem_dd99cc8f`;
> -- 2019-10-08 01:48:34,171 INFO MainThread: Started query 
> 7f438feb0213dffc:d2f40d7d
> -- 2019-10-08 01:48:34,177 INFO MainThread: Created database 
> "test_query_profile_storage_load_time_filesystem_dd99cc8f" for test ID 
> "query_test/test_observability.py::TestObservability::()::test_query_profile_storage_load_time_filesystem"
> -- executing against localhost:21000
> create table 
> test_query_profile_storage_load_time_filesystem_dd99cc8f.ld_prof(col1 int);
> -- 2019-10-08 01:48:34,663 INFO MainThread: Started query 
> 2943e6995d9f404d:1a375911
> -- executing against localhost:21000
> invalidate metadata 
> test_query_profile_storage_load_time_filesystem_dd99cc8f.ld_prof;
> -- 2019-10-08 01:48:34,700 INFO MainThread: Started query 
> fe480acd2fa0f4fe:31d01b6c
> -- executing against localhost:21000
> select count (*) from 
> test_query_profile_storage_load_time_filesystem_dd99cc8f.ld_prof;
> -- 2019-10-08 01:48:34,768 INFO MainThread: Started query 
> c9484b86d1f78f56:4007f060
> -- executing against localhost:21000
> select count (*) from 
> test_query_profile_storage_load_time_filesystem_dd99cc8f.ld_prof;
> -- 2019-10-08 01:48:47,686 INFO MainThread: Started query 
> f74d7af8de524d65:3d7135af
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: 

[jira] [Work stopped] (IMPALA-9081) testMtDopValidationWithHDFSNumRowsEstDisabled appears flaky

2019-10-25 Thread Tim Armstrong (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-9081 stopped by Tim Armstrong.
-
> testMtDopValidationWithHDFSNumRowsEstDisabled appears flaky
> ---
>
> Key: IMPALA-9081
> URL: https://issues.apache.org/jira/browse/IMPALA-9081
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Thomas Tauber-Marshall
>Assignee: Tim Armstrong
>Priority: Critical
>  Labels: broken-build, flaky
>
> {noformat}
> Section PLAN of query:
> insert into functional_parquet.alltypes partition(year,month)
> select * from functional_parquet.alltypessmall
> Actual does not match expected result:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> |  Per-Host Resources: mem-estimate=1.01GB mem-reservation=12.09MB 
> thread-reservation=1
> WRITE TO HDFS [functional_parquet.alltypes, OVERWRITE=false, 
> PARTITION-KEYS=(year,month)]
> |  partitions=unavailable
> ^
> |  output exprs: id, bool_col, tinyint_col, smallint_col, int_col, 
> bigint_col, float_col, double_col, date_string_col, string_col, 
> timestamp_col, year, month
> |  mem-estimate=1.00GB mem-reservation=0B thread-reservation=0
> |
> 01:SORT
> |  order by: year ASC NULLS LAST, month ASC NULLS LAST
> |  mem-estimate=12.00MB mem-reservation=12.00MB spill-buffer=2.00MB 
> thread-reservation=0
> |  tuple-ids=2 row-size=80B cardinality=unavailable
> |  in pipelines: 01(GETNEXT), 00(OPEN)
> |
> 00:SCAN HDFS [functional_parquet.alltypessmall]
>HDFS partitions=4/4 files=4 size=14.76KB
>stored statistics:
>  table: rows=unavailable size=unavailable
>  partitions: 0/4 rows=unavailable
>  columns: unavailable
>extrapolated-rows=disabled max-scan-range-rows=unavailable
>mem-estimate=16.00MB mem-reservation=88.00KB thread-reservation=0
>tuple-ids=0 row-size=80B cardinality=unavailable
>in pipelines: 00(GETNEXT)
> Expected:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> |  Per-Host Resources: mem-estimate=1.01GB mem-reservation=12.09MB 
> thread-reservation=1
> WRITE TO HDFS [functional_parquet.alltypes, OVERWRITE=false, 
> PARTITION-KEYS=(year,month)]
> |  partitions=4
> |  output exprs: id, bool_col, tinyint_col, smallint_col, int_col, 
> bigint_col, float_col, double_col, date_string_col, string_col, 
> timestamp_col, year, month
> |  mem-estimate=1.00GB mem-reservation=0B thread-reservation=0
> |
> 01:SORT
> |  order by: year ASC NULLS LAST, month ASC NULLS LAST
> |  mem-estimate=12.00MB mem-reservation=12.00MB spill-buffer=2.00MB 
> thread-reservation=0
> |  tuple-ids=2 row-size=80B cardinality=unavailable
> |  in pipelines: 01(GETNEXT), 00(OPEN)
> |
> 00:SCAN HDFS [functional_parquet.alltypessmall]
>HDFS partitions=4/4 files=4 size=14.51KB
>stored statistics:
>  table: rows=unavailable size=unavailable
>  partitions: 0/4 rows=unavailable
>  columns missing stats: id, bool_col, tinyint_col, smallint_col, int_col, 
> bigint_col, float_col, double_col, date_string_col, string_col, timestamp_col
>extrapolated-rows=disabled max-scan-range-rows=unavailable
>mem-estimate=16.00MB mem-reservation=88.00KB thread-reservation=0
>tuple-ids=0 row-size=80B cardinality=unavailable
>in pipelines: 00(GETNEXT)
> Verbose plan:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> |  Per-Host Resources: mem-estimate=1.01GB mem-reservation=12.09MB 
> thread-reservation=1
> WRITE TO HDFS [functional_parquet.alltypes, OVERWRITE=false, 
> PARTITION-KEYS=(year,month)]
> |  partitions=unavailable
> |  output exprs: id, bool_col, tinyint_col, smallint_col, int_col, 
> bigint_col, float_col, double_col, date_string_col, string_col, 
> timestamp_col, year, month
> |  mem-estimate=1.00GB mem-reservation=0B thread-reservation=0
> |
> 01:SORT
> |  order by: year ASC NULLS LAST, month ASC NULLS LAST
> |  mem-estimate=12.00MB mem-reservation=12.00MB spill-buffer=2.00MB 
> thread-reservation=0
> |  tuple-ids=2 row-size=80B cardinality=unavailable
> |  in pipelines: 01(GETNEXT), 00(OPEN)
> |
> 00:SCAN HDFS [functional_parquet.alltypessmall]
>HDFS partitions=4/4 files=4 size=14.76KB
>stored statistics:
>  table: rows=unavailable size=unavailable
>  partitions: 0/4 rows=unavailable
>  columns: unavailable
>extrapolated-rows=disabled max-scan-range-rows=unavailable
>mem-estimate=16.00MB mem-reservation=88.00KB thread-reservation=0
>tuple-ids=0 row-size=80B cardinality=unavailable
>in pipelines: 00(GETNEXT)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: 

[jira] [Assigned] (IMPALA-8916) Fix auto-refresh/manual refresh interaction on webui

2019-10-25 Thread Tim Armstrong (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong reassigned IMPALA-8916:
-

Assignee: (was: Tim Armstrong)

> Fix auto-refresh/manual refresh interaction on webui
> 
>
> Key: IMPALA-8916
> URL: https://issues.apache.org/jira/browse/IMPALA-8916
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.4.0
>Reporter: Thomas Tauber-Marshall
>Priority: Major
>
> While clicking around the webui making sure everything works for the Knox 
> integration, I discovered a bug: manually refreshing a page while 
> auto-refresh is turned off leaves the auto-refresh checkbox unchecked but 
> turns auto-refresh back on, because we don't check the value of the checkbox 
> on page load and always start with auto-refresh enabled.
> When this happens, it is actually very difficult to turn auto-refresh back 
> off: checking and then unchecking the box starts and then stops a new 
> refresh interval while leaving the interval that was started on page load 
> running. The workaround is to check the box, manually refresh the page 
> again, and then uncheck it (or navigate away from the page and then go 
> back).
> The fix is probably to check the value of the checkbox on page load and 
> disable auto-refresh if it's unchecked.
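The proposed fix can be sketched as a tiny state model (the real code is the webui's JavaScript; the names here are hypothetical and Python is used only for brevity):

```python
class Page:
    """Minimal model of the webui refresh logic after the proposed fix:
    on page load, honor the checkbox instead of always starting a timer."""
    def __init__(self, checkbox_checked):
        self.checkbox_checked = checkbox_checked
        self.intervals = 0            # number of running refresh timers
        if self.checkbox_checked:     # the fix: consult the checkbox on load
            self.intervals += 1

    def toggle(self):
        self.checkbox_checked = not self.checkbox_checked
        self.intervals += 1 if self.checkbox_checked else -1

# Buggy behavior started a timer on every load, so reloading with the box
# unchecked and then toggling it on/off left a stray timer. With the fix:
page = Page(checkbox_checked=False)   # reload with auto-refresh off
page.toggle(); page.toggle()          # check then uncheck the box
print(page.intervals)  # 0 -- no stray refresh timer left running
```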



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-9081) testMtDopValidationWithHDFSNumRowsEstDisabled appears flaky

2019-10-25 Thread Tim Armstrong (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong updated IMPALA-9081:
--
Priority: Critical  (was: Blocker)

> testMtDopValidationWithHDFSNumRowsEstDisabled appears flaky
> ---
>
> Key: IMPALA-9081
> URL: https://issues.apache.org/jira/browse/IMPALA-9081
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Thomas Tauber-Marshall
>Assignee: Tim Armstrong
>Priority: Critical
>  Labels: broken-build, flaky
>
> {noformat}
> Section PLAN of query:
> insert into functional_parquet.alltypes partition(year,month)
> select * from functional_parquet.alltypessmall
> Actual does not match expected result:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> |  Per-Host Resources: mem-estimate=1.01GB mem-reservation=12.09MB 
> thread-reservation=1
> WRITE TO HDFS [functional_parquet.alltypes, OVERWRITE=false, 
> PARTITION-KEYS=(year,month)]
> |  partitions=unavailable
> ^
> |  output exprs: id, bool_col, tinyint_col, smallint_col, int_col, 
> bigint_col, float_col, double_col, date_string_col, string_col, 
> timestamp_col, year, month
> |  mem-estimate=1.00GB mem-reservation=0B thread-reservation=0
> |
> 01:SORT
> |  order by: year ASC NULLS LAST, month ASC NULLS LAST
> |  mem-estimate=12.00MB mem-reservation=12.00MB spill-buffer=2.00MB 
> thread-reservation=0
> |  tuple-ids=2 row-size=80B cardinality=unavailable
> |  in pipelines: 01(GETNEXT), 00(OPEN)
> |
> 00:SCAN HDFS [functional_parquet.alltypessmall]
>HDFS partitions=4/4 files=4 size=14.76KB
>stored statistics:
>  table: rows=unavailable size=unavailable
>  partitions: 0/4 rows=unavailable
>  columns: unavailable
>extrapolated-rows=disabled max-scan-range-rows=unavailable
>mem-estimate=16.00MB mem-reservation=88.00KB thread-reservation=0
>tuple-ids=0 row-size=80B cardinality=unavailable
>in pipelines: 00(GETNEXT)
> Expected:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> |  Per-Host Resources: mem-estimate=1.01GB mem-reservation=12.09MB 
> thread-reservation=1
> WRITE TO HDFS [functional_parquet.alltypes, OVERWRITE=false, 
> PARTITION-KEYS=(year,month)]
> |  partitions=4
> |  output exprs: id, bool_col, tinyint_col, smallint_col, int_col, 
> bigint_col, float_col, double_col, date_string_col, string_col, 
> timestamp_col, year, month
> |  mem-estimate=1.00GB mem-reservation=0B thread-reservation=0
> |
> 01:SORT
> |  order by: year ASC NULLS LAST, month ASC NULLS LAST
> |  mem-estimate=12.00MB mem-reservation=12.00MB spill-buffer=2.00MB 
> thread-reservation=0
> |  tuple-ids=2 row-size=80B cardinality=unavailable
> |  in pipelines: 01(GETNEXT), 00(OPEN)
> |
> 00:SCAN HDFS [functional_parquet.alltypessmall]
>HDFS partitions=4/4 files=4 size=14.51KB
>stored statistics:
>  table: rows=unavailable size=unavailable
>  partitions: 0/4 rows=unavailable
>  columns missing stats: id, bool_col, tinyint_col, smallint_col, int_col, 
> bigint_col, float_col, double_col, date_string_col, string_col, timestamp_col
>extrapolated-rows=disabled max-scan-range-rows=unavailable
>mem-estimate=16.00MB mem-reservation=88.00KB thread-reservation=0
>tuple-ids=0 row-size=80B cardinality=unavailable
>in pipelines: 00(GETNEXT)
> Verbose plan:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> |  Per-Host Resources: mem-estimate=1.01GB mem-reservation=12.09MB 
> thread-reservation=1
> WRITE TO HDFS [functional_parquet.alltypes, OVERWRITE=false, 
> PARTITION-KEYS=(year,month)]
> |  partitions=unavailable
> |  output exprs: id, bool_col, tinyint_col, smallint_col, int_col, 
> bigint_col, float_col, double_col, date_string_col, string_col, 
> timestamp_col, year, month
> |  mem-estimate=1.00GB mem-reservation=0B thread-reservation=0
> |
> 01:SORT
> |  order by: year ASC NULLS LAST, month ASC NULLS LAST
> |  mem-estimate=12.00MB mem-reservation=12.00MB spill-buffer=2.00MB 
> thread-reservation=0
> |  tuple-ids=2 row-size=80B cardinality=unavailable
> |  in pipelines: 01(GETNEXT), 00(OPEN)
> |
> 00:SCAN HDFS [functional_parquet.alltypessmall]
>HDFS partitions=4/4 files=4 size=14.76KB
>stored statistics:
>  table: rows=unavailable size=unavailable
>  partitions: 0/4 rows=unavailable
>  columns: unavailable
>extrapolated-rows=disabled max-scan-range-rows=unavailable
>mem-estimate=16.00MB mem-reservation=88.00KB thread-reservation=0
>tuple-ids=0 row-size=80B cardinality=unavailable
>in pipelines: 00(GETNEXT)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, 

[jira] [Work started] (IMPALA-4741) ORDER BY behavior with UNION is incorrect

2019-10-25 Thread Norbert Luksa (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-4741 started by Norbert Luksa.
-
> ORDER BY behavior with UNION is incorrect
> -
>
> Key: IMPALA-4741
> URL: https://issues.apache.org/jira/browse/IMPALA-4741
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 2.8.0
>Reporter: Greg Rahn
>Assignee: Norbert Luksa
>Priority: Critical
>  Labels: correctness, incompatibility, ramp-up, sql-language, 
> tpc-ds
> Attachments: query36a.sql, query49.sql
>
>
> When a query uses the UNION, EXCEPT, or INTERSECT operators, the ORDER BY 
> clause must be specified at the end of the statement and the results of the 
> combined queries are sorted.  ORDER BY clauses are not allowed in individual 
> branches unless the branch is enclosed by parentheses.
> There are two bugs currently:
> # An ORDER BY is allowed in a branch of a UNION that is not enclosed in 
> parentheses
> # The final ORDER BY of a UNION is attached to the nearest branch when it 
> should be sorting the combined results of the UNION(s)
> For example, this is not valid syntax but is allowed in Impala
> {code}
> select * from t1 order by 1
> union all
> select * from t2
> {code}
> And for queries like this, the ORDER BY should order the unioned result, not 
> just the nearest branch, which is the current behavior.
> {code}
> select * from t1
> union all
> select * from t2
> order by 1
> {code}
> If one wants ordering within a branch, the query block must be enclosed in 
> parentheses, like so:
> {code}
> (select * from t1 order by 1)
> union all
> (select * from t2 order by 2)
> {code}
> Here is an example where incorrect results are returned.
> Impala
> {code}
> [impalad:21000] > select r_regionkey, r_name from region union all select 
> r_regionkey, r_name from region order by 1 limit 2;
> +-+-+
> | r_regionkey | r_name  |
> +-+-+
> | 0   | AFRICA  |
> | 1   | AMERICA |
> | 2   | ASIA|
> | 3   | EUROPE  |
> | 4   | MIDDLE EAST |
> | 0   | AFRICA  |
> | 1   | AMERICA |
> +-+-+
> Fetched 7 row(s) in 0.12s
> {code}
> PostgreSQL
> {code}
> tpch=# select r_regionkey, r_name from region union all select r_regionkey, 
> r_name from region order by 1 limit 2;
>  r_regionkey |  r_name
> -+---
>0 | AFRICA
>0 | AFRICA
> (2 rows) 
> {code}
> see also https://cloud.google.com/spanner/docs/query-syntax#syntax_5
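The correct semantics (sort the combined rows, then apply the LIMIT) can be mimicked on plain Python lists, using the region table from the example above:

```python
region = [(0, "AFRICA"), (1, "AMERICA"), (2, "ASIA"), (3, "EUROPE"),
          (4, "MIDDLE EAST")]

# UNION ALL ... ORDER BY 1 LIMIT 2: the ORDER BY applies to the combined
# result, as in PostgreSQL, not just to the nearest branch.
combined = region + region                         # union all of both selects
result = sorted(combined, key=lambda row: row[0])[:2]
print(result)  # [(0, 'AFRICA'), (0, 'AFRICA')]
```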



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-9045) Filter base directories of open/aborted compactions

2019-10-25 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IMPALA-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltán Borók-Nagy reassigned IMPALA-9045:
-

Assignee: Zoltán Borók-Nagy

> Filter base directories of open/aborted compactions
> ---
>
> Key: IMPALA-9045
> URL: https://issues.apache.org/jira/browse/IMPALA-9045
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 3.3.0
>Reporter: Csaba Ringhofer
>Assignee: Zoltán Borók-Nagy
>Priority: Critical
>  Labels: impala-acid
>
> A major compaction creates a directory named base_writeid_visibilityTxnId, 
> which indicates that it contains all deltas and bases <= writeId and that 
> the compaction's transaction is visibilityTxnId. The visibilityTxnId is 
> needed to check whether the compaction is open/aborted/committed, and base 
> directories belonging to open/aborted compactions should be ignored.
> Currently Impala only checks the writeId, so the base of an open/aborted 
> compaction will be used, and base/delta directories with smaller writeIds 
> will be ignored, leading to potential data loss.
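The missing filtering can be sketched as follows (a hypothetical helper; directory names follow the base_writeid_visibilityTxnId convention described above, with the visibility transaction encoded as a "_v<txnId>" suffix):

```python
def pick_valid_base(dirs, committed_txns):
    """Choose the newest base directory whose compaction transaction is
    committed; ignore bases from open/aborted compactions."""
    best = None
    for name in dirs:
        parts = name.split("_")            # e.g. "base_0000007_v900"
        if parts[0] != "base":
            continue
        write_id = int(parts[1])
        # A base without a visibility suffix predates this scheme: accept it.
        txn = int(parts[2][1:]) if len(parts) > 2 else None
        if txn is not None and txn not in committed_txns:
            continue                       # open/aborted compaction: skip
        if best is None or write_id > best[0]:
            best = (write_id, name)
    return best[1] if best else None

dirs = ["base_0000003", "base_0000007_v900", "delta_0000004_0000004"]
print(pick_valid_base(dirs, committed_txns={100}))   # base_0000003 (txn 900 not committed)
print(pick_valid_base(dirs, committed_txns={900}))   # base_0000007_v900
```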



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org