[jira] [Created] (IMPALA-7511) translate() string function only works with ascii characters.

2018-08-30 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-7511:
--

 Summary: translate() string function only works with ascii 
characters.
 Key: IMPALA-7511
 URL: https://issues.apache.org/jira/browse/IMPALA-7511
 Project: IMPALA
  Issue Type: Task
Reporter: Andrew Sherman


The query

SELECT translate('Gánémílózú', 'áéíóú', 'aeiou') ;

returns "Gaenaomalaza" instead of the desired  "Ganemilozu".

It seems that translate() treats strings as 8-bit characters, so it only works 
with ASCII.
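
For illustration, here is a minimal Python sketch (not Impala source) of a 
byte-oriented translate that reproduces the reported output: the accented 
vowels are two bytes each in UTF-8, so the mapping is built over bytes rather 
than code points.

{code}
# Byte-oriented translate sketch; "first occurrence wins" and "unmatched source
# bytes are deleted" are assumptions chosen to match the reported output.
src = 'Gánémílózú'.encode('utf-8')
frm = 'áéíóú'.encode('utf-8')   # 10 bytes, not 5 characters
to = b'aeiou'                   # 5 bytes

table = {}
for i, b in enumerate(frm):
    if b not in table:
        table[b] = to[i:i + 1] if i < len(to) else b''  # extra bytes get dropped

out = b''.join(table.get(b, bytes([b])) for b in src)
print(out.decode('ascii'))      # -> Gaenaomalaza, not the desired Ganemilozu
{code}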



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-7511) translate() string function only works with ascii characters.

2018-08-31 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-7511.

Resolution: Duplicate

All Impala string functions assume that strings are simple ASCII.

> translate() string function only works with ascii characters.
> -
>
> Key: IMPALA-7511
> URL: https://issues.apache.org/jira/browse/IMPALA-7511
> Project: IMPALA
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> The query
> SELECT translate('Gánémílózú', 'áéíóú', 'aeiou') ;
> returns "Gaenaomalaza" instead of the desired  "Ganemilozu".
> It seems that translate() treats strings as 8-bit characters, so it only works 
> with ASCII.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-6568) The profile of all statements should contain the Query Compilation timeline

2018-09-14 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-6568.

Resolution: Fixed

> The profile of all statements should contain the Query Compilation timeline
> ---
>
> Key: IMPALA-6568
> URL: https://issues.apache.org/jira/browse/IMPALA-6568
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 2.9.0, Impala 2.10.0, Impala 2.11.0
>Reporter: Alexander Behm
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: supportability
>
> Some statements do not seem to include the "Query Compilation" timeline in 
> the query profile.
> Repro:
> {code}
> create table t (i int);
> describe t; <-- loads the table, but no FE timeline in profile
> invalidate metadata t;
> alter table t set tbproperties('numRows'='10'); <-- loads the table, but no 
> FE timeline in profile
> {code}
> All statements should include the planner timeline.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-7579) TestObservability.test_query_profile_contains_all_events fails for S3 tests

2018-09-20 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-7579.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

Test now passes in S3

> TestObservability.test_query_profile_contains_all_events fails for S3 tests
> ---
>
> Key: IMPALA-7579
> URL: https://issues.apache.org/jira/browse/IMPALA-7579
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.1.0
>Reporter: Vuk Ercegovac
>Assignee: Andrew Sherman
>Priority: Blocker
> Fix For: Impala 3.1.0
>
>
> For S3 tests, the test introduced in [https://gerrit.cloudera.org/#/c/11387/] 
> fails with:
> {noformat}
> query_test/test_observability.py:225: in 
> test_query_profile_contains_all_events
> self.hdfs_client.delete_file_dir(path)
> util/hdfs_util.py:90: in delete_file_dir
> if not self.exists(path):
> util/hdfs_util.py:138: in exists
> self.get_file_dir_status(path)
> util/hdfs_util.py:102: in get_file_dir_status
> return super(PyWebHdfsClientWithChmod, self).get_file_dir_status(path)
> /data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/infra/python/env/lib/python2.7/site-packages/pywebhdfs/webhdfs.py:335:
>  in get_file_dir_status
> response = requests.get(uri, allow_redirects=True)
> /data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/infra/python/env/lib/python2.7/site-packages/requests/api.py:69:
>  in get
> return request('get', url, params=params, **kwargs)
> /data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/infra/python/env/lib/python2.7/site-packages/requests/api.py:50:
>  in request
> response = session.request(method=method, url=url, **kwargs)
> /data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/infra/python/env/lib/python2.7/site-packages/requests/sessions.py:465:
>  in request
> resp = self.send(prep, **send_kwargs)
> /data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/infra/python/env/lib/python2.7/site-packages/requests/sessions.py:573:
>  in send
> r = adapter.send(request, **kwargs)
> /data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/infra/python/env/lib/python2.7/site-packages/requests/adapters.py:415:
>  in send
> raise ConnectionError(err, request=request)
> E   ConnectionError: ('Connection aborted.', error(111, 'Connection 
> refused')){noformat}
> The dir delete might want to be guarded by an "if exists". The failure cases 
> may differ between hdfs and s3, which is probably what this test ran into.
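
A minimal sketch (Python, using the helper names from the trace above; not the 
committed fix) of the "if exists" guard suggested in the description:

{code}
# Only delete the directory when it is present, so a missing path on S3 does
# not surface as a confusing connection error during cleanup.
def cleanup_test_dir(hdfs_client, path):
    if hdfs_client.exists(path):
        hdfs_client.delete_file_dir(path)
{code}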



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-7718) [DOCS] document changes to explain output from IMPALA-5821

2018-10-16 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-7718:
--

 Summary: [DOCS] document changes to explain output from IMPALA-5821
 Key: IMPALA-7718
 URL: https://issues.apache.org/jira/browse/IMPALA-7718
 Project: IMPALA
  Issue Type: Sub-task
Reporter: Andrew Sherman
Assignee: Alex Rodoni


When [IMPALA-5821] is complete, it will change the output from 'explain ...'  
when  EXPLAIN_LEVEL is extended or higher. The header will now contain a line 
showing the query, with implicit casts as part of the text.  This should 
probably be documented in docs/topics/impala_explain_level.xml 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-7744) In PlannerTest print the name of a .test file if it diffs

2018-10-22 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-7744:
--

 Summary: In PlannerTest print the name of a .test file if it diffs
 Key: IMPALA-7744
 URL: https://issues.apache.org/jira/browse/IMPALA-7744
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


PlannerTest runs queries from .test files and writes a .test file containing 
the output under $IMPALA_FE_TEST_LOGS_DIR.  It is sometimes hard to tell which 
.test file is causing problems. If the test output is causing a test failure 
due to a diff, then print the name of the culprit .test file.
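
A rough illustration of the idea (generic Python, not the Java PlannerTest 
harness; the helper name is hypothetical):

{code}
# Carry the originating .test file name into the failure message so a diff can
# be traced back to its source file.
def assert_planner_output(test_file_name, expected, actual):
    assert expected == actual, (
        "Planner output differs from expected results; see %s" % test_file_name)
{code}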

Separated out from IMPALA-5821

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-7801) Remove toSql() from ParseNode interface

2018-11-01 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-7801:
--

 Summary: Remove toSql() from ParseNode interface
 Key: IMPALA-7801
 URL: https://issues.apache.org/jira/browse/IMPALA-7801
 Project: IMPALA
  Issue Type: Improvement
Reporter: Andrew Sherman
Assignee: Andrew Sherman


In IMPALA-5821 the method "toSql(ToSqlOptions)" was added to ParseNode, to 
allow options to be passed when generating SQL from a parse tree. Now that this 
method is available, remove the old "toSql()" method and have all callers call 
the new method instead.
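
A rough Python analogue of the intended end state (the real interface is Java's 
ParseNode; the names here are illustrative):

{code}
class ToSqlOptions:
    DEFAULT = None   # placeholder for the real options value

class ParseNode:
    # A single toSql that always takes options (with a default) replaces the
    # old zero-argument toSql(), so all callers migrate to one entry point.
    def to_sql(self, options=ToSqlOptions.DEFAULT):
        raise NotImplementedError
{code}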



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-5821) Distinguish numeric types and show implicit cast in EXTENDED explain plans

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-5821.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> Distinguish numeric types and show implicit cast in EXTENDED explain plans
> --
>
> Key: IMPALA-5821
> URL: https://issues.apache.org/jira/browse/IMPALA-5821
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 2.8.0
>Reporter: Matthew Jacobs
>Assignee: Andrew Sherman
>Priority: Minor
>  Labels: supportability, usability
> Fix For: Impala 3.1.0
>
>
> In this plan, it wasn't clear that the constant in the predicate was being 
> evaluated to a double. Then the lhs required an implicit cast, and the 
> predicate couldn't be pushed to Kudu:
> {code}
> [localhost:21000] > explain select * from functional_kudu.alltypestiny where 
> bigint_col < 1000 / 100;
> Query: explain select * from functional_kudu.alltypestiny where bigint_col < 
> 1000 / 100
> +-+
> | Explain String  |
> +-+
> | Per-Host Resource Reservation: Memory=0B|
> | Per-Host Resource Estimates: Memory=10.00MB |
> | Codegen disabled by planner |
> | |
> | PLAN-ROOT SINK  |
> | |   |
> | 00:SCAN KUDU [functional_kudu.alltypestiny] |
> |predicates: bigint_col < 10  |
> +-+
> {code}
> We should make it more clear by printing it in a way that makes it clear that 
> it's being interpreted as a DOUBLE, e.g. by wrapping in a cast.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-7801) Remove toSql() from ParseNode interface

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-7801.

Resolution: Won't Fix

> Remove toSql() from ParseNode interface
> ---
>
> Key: IMPALA-7801
> URL: https://issues.apache.org/jira/browse/IMPALA-7801
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Minor
>
> In IMPALA-5821 the method "toSql(ToSqlOptions)" was added to ParseNode, to 
> allow options to be passed when generating SQL from a parse tree. Now that 
> this method is available, remove the old "toSql()" method and have all 
> callers call the new method instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-6658) Parquet RLE encoding can waste space with small repeated runs

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-6658.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> Parquet RLE encoding can waste space with small repeated runs
> -
>
> Key: IMPALA-6658
> URL: https://issues.apache.org/jira/browse/IMPALA-6658
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: Csaba Ringhofer
>Assignee: Andrew Sherman
>Priority: Minor
>  Labels: parquet, ramp-up
> Fix For: Impala 3.1.0
>
>
> Currently RleEncoder creates repeated runs from 8 repeated values, which can 
> be less space efficient than bit-packed if bit width is 1 or 2. In the worst 
> case, the whole data page can be ~2X larger if bit width is 1, and ~1.25X 
> larger if bit is 2 compared to bit-packing.
> A comment in rle_encoding.h writes different numbers, but it probably does 
> not calculate with the overhead of splitting long runs into smaller ones 
> (every run adds +1 byte for its length): 
> [https://github.com/apache/impala/blob/8079cd9d2a87051f81a41910b74fab15e35f36ea/be/src/util/rle-encoding.h#L62]
> Note that if the data page is compressed, this size difference probably 
> disappears, but the larger uncompressed buffer size can still affect 
> performance.
> Parquet RLE encoding is described here: 
> [https://github.com/apache/parquet-format/blob/master/Encodings.md#run-length-encoding-bit-packing-hybrid-rle-3]
>  
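
A back-of-the-envelope check (Python) of the ratios quoted above, assuming a 
worst-case input that alternates 8 identical values (emitted as a repeated run: 
1 header byte + 1 value byte) with 8 mixed values (emitted as a one-group 
bit-packed run: 1 header byte + bit_width bytes of data); pure bit-packing of 
the same 16 values costs 2 * bit_width bytes, ignoring its amortized header. 
The real encoder may differ in detail.

{code}
for bit_width in (1, 2):
    rle_hybrid = (1 + 1) + (1 + bit_width)    # repeated run + tiny bit-packed run
    pure_bitpacked = 16 * bit_width // 8      # one long bit-packed run
    print(bit_width, rle_hybrid / pure_bitpacked)   # -> 2.0 and 1.25
{code}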



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-7744) In PlannerTest print the name of a .test file if it diffs

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-7744.

Resolution: Won't Fix

> In PlannerTest print the name of a .test file if it diffs
> -
>
> Key: IMPALA-7744
> URL: https://issues.apache.org/jira/browse/IMPALA-7744
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Minor
>
> PlannerTest runs queries from .test files and writes a .test file containing 
> the output under $IMPALA_FE_TEST_LOGS_DIR.  It is sometimes hard to tell 
> which .test file is causing problems. If the test output is causing a test 
> failure due to a diff, then print the name of the culprit .test file.
> Separated out from IMPALA-5821
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8047) Add support for the .proto file extension to .clang-format

2019-01-04 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8047:
--

 Summary: Add support  for the .proto file extension to 
.clang-format
 Key: IMPALA-8047
 URL: https://issues.apache.org/jira/browse/IMPALA-8047
 Project: IMPALA
  Issue Type: Improvement
Reporter: Andrew Sherman


The .proto file extension is used for the Google Protocol Buffers language. 
Impala uses this language to specify the format of messages used by KRPC. Add 
support for this language to .clang-format so that we can have consistent 
formatting. 

The proposed support is:

{{Language: Proto
BasedOnStyle: Google
ColumnLimit: 90}}

This produces only a few diffs when run against the existing Impala code. I’m 
not proposing to make any changes to .proto files, this is just to show what 
clang-format will do. Apart from wrapping comments and code at 90 chars, the 
diffs are mostly of the form

{{-syntax="proto2";
+syntax = "proto2";}}

{{-  message Certificate {};
+  message Certificate {
+  };}}

{{-  optional bool client_timeout_defined = 4 [ default = false ];
+  optional bool client_timeout_defined = 4 [default = false];}}

{{-    UNKNOWN    = 999;
-    NEGOTIATE  = 1;
-    SASL_SUCCESS   = 0;
-    SASL_INITIATE  = 2;
+    UNKNOWN = 999;
+    NEGOTIATE = 1;
+    SASL_SUCCESS = 0;
+    SASL_INITIATE = 2;}}

This last change can be configured using “AlignConsecutiveAssignments: true” 
but that creates a different set of diffs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-7183) We should print the sender name when logging a report for an unknown status report on the coordinator

2019-01-09 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-7183.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> We should print the sender name when logging a report for an unknown status 
> report on the coordinator
> -
>
> Key: IMPALA-7183
> URL: https://issues.apache.org/jira/browse/IMPALA-7183
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend, Distributed Exec
>Affects Versions: Impala 2.13.0, Impala 3.1.0
>Reporter: Lars Volker
>Assignee: Andrew Sherman
>Priority: Critical
>  Labels: ramp-up
> Fix For: Impala 3.2.0
>
>
> We should print the sender name when logging a report for an unknown status 
> report on the coordinatior in 
> [impala-server.cc:1229|https://github.com/apache/impala/blob/e7d5a25a4516337ef651983b1d945abf06c3a831/be/src/service/impala-server.cc#L1229].
> That will help identify backends with stuck fragment instances who fail to 
> get cancelled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8047) Add support for the .proto file extension to .clang-format

2019-01-09 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8047.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Add support  for the .proto file extension to .clang-format
> ---
>
> Key: IMPALA-8047
> URL: https://issues.apache.org/jira/browse/IMPALA-8047
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: Impala 3.2.0
>
>
> The .proto file extension is used for the Google Protocol Buffers language. 
> Impala uses this language to specify the format of messages used by KRPC. Add 
> support for this language to .clang-format so that we can have consistent 
> formatting. 
> The proposed support is:
> {{Language: Proto
> BasedOnStyle: Google
> ColumnLimit: 90}}
> This produces only a few diffs when run against the existing Impala code. I’m 
> not proposing to make any changes to .proto files, this is just to show what 
> clang-format will do. Apart from wrapping comments and code at 90 chars, the 
> diffs are mostly of the form
> {{-syntax="proto2";
> +syntax = "proto2";}}
> {{-  message Certificate {};
> +  message Certificate {
> +  };}}
> {{-  optional bool client_timeout_defined = 4 [ default = false ];
> +  optional bool client_timeout_defined = 4 [default = false];}}
> {{-    UNKNOWN    = 999;
> -    NEGOTIATE  = 1;
> -    SASL_SUCCESS   = 0;
> -    SASL_INITIATE  = 2;
> +    UNKNOWN = 999;
> +    NEGOTIATE = 1;
> +    SASL_SUCCESS = 0;
> +    SASL_INITIATE = 2;}}
> This last change can be configured using “AlignConsecutiveAssignments: true” 
> but that creates a different set of diffs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-7468) Port CancelQueryFInstances() to KRPC

2019-01-09 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-7468.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Port CancelQueryFInstances() to KRPC
> 
>
> Key: IMPALA-7468
> URL: https://issues.apache.org/jira/browse/IMPALA-7468
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Distributed Exec
>Affects Versions: Impala 3.1.0
>Reporter: Michael Ho
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: ramp-up
> Fix For: Impala 3.2.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8098) [Docs] Document incompatible changes to :shutdown command

2019-01-22 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8098:
--

 Summary: [Docs] Document incompatible changes to :shutdown command
 Key: IMPALA-8098
 URL: https://issues.apache.org/jira/browse/IMPALA-8098
 Project: IMPALA
  Issue Type: Task
Reporter: Andrew Sherman
Assignee: Alex Rodoni


The :shutdown command is used to shut down a remote server. The common case is 
that a user specifies the impalad to shut down by specifying a host, e.g. 
:shutdown('host100'). If a user has more than one impalad on a remote host then 
the form :shutdown('<hostname>:<port>') can be used to specify the port by which 
the impalad can be contacted. Prior to IMPALA-7985 this port was the backend 
port, e.g. :shutdown('host100:22000'). With IMPALA-7985 the port to use is the 
KRPC port, e.g. :shutdown('host100:27000').



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8143) Add features to DoRpcWithRetry()

2019-01-29 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8143:
--

 Summary: Add features to DoRpcWithRetry()
 Key: IMPALA-8143
 URL: https://issues.apache.org/jira/browse/IMPALA-8143
 Project: IMPALA
  Issue Type: Task
Reporter: Andrew Sherman
Assignee: Andrew Sherman


DoRpcWithRetry() is a templated utility function, currently in 
control-service.h, that is used to retry synchronous KRPC calls. It makes a call 
to a KRPC function that is passed in as a lambda. It sets the KRPC timeout to 
the ‘krpc_timeout‘ parameter and calls the KRPC function a number of times 
controlled by the ‘times_to_try’ parameter.

Possible improvements:
 * Move code to rpc-mgr.inline.h
 * Add a configurable sleep if RpcMgr::IsServerTooBusy() says the remote 
server’s queue is full.
 * Make QueryState::ReportExecStatus() use DoRpcWithRetry()
 * Consider if asynchronous code like that in KrpcDataStreamSender::Channel  
can also use DoRpcWithRetry()
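
A rough Python sketch of the retry shape described above (the real 
DoRpcWithRetry() is a C++ template; the sleep on a busy server is one of the 
proposed improvements, not current behaviour, and the parameter names here are 
illustrative):

{code}
import time

def do_rpc_with_retry(rpc_call, times_to_try, krpc_timeout_s,
                      is_server_too_busy=lambda status: False,
                      busy_sleep_s=0.1):
    # rpc_call stands in for the lambda wrapping the synchronous KRPC call.
    status = None
    for _ in range(times_to_try):
        status = rpc_call(timeout_s=krpc_timeout_s)
        if status.ok():
            return status
        if is_server_too_busy(status):
            time.sleep(busy_sleep_s)  # proposed: back off when the remote queue is full
    return status
{code}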



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8103) Plan hints show up as "--" comments in analysed query

2019-02-06 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8103.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Plan hints show up as "--" comments in analysed query
> -
>
> Key: IMPALA-8103
> URL: https://issues.apache.org/jira/browse/IMPALA-8103
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Tim Armstrong
>Assignee: Andrew Sherman
>Priority: Minor
> Fix For: Impala 3.2.0
>
>
> I noticed that the hints added in IMPALA-5821 show up in the -- style rather 
> than /**/
> {code}
> Sql Statement: select * from tpch.lineitem join /*+ broadcast */ 
> tpch.part on l_partkey = p_partkey limit 
> ...
> Analyzed query: SELECT * FROM tpch.lineitem INNER JOIN
> -- +broadcast
> tpch.part ON l_partkey = p_partkey LIMIT CAST(5 AS TINYINT)
> {code}
> I guess this works and maybe its fine, but I was really confused when I saw 
> it. It looks like getPlanHintsSql() uses this to generate views in such a way 
> that Hive will ignore the hints, but that concern doesn't seem relevant to 
> this use case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-7657) Proper codegen for TupleIsNullPredicate, IsNotEmptyPredicate and ValidTupleId

2019-02-07 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-7657.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Proper codegen for TupleIsNullPredicate, IsNotEmptyPredicate and ValidTupleId
> -
>
> Key: IMPALA-7657
> URL: https://issues.apache.org/jira/browse/IMPALA-7657
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: Tim Armstrong
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: codegen, performance
> Fix For: Impala 3.2.0
>
>
> These utility functions use GetCodegendComputeFnWrapper() to call the 
> interpreted path but instead we could codegen them into efficient code. We 
> could either use IRBuilder or, if possible, cross-compile the implementation 
> and substitute in constants.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8178) Tests failing with “Memory is likely oversubscribed” on EC filesystem

2019-02-08 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8178:
--

 Summary: Tests failing with “Memory is likely oversubscribed” on 
EC filesystem
 Key: IMPALA-8178
 URL: https://issues.apache.org/jira/browse/IMPALA-8178
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


In tests run against an Erasure Coding filesystem, multiple tests failed with 
memory allocation errors.

In total 10 tests failed:
 * query_test.test_scanners.TestParquet.test_decimal_encodings
 * query_test.test_scanners.TestTpchScanRangeLengths.test_tpch_scan_ranges
 * query_test.test_exprs.TestExprs.test_exprs [enable_expr_rewrites: 0]
 * query_test.test_exprs.TestExprs.test_exprs [enable_expr_rewrites: 1]
 * query_test.test_hbase_queries.TestHBaseQueries.test_hbase_scan_node
 * query_test.test_scanners.TestParquet.test_def_levels
 * query_test.test_scanners.TestTextSplitDelimiters.test_text_split_across_buffers_delimiter
 * query_test.test_hbase_queries.TestHBaseQueries.test_hbase_filters
 * query_test.test_hbase_queries.TestHBaseQueries.test_hbase_inline_views
 * query_test.test_hbase_queries.TestHBaseQueries.test_hbase_top_n

The first failure looked like this on the client side:

{quote}
F 
query_test/test_scanners.py::TestParquet::()::test_decimal_encodings[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
 query_test/test_scanners.py:717: in test_decimal_encodings
 self.run_test_case('QueryTest/parquet-decimal-formats', vector, 
unique_database)
 common/impala_test_suite.py:472: in run_test_case
 result = self.__execute_query(target_impalad_client, query, user=user)
 common/impala_test_suite.py:699: in __execute_query
 return impalad_client.execute(query, user=user)
 common/impala_connection.py:174: in execute
 return self.__beeswax_client.execute(sql_stmt, user=user)
 beeswax/impala_beeswax.py:183: in execute
 handle = self.__execute_query(query_string.strip(), user=user)
 beeswax/impala_beeswax.py:360: in __execute_query
 self.wait_for_finished(handle)
 beeswax/impala_beeswax.py:381: in wait_for_finished
 raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
 EQuery aborted:ExecQueryFInstances rpc 
query_id=6e44c3c949a31be2:f973c7ff failed: Failed to get minimum memory 
reservation of 8.00 KB on daemon xxx.com:22001 for query 
6e44c3c949a31be2:f973c7ff due to following error: Memory limit 
exceeded: Could not allocate memory while trying to increase reservation.
 E   Query(6e44c3c949a31be2:f973c7ff) could not allocate 8.00 KB 
without exceeding limit.
 E   Error occurred on backend xxx.com:22001
 E   Memory left in process limit: 1.19 GB
 E   Query(6e44c3c949a31be2:f973c7ff): Reservation=0 
ReservationLimit=9.60 GB OtherMemory=0 Total=0 Peak=0
 E   Memory is likely oversubscribed. Reducing query concurrency or configuring 
admission control may help avoid this error.
{quote}


On the server side log:

{quote}
I0207 18:25:19.329311  5562 impala-server.cc:1063] 
6e44c3c949a31be2:f973c7ff] Registered query 
query_id=6e44c3c949a31be2:f973c7ff 
session_id=93497065f69e9d01:8a3bd06faff3da5
I0207 18:25:19.329434  5562 Frontend.java:1242] 
6e44c3c949a31be2:f973c7ff] Analyzing query: select score from 
decimal_stored_as_int32
I0207 18:25:19.329583  5562 FeSupport.java:285] 
6e44c3c949a31be2:f973c7ff] Requesting prioritized load of table(s): 
test_decimal_encodings_28d99c0e.decimal_stored_as_int32
I0207 18:25:30.776041  5562 Frontend.java:1282] 
6e44c3c949a31be2:f973c7ff] Analysis finished.
I0207 18:25:35.919486 10418 admission-controller.cc:608] 
6e44c3c949a31be2:f973c7ff] Schedule for 
id=6e44c3c949a31be2:f973c7ff in pool_name=default-pool 
per_host_mem_estimate=16.02 MB PoolConfig: max_requests=-1 max_queued=200 
max_mem=-1.00 B
I0207 18:25:35.919528 10418 admission-controller.cc:613] 
6e44c3c949a31be2:f973c7ff] Stats: agg_num_running=2, agg_num_queued=0, 
agg_mem_reserved=24.13 MB,  local_host(local_mem_admitted=1.99 GB, 
num_admitted_running=2, num_queued=0, backend_mem_reserved=8.06 MB)
I0207 18:25:35.919549 10418 admission-controller.cc:645] 
6e44c3c949a31be2:f973c7ff] Admitted query 
id=6e44c3c949a31be2:f973c7ff
I0207 18:25:35.920532 10418 coordinator.cc:93] 
6e44c3c949a31be2:f973c7ff] Exec() 
query_id=6e44c3c949a31be2:f973c7ff stmt=select score from 
decimal_stored_as_int32
I0207 18:25:35.930855 10418 coordinator.cc:359] 
6e44c3c949a31be2:f973c7ff] starting execution on 2 backends for 
query_id=6e44c3c949a31be2:f973c7ff
I0207 18:25:35.938108 21110 impala-internal-service

[jira] [Resolved] (IMPALA-7985) Port RemoteShutdown() to KRPC

2019-02-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-7985.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Port RemoteShutdown() to KRPC
> -
>
> Key: IMPALA-7985
> URL: https://issues.apache.org/jira/browse/IMPALA-7985
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Distributed Exec
>Affects Versions: Impala 3.1.0
>Reporter: Michael Ho
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: ramp-up
> Fix For: Impala 3.2.0
>
>
> Port RemoteShutdown() to KRPC. It's currently implemented as Thrift RPC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-11 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8183:
--

 Summary: TestRPCTimeout.test_reportexecstatus_retry times out
 Key: IMPALA-8183
 URL: https://issues.apache.org/jira/browse/IMPALA-8183
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


There are two forms of failure: one where the test itself times out, and one 
where the whole test run times out, suspiciously just after running 
test_reportexecstatus_retry.

{quote}
Error Message
Failed: Timeout >7200s
Stacktrace
custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
self.execute_query(query, query_options)
common/impala_test_suite.py:601: in wrapper
return function(*args, **kwargs)
common/impala_test_suite.py:632: in execute_query
return self.__execute_query(self.client, query, query_options)
common/impala_test_suite.py:699: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:174: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:183: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:360: in __execute_query
self.wait_for_finished(handle)
beeswax/impala_beeswax.py:384: in wait_for_finished
time.sleep(0.05)
E   Failed: Timeout >7200s
{quote}

{quote}
Test run timed out. This probably happened due to a hung thread which can be 
confirmed by looking at the stacktrace of running impalad processes at 
/data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
{quote}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8189) TestParquet.test_resolution_by_name fails on S3 because 'hadoop fs -cp' fails

2019-02-12 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8189:
--

 Summary: TestParquet.test_resolution_by_name fails on S3 because 
'hadoop fs -cp'  fails
 Key: IMPALA-8189
 URL: https://issues.apache.org/jira/browse/IMPALA-8189
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


In parquet-resolution-by-name.test, two parquet files are copied:
{quote}
 SHELL
hadoop fs -cp 
$FILESYSTEM_PREFIX/test-warehouse/complextypestbl_parquet/nullable.parq \
$FILESYSTEM_PREFIX/test-warehouse/$DATABASE.db/nested_resolution_by_name_test/
hadoop fs -cp 
$FILESYSTEM_PREFIX/test-warehouse/complextypestbl_parquet/nonnullable.parq \
$FILESYSTEM_PREFIX/test-warehouse/$DATABASE.db/nested_resolution_by_name_test/
{quote}

The first copy succeeds, but the second fails. In the DEBUG output (below) you 
can see the copy writing data to an intermediate file 
test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_
 and then after the stream is closed, the copy cannot find the file.

{quote}
19/02/12 05:33:13 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://impala-test-uswest2-1/test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_
  
(test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_)
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_metadata_requests += 1 
 ->  7
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_metadata_requests += 1 
 ->  8
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_list_requests += 1  -> 
 3
19/02/12 05:33:13 DEBUG s3a.S3AFileSystem: Not Found: 
s3a://impala-test-uswest2-1/test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: op_create += 1  ->  1
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: op_get_file_status += 1  ->  6
19/02/12 05:33:13 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://impala-test-uswest2-1/test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_
  
(test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_)
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_metadata_requests += 1 
 ->  9
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_metadata_requests += 1 
 ->  10
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_list_requests += 1  -> 
 4
19/02/12 05:33:13 DEBUG s3a.S3AFileSystem: Not Found: 
s3a://impala-test-uswest2-1/test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_
19/02/12 05:33:13 DEBUG s3a.S3ABlockOutputStream: Initialized 
S3ABlockOutputStream for 
test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_
 output to FileBlock{index=1, 
destFile=/tmp/hadoop-jenkins/s3a/s3ablock-0001-1315190405959387081.tmp, 
state=Writing, dataSize=0, limit=104857600}
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: op_get_file_status += 1  ->  7
19/02/12 05:33:13 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://impala-test-uswest2-1/test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_
  
(test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_)
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_metadata_requests += 1 
 ->  11
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_metadata_requests += 1 
 ->  12
19/02/12 05:33:13 DEBUG s3a.S3AStorageStatistics: object_list_requests += 1  -> 
 5
19/02/12 05:33:13 DEBUG s3a.S3AFileSystem: Not Found: 
s3a://impala-test-uswest2-1/test-warehouse/test_resolution_by_name_daec05d5.db/nested_resolution_by_name_test/nonnullable.parq._COPYING_
19/02/12 05:33:13 DEBUG s3a.S3AInputStream: 
reopen(s3a://impala-test-uswest2-1/test-warehouse/complextypestbl_parquet/nonnullable.parq)
 for read from new offset range[0-3186], length=4096, streamPosition=0, 
nextReadPosition=0, policy=normal
19/02/12 05:33:13 DEBUG s3a.S3ABlockOutputStream: 
S3ABlockOutputStream{WriteOperationHelper {bucket=impala-test-uswest2-1}, 
blockSize=104857600, activeBlock=FileBlock{index=1, 
destFile=/tmp/hadoop-jenkins/s3a/s3ablock-0001-1315190405959387081.tmp, 
state=Writing, dataSize=3186, limit=104857600}}: Closing block #1: current 
block= FileBlock{index=1, 
destFile=/tmp/hadoop-jenkins/s3a/s3ablock-0001-1315190405959387081.tmp, 
state=Writing, dataSize=3186, limit=104857600}
19/02/12 05:33:13 DEBUG s3a.S3ABlockOutputStream: Executing regular upload for 
WriteOperationHelper {bucket=impala-test-uswest2-1}
19/02/12 05:33:13 DEBUG s3a.S3ADataBlocks: Start datablock[1] upload
19/02/12 05:33:13 DEBUG s3a.S3ADataBlocks: FileBlock{i

[jira] [Created] (IMPALA-8190) Query on kudu table failed with 'Network error: recv error from 0.0.0.0:0: Transport endpoint is not connected (error 107)'

2019-02-12 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8190:
--

 Summary: Query on kudu table failed with 'Network error: recv 
error from 0.0.0.0:0: Transport endpoint is not connected (error 107)'
 Key: IMPALA-8190
 URL: https://issues.apache.org/jira/browse/IMPALA-8190
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


query_test.test_exprs.TestExprs.test_exprs failed, apparently because the 
impalad could not communicate with Kudu.

{quote}
I0212 00:31:23.577960 19873 query-state.cc:624] 
9d4b9e304c9e5939:875e611d0002] Executing instance. 
instance_id=9d4b9e304c9e5939:875e611d0002 fragment_idx=3 
per_fragment_instance_idx=0 coord_state_idx=1 #in-flight=6
I0212 00:31:23.590092 19842 query-state.cc:632] 
1040c1d1f33cc9b8:72e8231d0003] Instance completed. 
instance_id=1040c1d1f33cc9b8:72e8231d0003 #in-flight=5 status=OK
I0212 00:31:23.603977 19354 stopwatch.h:190] bb48cb270b9b0b6c:68821dde0003] 
WARNING: time went backwards from 40193644228664 to 40193644228661
I0212 00:31:23.627945 19873 query-state.cc:632] 
9d4b9e304c9e5939:875e611d0002] Instance completed. 
instance_id=9d4b9e304c9e5939:875e611d0002 #in-flight=4 status=OK
I0212 00:31:23.628665 19876 krpc-data-stream-mgr.cc:294] 
9d4b9e304c9e5939:875e611d0003] DeregisterRecvr(): 
fragment_instance_id=9d4b9e304c9e5939:875e611d0003, node=4
I0212 00:31:23.636126 19354 stopwatch.h:190] bb48cb270b9b0b6c:68821dde0003] 
WARNING: time went backwards from 40193676228721 to 40193676228718
I0212 00:31:23.650511 19871 client-internal.cc:219] 
9d4b9e304c9e5939:875e611d0001] Unable to send the request (table { 
table_name: "impala::functional_kudu.alltypes" }) to leader Master 
(localhost:7051): Network error: recv error from 0.0.0.0:0: Transport endpoint 
is not connected (error 107)
I0212 00:31:23.831434 19354 stopwatch.h:190] bb48cb270b9b0b6c:68821dde0003] 
WARNING: time went backwards from 40193872229014 to 40193872229013
I0212 00:31:23.831465 19354 stopwatch.h:190] bb48cb270b9b0b6c:68821dde0003] 
WARNING: time went backwards from 40193872229014 to 40193872229013
I0212 00:31:23.872483  9184 krpc-data-stream-mgr.cc:408] Reduced stream ID 
cache from 953 items, to 897, eviction took: 0
I0212 00:31:24.273738 19871 status.cc:124] 9d4b9e304c9e5939:875e611d0001] 
Unable to open Kudu table: Network error: recv error from 0.0.0.0:0: Transport 
endpoint is not connected (error 107)
@  0x1a612f6  impala::Status::Status()
@  0x23b28e0  impala::KuduScanNodeBase::Open()
@  0x23ae8c9  impala::KuduScanNode::Open()
@  0x1f4f0b3  impala::FragmentInstanceState::Open()
@  0x1f4bd77  impala::FragmentInstanceState::Exec()
@  0x1f5f35d  impala::QueryState::ExecFInstance()
@  0x1f5d63f  _ZZN6impala10QueryState15StartFInstancesEvENKUlvE_clEv
@  0x1f6079e  
_ZN5boost6detail8function26void_function_obj_invoker0IZN6impala10QueryState15StartFInstancesEvEUlvE_vE6invokeERNS1_15function_bufferE
@  0x1d736b5  boost::function0<>::operator()()
@  0x22202a2  impala::Thread::SuperviseThread()
@  0x2228626  boost::_bi::list5<>::operator()<>()
@  0x222854a  boost::_bi::bind_t<>::operator()()
@  0x222850d  boost::detail::thread_data<>::run()
@  0x3711249  thread_proxy
@   0x3accc07850  (unknown)
@   0x3acc8e894c  (unknown)
I0212 00:31:24.274066 19871 query-state.cc:632] 
9d4b9e304c9e5939:875e611d0001] Instance completed. 
instance_id=9d4b9e304c9e5939:875e611d0001 #in-flight=3 status=GENERAL: 
Unable to open Kudu table: Network error: recv error from 0.0.0.0:0: Transport 
endpoint is not connected (error 107)
{quote}

There was a similar error in [IMPALA-8112]

Test failure was:

{quote}
query_test.test_exprs.TestExprs.test_exprs[protocol: beeswax | exec_option: 
{'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 
'disable_codegen': True, 'abort_on_error': 1, 
'exec_single_node_rows_threshold': 0} | table_format: seq/gzip/block | 
enable_expr_rewrites: 1] (from pytest)

Failing for the past 1 build (Since Failed#257 )
Took 1 min 3 sec.
Error Message
query_test/test_exprs.py:58: in test_exprs
self.run_test_case('QueryTest/exprs', vector)
common/impala_test_suite.py:472: in run_test_case
result = self.__execute_query(target_impalad_client, query, user=user)
common/impala_test_suite.py:699: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:174: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:183: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:360: in __execute_query
self.wait_for_finished(handle)
beeswax/impala_beeswax.py:381: in wait_for_finished
rai

[jira] [Created] (IMPALA-8191) TestBreakpadExhaustive.test_minidump_creation fails to kill cluster

2019-02-12 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8191:
--

 Summary: TestBreakpadExhaustive.test_minidump_creation fails to 
kill cluster
 Key: IMPALA-8191
 URL: https://issues.apache.org/jira/browse/IMPALA-8191
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman



h3. Error Message

{quote}
assert not [, 
] + where 
[, 
] = 
.impalads + 
where  = 
.cluster
{quote}

h3. Stacktrace

{quote}
custom_cluster/test_breakpad.py:183: in test_minidump_creation
self.kill_cluster(SIGSEGV)
custom_cluster/test_breakpad.py:81: in kill_cluster
signal is SIGUSR1 or self.assert_all_processes_killed()
custom_cluster/test_breakpad.py:121: in assert_all_processes_killed
assert not self.cluster.impalads
E   assert not [, ]
E   + where [, ] = .impalads
E   + where  = .cluster
{quote}

See [IMPALA-8114] for a similar bug



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8192) finalize.sh emits error on Centos6 as bash does not have 'test -v' until bash 4.2

2019-02-12 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8192:
--

 Summary: finalize.sh emits error on Centos6 as bash does not have 
'test -v' until bash 4.2 
 Key: IMPALA-8192
 URL: https://issues.apache.org/jira/browse/IMPALA-8192
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Jim Apple


08:07:24 + /data/jenkins/workspace/xxx/repos/Impala/bin/jenkins/finalize.sh
08:07:24 /data/jenkins/workspace/xxx/repos/Impala/bin/jenkins/finalize.sh: line 
24: test: -v: unary operator expected

This is new code from [IMPALA-5031]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8193) junitxml_prune_notrun.py fails on Centos6 passing xml_declaration=True to ElementTree

2019-02-12 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8193:
--

 Summary: junitxml_prune_notrun.py fails on Centos6 passing 
xml_declaration=True to ElementTree 
 Key: IMPALA-8193
 URL: https://issues.apache.org/jira/browse/IMPALA-8193
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Joe McDonnell


I assume this fails because the version of python on Centos6 is old.

{quote}
[==] Running 5 tests from 1 test case.
[--] Global test environment set-up.
[--] 5 tests from Bitmap
[ RUN  ] Bitmap.SetupTest
[   OK ] Bitmap.SetupTest (0 ms)
[ RUN  ] Bitmap.SetAllTest
[   OK ] Bitmap.SetAllTest (0 ms)
[ RUN  ] Bitmap.SetGetTest
[   OK ] Bitmap.SetGetTest (1 ms)
[ RUN  ] Bitmap.OverflowTest
[   OK ] Bitmap.OverflowTest (0 ms)
[ RUN  ] Bitmap.MemUsage
[   OK ] Bitmap.MemUsage (0 ms)
[--] 5 tests from Bitmap (1 ms total)

[--] Global test environment tear-down
[==] 5 tests from 1 test case ran. (1 ms total)
[  PASSED  ] 5 tests.

  YOU HAVE 1 DISABLED TEST

19/02/11 15:53:03 INFO util.JvmPauseMonitor: Starting JVM pause monitor
Traceback (most recent call last):
  File 
"/data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/bin/junitxml_prune_notrun.py",
 line 65, in 
if __name__ == "__main__": main()
  File 
"/data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/bin/junitxml_prune_notrun.py",
 line 62, in main
junitxml_prune_notrun(options.filename)
  File 
"/data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/bin/junitxml_prune_notrun.py",
 line 55, in junitxml_prune_notrun
tree.write(junitxml_filename, encoding="utf-8", xml_declaration=True)
TypeError: write() got an unexpected keyword argument 'xml_declaration'
{quote}
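
Assuming the Centos6 Python predates the ElementTree 1.3 bundled with Python 
2.7 (which added the xml_declaration argument), one possible workaround looks 
roughly like this (illustrative only, not necessarily the committed fix):

{code}
import sys
import xml.etree.ElementTree as ET

def write_junitxml(tree, filename):
    if sys.version_info >= (2, 7):
        tree.write(filename, encoding="utf-8", xml_declaration=True)
    else:
        # Older ElementTree has no xml_declaration keyword (the TypeError in
        # the traceback above); this branch simply drops the declaration.
        tree.write(filename, encoding="utf-8")
{code}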



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8194) TestPauseMonitor.test_jvm_pause_monitor_logs_entries needs to wait longer to see output

2019-02-12 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8194:
--

 Summary: TestPauseMonitor.test_jvm_pause_monitor_logs_entries 
needs to wait longer to see output
 Key: IMPALA-8194
 URL: https://issues.apache.org/jira/browse/IMPALA-8194
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


TestPauseMonitor.test_jvm_pause_monitor_logs_entries complains:

{quote}
FAIL 
custom_cluster/test_pause_monitor.py::TestPauseMonitor::()::test_jvm_pause_monitor_logs_entries
=== FAILURES ===
_ TestPauseMonitor.test_jvm_pause_monitor_logs_entries _
custom_cluster/test_pause_monitor.py:38: in test_jvm_pause_monitor_logs_entries
self.assert_impalad_log_contains('INFO', "Detected pause in JVM or host 
machine")
common/custom_cluster_test_suite.py:216: in assert_impalad_log_contains
self.assert_log_contains("impalad", level, line_regex, expected_count)
common/custom_cluster_test_suite.py:248: in assert_log_contains
(expected_count, log_file_path, line_regex, found, line)
E   AssertionError: Expected 1 lines in file 
/data0/jenkins/workspace/xxx/repos/Impala/logs/custom_cluster_tests/impalad.impala-xxx.jenkins.log.INFO.20190211-183351.2092
 matching regex 'Detected pause in JVM or host machine', but found 0 lines. 
Last line was: 
E   I0211 18:34:03.563254  2392 thrift-util.cc:113] TAcceptQueueServer client 
died: write() send(): Broken pipe
{quote}

The actual log (archived later) contains:

{quote}
I0211 18:34:03.563254  2392 thrift-util.cc:113] TAcceptQueueServer client died: 
write() send(): Broken pipe
I0211 18:34:03.565459  2164 JvmPauseMonitor.java:187] Detected pause in JVM or 
host machine (eg GC): pause of approximately 4550ms
No GCs detected
{quote}

so if test_jvm_pause_monitor_logs_entries had waited roughly 3 ms longer, it 
would have seen the line it was looking for (assuming the simplest explanation). 
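
One way to wait longer is to poll the log for a while instead of asserting on 
it once; a hypothetical sketch (not the actual test utility):

{code}
import re
import time

def wait_for_log_line(log_path, pattern, timeout_s=30, interval_s=0.5):
    # Re-read the log until the expected line appears or the timeout expires.
    regex = re.compile(pattern)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        with open(log_path) as f:
            if any(regex.search(line) for line in f):
                return True
        time.sleep(interval_s)
    return False
{code}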




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8200) Builds fail using wrong branch of impala-lzo

2019-02-13 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8200:
--

 Summary: Builds fail using wrong branch of impala-lzo
 Key: IMPALA-8200
 URL: https://issues.apache.org/jira/browse/IMPALA-8200
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Reporter: Andrew Sherman
Assignee: Andrew Sherman
 Fix For: Impala 3.2.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8200) Builds fail using wrong branch of impala-lzo

2019-02-13 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8200.

Resolution: Fixed

Fixed by force-pushing the correct change to upstream as well as internal 
gerrit.

> Builds fail using wrong branch of impala-lzo
> 
>
> Key: IMPALA-8200
> URL: https://issues.apache.org/jira/browse/IMPALA-8200
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Critical
>  Labels: broken-build
> Fix For: Impala 3.2.0
>
>
> An earlier build succeeded using this commit from the branch 'master' 
> {quote}
> commit dccb1be88a5e237b06ae69cd99b048a38d9f024b (HEAD -> master)
> Author: Philip Zeyliger 
> AuthorDate: Tue Jan 22 14:21:42 2019 -0800
> Commit: Philip Zeyliger 
> CommitDate: Fri Feb 1 11:04:06 2019 -0800
> Adapt to new interface for AddDiskIoRange (IMPALA-7980)
> 
> The change for IMPALA-7980 moves out the "are we done
> adding disk io ranges" accounting from AddDiskIoRange()
> to a new UpdateRemainingScanRangeSubmissions call.
> {quote}
> but now the build is failing, pointing at 'origin/master':
> {quote}
> commit 9543908ec82245878c4060dec64a1533d7adbee6 
> (origin/master-backup-2019-01-23, origin/master)
> Author: Alex Behm 
> AuthorDate: Thu Dec 4 16:53:06 2014 -0800
> Commit: Alex Behm 
> CommitDate: Fri Dec 5 14:28:25 2014 -0800
> Remove data compaction flag.
> 
> Change-Id: Ife72c7ee630b6ecd9024a3d33b7673b56fcf1fba
> {quote}
> which is obviously something pretty old and wrong



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8202) TestAdmissionControllerStress.test_mem_limit teardown() fails with "Invalid or unknown query handle"

2019-02-14 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8202:
--

 Summary: TestAdmissionControllerStress.test_mem_limit teardown() 
fails with "Invalid or unknown query handle"
 Key: IMPALA-8202
 URL: https://issues.apache.org/jira/browse/IMPALA-8202
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 3.2.0
Reporter: Andrew Sherman


teardown() attempts to close each submission thread that was used. But one of 
them times out.

{quote}
06:05:22  ERROR at teardown of 
TestAdmissionControllerStress.test_mem_limit[num_queries: 50 | protocol: 
beeswax | table_format: text/none | exec_option: {'batch_size': 0, 'num_nodes': 
0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | 
submission_delay_ms: 50 | round_robin_submission: True] 
06:05:22 custom_cluster/test_admission_controller.py:1004: in teardown
06:05:22 client.cancel(thread.query_handle)
06:05:22 common/impala_connection.py:183: in cancel
06:05:22 return 
self.__beeswax_client.cancel_query(operation_handle.get_handle())
06:05:22 beeswax/impala_beeswax.py:364: in cancel_query
06:05:22 return self.__do_rpc(lambda: self.imp_service.Cancel(query_id))
06:05:22 beeswax/impala_beeswax.py:512: in __do_rpc
06:05:22 raise ImpalaBeeswaxException(self.__build_error_message(b), b)
06:05:22 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
06:05:22 EINNER EXCEPTION: 
06:05:22 EMESSAGE: Invalid or unknown query handle
{quote}
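
Purely as an illustration of how a teardown could tolerate handles the server 
no longer knows about (names follow the trace above; this is not the committed 
fix):

{code}
def cancel_all(client, threads):
    for thread in threads:
        try:
            client.cancel(thread.query_handle)
        except Exception as e:
            # e.g. "Invalid or unknown query handle" for a query that already
            # finished or was cleaned up; don't fail the whole teardown.
            print("Ignoring cleanup error for %s: %s" % (thread, e))
{code}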



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8112) test_cancel_select with debug action failed with unexpected error

2019-02-20 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8112.

Resolution: Cannot Reproduce

Test code is OK; the underlying problem seems to be a Kudu problem, logged as 
IMPALA-8190.

> test_cancel_select with debug action failed with unexpected error
> -
>
> Key: IMPALA-8112
> URL: https://issues.apache.org/jira/browse/IMPALA-8112
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Michael Brown
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: flaky
>
> Stacktrace
> {noformat}
> query_test/test_cancellation.py:241: in test_cancel_select
> self.execute_cancel_test(vector)
> query_test/test_cancellation.py:213: in execute_cancel_test
> assert 'Cancelled' in str(thread.fetch_results_error)
> E   assert 'Cancelled' in "ImpalaBeeswaxException:\n INNER EXCEPTION:  'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Unable to open Kudu table: 
> Network error: recv error from 0.0.0.0:0: Transport endpoint is not connected 
> (error 107)\n"
> E+  where "ImpalaBeeswaxException:\n INNER EXCEPTION:  'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Unable to open Kudu table: 
> Network error: recv error from 0.0.0.0:0: Transport endpoint is not connected 
> (error 107)\n" = str(ImpalaBeeswaxException())
> E+where ImpalaBeeswaxException() =  140481071658752)>.fetch_results_error
> {noformat}
> Standard Error
> {noformat}
> SET 
> client_identifier=query_test/test_cancellation.py::TestCancellationParallel::()::test_cancel_select[protocol:beeswax|table_format:kudu/none|exec_option:{'batch_size':0;'num_nodes':0;'disable_codegen_rows_threshold':0;'disable_codegen':False;'abort_on_error':1;'debug_action;
> -- executing against localhost:21000
> use tpch_kudu;
> -- 2019-01-18 17:50:03,100 INFO MainThread: Started query 
> 4e4b3ab4cc7d:11efc3f5
> SET 
> client_identifier=query_test/test_cancellation.py::TestCancellationParallel::()::test_cancel_select[protocol:beeswax|table_format:kudu/none|exec_option:{'batch_size':0;'num_nodes':0;'disable_codegen_rows_threshold':0;'disable_codegen':False;'abort_on_error':1;'debug_action;
> SET batch_size=0;
> SET num_nodes=0;
> SET disable_codegen_rows_threshold=0;
> SET disable_codegen=False;
> SET abort_on_error=1;
> SET cpu_limit_s=10;
> SET debug_action=0:GETNEXT:WAIT|COORD_CANCEL_QUERY_FINSTANCES_RPC:FAIL;
> SET exec_single_node_rows_threshold=0;
> SET buffer_pool_limit=0;
> -- executing async: localhost:21000
> select l_returnflag from lineitem;
> -- 2019-01-18 17:50:03,139 INFO MainThread: Started query 
> fa4ddb9e62a01240:54c86ad
> SET 
> client_identifier=query_test/test_cancellation.py::TestCancellationParallel::()::test_cancel_select[protocol:beeswax|table_format:kudu/none|exec_option:{'batch_size':0;'num_nodes':0;'disable_codegen_rows_threshold':0;'disable_codegen':False;'abort_on_error':1;'debug_action;
> -- connecting to: localhost:21000
> -- fetching results from:  object at 0x6235e90>
> -- getting state for operation: 
> 
> -- canceling operation:  object at 0x6235e90>
> -- 2019-01-18 17:50:08,196 INFO Thread-4: Starting new HTTP connection 
> (1): localhost
> -- closing query for operation handle: 
> 
> {noformat}
> [~asherman] please take a look since it looks like you touched code around 
> this area last.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8279) Revert IMPALA-6658 to avoid ETL performance regression

2019-03-04 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8279:
--

 Summary: Revert IMPALA-6658 to avoid ETL performance regression
 Key: IMPALA-8279
 URL: https://issues.apache.org/jira/browse/IMPALA-8279
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


The fix for IMPALA-6658 seems to cause a measurable regression on this workload:

{quote}
use tpcds;
create TABLE store_sales_unpart stored as parquet as SELECT * FROM 
tpcds.store_sales;
INSERT OVERWRITE TABLE store_sales_unpart SELECT * FROM store_sales;
{quote}

Revert the change to avoid the regression.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8279) Revert IMPALA-6658 to avoid ETL performance regression

2019-03-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8279.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Revert IMPALA-6658 to avoid ETL performance regression
> --
>
> Key: IMPALA-8279
> URL: https://issues.apache.org/jira/browse/IMPALA-8279
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> The fix for IMPALA-6658 seems to cause a measurable regression on 
> {quote}
> use tpcds;
> create TABLE store_sales_unpart stored as parquet as SELECT * FROM 
> tpcds.store_sales;
> INSERT OVERWRITE TABLE store_sales_unpart SELECT * FROM store_sales;
> {quote}
> Revert the change to avoid the regression.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8288) Setting EXEC_TIME_LIMIT_S to very high value triggers DCHECK in pretty printer

2019-03-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8288.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Setting EXEC_TIME_LIMIT_S to very high value triggers DCHECK in pretty printer
> --
>
> Key: IMPALA-8288
> URL: https://issues.apache.org/jira/browse/IMPALA-8288
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Tim Armstrong
>Assignee: Andrew Sherman
>Priority: Critical
>  Labels: crash, newbie, ramp-up
> Fix For: Impala 3.3.0
>
>
> You can see from the below backtrace that the value was 2147483647, which 
> overflows to -1000 when multiplied by 1000. In a RELEASE build this would 
> just result in a negative number being printed.
> {noformat}
> (gdb) bt
> #0  0x7fa99b79d428 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> #1  0x7fa99b79f02a in abort () from /lib/x86_64-linux-gnu/libc.so.6
> #2  0x047fe424 in google::DumpStackTraceAndExit() ()
> #3  0x047f4e7d in google::LogMessage::Fail() ()
> #4  0x047f6722 in google::LogMessage::SendToLog() ()
> #5  0x047f4857 in google::LogMessage::Flush() ()
> #6  0x047f7e1e in google::LogMessageFatal::~LogMessageFatal() ()
> #7  0x01f7cae8 in impala::PrettyPrinter::PrintTimeMs 
> (value=-1000, ss=0x7fa8f28563d0) at be/src/util/pretty-printer.h:271
> #8  0x01f7aee1 in impala::PrettyPrinter::Print 
> (value=2147483647, unit=impala::TUnit::TIME_S, verbose=false) at 
> be/src/util/pretty-printer.h:110
> #9  0x02068bf7 in impala::ImpalaServer::SetQueryInflight 
> (this=0xcf84000, session_state=..., request_state=...) at 
> be/src/service/impala-server.cc:1108
> #10 0x020f119c in impala::ImpalaServer::ExecuteStatement 
> (this=0xcf84000, return_val=..., request=...) at 
> be/src/service/impala-hs2-server.cc:477
> #11 0x02600972 in 
> apache::hive::service::cli::thrift::TCLIServiceProcessor::process_ExecuteStatement
>  (this=0xd0dcbe0, seqid=0, iprot=0x15d10640, oprot=0x15d10440, 
> callContext=0xb596cf80)
> at be/generated-sources/gen-cpp/TCLIService.cpp:5115
> #12 0x025ff070 in 
> apache::hive::service::cli::thrift::TCLIServiceProcessor::dispatchCall 
> (this=0xd0dcbe0, iprot=0x15d10640, oprot=0x15d10440, fname=..., seqid=0, 
> callContext=0xb596cf80)
> at be/generated-sources/gen-cpp/TCLIService.cpp:4926
> #13 0x025c4b35 in 
> impala::ImpalaHiveServer2ServiceProcessor::dispatchCall (this=0xd0dcbe0, 
> iprot=0x15d10640, oprot=0x15d10440, fname=..., seqid=0, 
> callContext=0xb596cf80)
> at be/generated-sources/gen-cpp/ImpalaHiveServer2Service.cpp:505
> #14 0x01a0d29c in apache::thrift::TDispatchProcessor::process 
> (this=0xd0dcbe0, in=..., out=..., connectionContext=0xb596cf80)
> at 
> /opt/Impala-Toolchain/thrift-0.9.3-p5/include/thrift/TDispatchProcessor.h:121
> #15 0x01e5ffb7 in 
> apache::thrift::server::TAcceptQueueServer::Task::run (this=0x80546c40) at 
> be/src/rpc/TAcceptQueueServer.cpp:
> {noformat}
> {noformat}
> F0306 09:40:56.856586 70607 pretty-printer.h:271] Check failed: value >= 
> static_cast(0) (-1000 vs. 0) 
> {noformat}
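
The arithmetic behind the DCHECK is easy to reproduce outside Impala. A minimal sketch, assuming only that the millisecond value gets truncated to a signed 32-bit integer somewhere in the formatting path (the variable names below are illustrative, not Impala code):

{code}
# EXEC_TIME_LIMIT_S was set to INT32_MAX seconds; converting to milliseconds
# and truncating to a signed 32-bit integer wraps around to exactly -1000,
# the negative value the DCHECK reports above.
seconds = 2147483647
ms = (seconds * 1000) & 0xFFFFFFFF   # keep the low 32 bits
if ms >= 2 ** 31:
    ms -= 2 ** 32                    # reinterpret as a signed int32
print(ms)                            # -1000
{code}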



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8194) TestPauseMonitor.test_jvm_pause_monitor_logs_entries needs to wait longer to see output

2019-03-14 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8194.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> TestPauseMonitor.test_jvm_pause_monitor_logs_entries needs to wait longer to 
> see output
> ---
>
> Key: IMPALA-8194
> URL: https://issues.apache.org/jira/browse/IMPALA-8194
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Critical
>  Labels: broken-build, flaky-test
> Fix For: Impala 3.2.0
>
>
> TestPauseMonitor.test_jvm_pause_monitor_logs_entries complains:
> {quote}
> FAIL 
> custom_cluster/test_pause_monitor.py::TestPauseMonitor::()::test_jvm_pause_monitor_logs_entries
> === FAILURES 
> ===
> _ TestPauseMonitor.test_jvm_pause_monitor_logs_entries 
> _
> custom_cluster/test_pause_monitor.py:38: in 
> test_jvm_pause_monitor_logs_entries
> self.assert_impalad_log_contains('INFO', "Detected pause in JVM or host 
> machine")
> common/custom_cluster_test_suite.py:216: in assert_impalad_log_contains
> self.assert_log_contains("impalad", level, line_regex, expected_count)
> common/custom_cluster_test_suite.py:248: in assert_log_contains
> (expected_count, log_file_path, line_regex, found, line)
> E   AssertionError: Expected 1 lines in file 
> /data0/jenkins/workspace/xxx/repos/Impala/logs/custom_cluster_tests/impalad.impala-xxx.jenkins.log.INFO.20190211-183351.2092
>  matching regex 'Detected pause in JVM or host machine', but found 0 lines. 
> Last line was: 
> E   I0211 18:34:03.563254  2392 thrift-util.cc:113] TAcceptQueueServer client 
> died: write() send(): Broken pipe
> {quote}
> The actual log (archived later) contains:
> {quote}
> I0211 18:34:03.563254  2392 thrift-util.cc:113] TAcceptQueueServer client 
> died: write() send(): Broken pipe
> I0211 18:34:03.565459  2164 JvmPauseMonitor.java:187] Detected pause in JVM 
> or host machine (eg GC): pause of approximately 4550ms
> No GCs detected
> {quote}
> so if test_jvm_pause_monitor_logs_entries had waited for 3ms more it would 
> have seen the line it was looking for (assuming the simplest explanation). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8325) Leading Unicode comments cause Impala Shell failure.

2019-03-19 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8325:
--

 Summary: Leading Unicode comments cause Impala Shell failure.
 Key: IMPALA-8325
 URL: https://issues.apache.org/jira/browse/IMPALA-8325
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.1.0
Reporter: Andrew Sherman
Assignee: Andrew Sherman


This is a regression introduced by IMPALA-2195 Improper handling of comments in 
queries.

Running a query in impala-shell with a leading comment containing Unicode, will 
cause failures:
{quote}
[localhost:21000] default> --한글
 > select a from tab1;
Query: --한글
select a from tab1
Query submitted at: 2019-03-19 16:37:00 (Coordinator: 
http://asherman-desktop:25000)
Unknown Exception : 'ascii' codec can't encode characters in position 2-3: 
ordinal not in range(128)
{quote}
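
The error class is easy to reproduce in plain Python 2; a minimal sketch, assuming only the default 'ascii' codec (this illustrates the failure mode, not the actual impala-shell code path):

{code}
# -*- coding: utf-8 -*-
# Under Python 2, implicitly converting a unicode query containing non-ASCII
# characters uses the default 'ascii' codec and fails exactly as the shell
# reports: the Korean characters sit at positions 2-3 of the comment.
query = u'--\ud55c\uae00'   # u'--한글'
try:
    str(query)              # implicit ascii encode
except UnicodeEncodeError as e:
    print(e)                # 'ascii' codec can't encode characters in position 2-3: ...
{code}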



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8332) Remove Impala Shell warnings part 1

2019-03-21 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8332:
--

 Summary: Remove Impala Shell warnings part 1
 Key: IMPALA-8332
 URL: https://issues.apache.org/jira/browse/IMPALA-8332
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


When connecting to a secure cluster, Impala-shell produces several 
Thrift-related warnings. 
 This has been happening since “IMPALA-5690: Part 2: Upgrade thrift to 
0.9.3-p4”.

[root@nightly61x-2 ~]# impala-shell -i nightly61x-2.xxx.com:25003 -d default -k 
--ssl --ca_cert=/etc/xxxf/CA_STANDARD/truststore.pem
 Starting Impala Shell using Kerberos authentication
 Using service name 'impala'
 SSL is enabled
 
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
 DeprecationWarning: 3th positional argument is deprecated. Use keyward 
argument insteand.
 DeprecationWarning)
 
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
 DeprecationWarning: 4th positional argument is deprecated. Use keyward 
argument insteand.
 DeprecationWarning)
 
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
 DeprecationWarning: 5th positional argument is deprecated. Use keyward 
argument insteand.
 DeprecationWarning)
 
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:216:
 DeprecationWarning: validate is deprecated. Use cert_reqs=ssl.CERT_REQUIRED 
instead
 DeprecationWarning)
 
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:227:
 DeprecationWarning: Use cert_reqs instead
 warnings.warn('Use cert_reqs instead', DeprecationWarning)
 Opened TCP connection tonightly61x-2.xxx.com:25003

The first 4 of these warnings are caused by our usage of the TSSLSocket 
initialiser TSSLSocket.__init__. 
 In Thrift 0.9.3 this initialiser prints warnings if positional parameters are 
used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8333) Remove Impala Shell warnings part 2

2019-03-21 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8333:
--

 Summary: Remove Impala Shell warnings part 2
 Key: IMPALA-8333
 URL: https://issues.apache.org/jira/browse/IMPALA-8333
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


When connecting to a secure cluster, Impala-shell produces  several 
Thrift-related warnings. 
This has been happening since  “IMPALA-5690: Part 2: Upgrade thrift to 
0.9.3-p4”.

[root@nightly61x-2 ~]# impala-shell -i nightly61x-2.xxx.com:25003 -d default -k 
--ssl --ca_cert=/etc/xxx/CA_STANDARD/truststore.pem
Starting Impala Shell using Kerberos authentication
Using service name 'impala'
SSL is enabled
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
 DeprecationWarning: 3th positional argument is deprecated. Use keyward 
argument insteand.
  DeprecationWarning)
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
 DeprecationWarning: 4th positional argument is deprecated. Use keyward 
argument insteand.
  DeprecationWarning)
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
 DeprecationWarning: 5th positional argument is deprecated. Use keyward 
argument insteand.
  DeprecationWarning)
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:216:
 DeprecationWarning: validate is deprecated. Use cert_reqs=ssl.CERT_REQUIRED 
instead
  DeprecationWarning)
/opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:227:
 DeprecationWarning: Use cert_reqs instead
  warnings.warn('Use cert_reqs instead', DeprecationWarning)
Opened TCP connection tonightly61x-2.xxx.com:25003

The first 4 of these warnings are covered by IMPALA-8332.
The 5th warning “DeprecationWarning: Use cert_reqs instead warnings.warn('Use 
cert_reqs instead', DeprecationWarning)” 
is caused by a bug in Thrift 0.9.3.
This can be fixed either by patching Thrift in the Impala toolchain, or by 
implementing IMPALA-7825 (Upgrade Thrift version to 0.11.0).






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8325) Leading Unicode comments cause Impala Shell failure.

2019-03-22 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8325.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Leading Unicode comments cause Impala Shell failure.
> 
>
> Key: IMPALA-8325
> URL: https://issues.apache.org/jira/browse/IMPALA-8325
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 3.1.0
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Critical
> Fix For: Impala 3.3.0
>
>
> This is a regression introduced by IMPALA-2195 Improper handling of comments 
> in queries.
> Running a query in impala-shell with a leading comment containing Unicode, 
> will cause failures:
> {quote}
> [localhost:21000] default> --한글
>  > select a from tab1;
> Query: --한글
> select a from tab1
> Query submitted at: 2019-03-19 16:37:00 (Coordinator: 
> http://asherman-desktop:25000)
> Unknown Exception : 'ascii' codec can't encode characters in position 2-3: 
> ordinal not in range(128)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8332) Remove Impala Shell warnings part 1

2019-03-25 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8332.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Remove Impala Shell warnings part 1
> ---
>
> Key: IMPALA-8332
> URL: https://issues.apache.org/jira/browse/IMPALA-8332
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> When connecting to a secure cluster, Impala-shell produces several 
> Thrift-related warnings. 
>  This has been happening since “IMPALA-5690: Part 2: Upgrade thrift to 
> 0.9.3-p4”.
> [root@nightly61x-2 ~]# impala-shell -i nightly61x-2.xxx.com:25003 -d default 
> -k --ssl --ca_cert=/etc/xxxf/CA_STANDARD/truststore.pem
>  Starting Impala Shell using Kerberos authentication
>  Using service name 'impala'
>  SSL is enabled
>  
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
>  DeprecationWarning: 3th positional argument is deprecated. Use keyward 
> argument insteand.
>  DeprecationWarning)
>  
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
>  DeprecationWarning: 4th positional argument is deprecated. Use keyward 
> argument insteand.
>  DeprecationWarning)
>  
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
>  DeprecationWarning: 5th positional argument is deprecated. Use keyward 
> argument insteand.
>  DeprecationWarning)
>  
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:216:
>  DeprecationWarning: validate is deprecated. Use cert_reqs=ssl.CERT_REQUIRED 
> instead
>  DeprecationWarning)
>  
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:227:
>  DeprecationWarning: Use cert_reqs instead
>  warnings.warn('Use cert_reqs instead', DeprecationWarning)
>  Opened TCP connection tonightly61x-2.xxx.com:25003
> The first 4 of these warnings are caused by our usage of the TSSLSocket 
> initialiser TSSLSocket.__init__. 
>  In Thrift 0.9.3 this initialiser prints warnings if positional parameters 
> are used.
> The 5th warning is covered by IMPALA-8333 Remove Impala Shell warnings part 2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8446) Create a unit test for Admission Controller

2019-04-22 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8446:
--

 Summary: Create a unit test for Admission Controller
 Key: IMPALA-8446
 URL: https://issues.apache.org/jira/browse/IMPALA-8446
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


This will allow construction of white box tests that exercise Admission 
Controller code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8446) Create a unit test for Admission Controller

2019-04-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8446.

   Resolution: Fixed
Fix Version/s: Impala 2.13.0

> Create a unit test for Admission Controller
> ---
>
> Key: IMPALA-8446
> URL: https://issues.apache.org/jira/browse/IMPALA-8446
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: Impala 2.13.0
>
>
> This will allow construction of white box tests that exercise Admission 
> Controller code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8536) Add Scalable Pool Configuration to Admission Controller.

2019-05-10 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8536:
--

 Summary: Add Scalable Pool Configuration to Admission Controller.
 Key: IMPALA-8536
 URL: https://issues.apache.org/jira/browse/IMPALA-8536
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.2.0
Reporter: Andrew Sherman
Assignee: Andrew Sherman


Add configuration parameters to Admission Controller that scale with
the number of hosts in the resource pool. The planned parameters are
* Max Running Queries Multiple - the Multiple of the current total number of 
running Impalads which gives the maximum number of concurrently running queries 
in this pool. 
* Max Queued Queries Multiple - the Multiple of the current total number of 
running Impalads which gives the maximum number of queries that can be queued 
in this pool. 
* Max Memory Multiple - the Multiple of the current total number of running 
Impalads which gives the maximum memory available across the cluster for the 
pool.
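
As a rough illustration of how these multiples are meant to behave, the effective per-pool limits would simply scale with the current executor count (a sketch under that assumption; the names below are illustrative, not the real configuration keys):

{code}
# Each proposed setting is a per-impalad multiple; the effective pool limit is
# the multiple times the current number of running impalads.
def scaled_limits(num_running_impalads, max_running_queries_multiple,
                  max_queued_queries_multiple, max_memory_multiple_bytes):
    return {
        'max_running_queries': max_running_queries_multiple * num_running_impalads,
        'max_queued_queries': max_queued_queries_multiple * num_running_impalads,
        'max_memory_bytes': max_memory_multiple_bytes * num_running_impalads,
    }

# Example: 10 executors, 2 queries and 5 queued slots per executor, 4 GiB each.
print(scaled_limits(10, 2, 5, 4 * 1024 ** 3))
{code}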




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8333) Remove Impala Shell warnings part 2

2019-05-31 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8333.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Remove Impala Shell warnings part 2
> ---
>
> Key: IMPALA-8333
> URL: https://issues.apache.org/jira/browse/IMPALA-8333
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Critical
> Fix For: Impala 3.3.0
>
>
> When connecting to a secure cluster, Impala-shell produces  several 
> Thrift-related warnings. 
> This has been happening since  “IMPALA-5690: Part 2: Upgrade thrift to 
> 0.9.3-p4”.
> [root@nightly61x-2 ~]# impala-shell -i nightly61x-2.xxx.com:25003 -d default 
> -k --ssl --ca_cert=/etc/xxx/CA_STANDARD/truststore.pem
> Starting Impala Shell using Kerberos authentication
> Using service name 'impala'
> SSL is enabled
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
>  DeprecationWarning: 3th positional argument is deprecated. Use keyward 
> argument insteand.
>   DeprecationWarning)
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
>  DeprecationWarning: 4th positional argument is deprecated. Use keyward 
> argument insteand.
>   DeprecationWarning)
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:80:
>  DeprecationWarning: 5th positional argument is deprecated. Use keyward 
> argument insteand.
>   DeprecationWarning)
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:216:
>  DeprecationWarning: validate is deprecated. Use cert_reqs=ssl.CERT_REQUIRED 
> instead
>   DeprecationWarning)
> /opt/cloudera/parcels/CDH-6.1.x-1.cdh6.1.x.p0.972267/lib/impala-shell/lib/thrift/transport/TSSLSocket.py:227:
>  DeprecationWarning: Use cert_reqs instead
>   warnings.warn('Use cert_reqs instead', DeprecationWarning)
> Opened TCP connection tonightly61x-2.xxx.com:25003
> The first 4 of these warnings are covered by IMPALA-8332
> The 5th warning “DeprecationWarning: Use cert_reqs instead warnings.warn('Use 
> cert_reqs instead', DeprecationWarning)” 
> is caused by a bug in Thrift 0.9.3.
> This can be fixed either by patching Thrift in the Impala toolchain, or by 
> implementing  "IMPALA-7825 Upgrade Thrift version to 0.11.0" where this 
> warning is not present



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8609) Scalable Pool Configuration needs to consider executors that are shutting down.

2019-05-31 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8609:
--

 Summary: Scalable Pool Configuration needs to consider executors 
that are shutting down.
 Key: IMPALA-8609
 URL: https://issues.apache.org/jira/browse/IMPALA-8609
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


IMPALA-8536 adds Scalable Pool Configurations to Admission Controller. This 
allows Admission Controller to scale with the cluster size. In the initial 
implementation of IMPALA-8536 the cluster size is obtained by counting 
impala_server->GetKnownBackends(). 
This cluster size does not reflect the fact that some executors could be in the 
process of shutting down. 
Admission Controller should ignore executors that are shutting down.
If IMPALA-8460 (Simplify cluster membership management) is implemented, then 
this functionality should probably be part of ClusterMembershipMgr. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8143) Add features to DoRpcWithRetry()

2019-06-13 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8143.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Add features to DoRpcWithRetry()
> 
>
> Key: IMPALA-8143
> URL: https://issues.apache.org/jira/browse/IMPALA-8143
> Project: IMPALA
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> DoRpcWithRetry() is a templated utility function that is currently in 
> control-service.h that is used to retry synchronous Krpc calls. It makes a 
> call to a Krpc function that is is passed as a lambda function. It sets the 
> krpc timeout to the ‘krpc_timeout‘ parameter and calls the Krpc function a 
> number of times controlled by the ‘times_to_try’ parameter.
> Possible improvements:
>  * Move code to rpc-mgr.inline.h
>  * Add a configurable sleep if RpcMgr::IsServerTooBusy() says the remote 
> server’s queue is full.
>  * Do we want exponential backoff? How can we test this?
>  * how long should the sleep be for existing clients?
>  * Make QueryState::ReportExecStatus() use DoRpcWithRetry()
>  * Consider if asynchronous code like that in KrpcDataStreamSender::Channel  
> can also use DoRpcWithRetry()
>  * Replace FAULT_INJECTION_RPC_DELAY with DebugAction 
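
For readers unfamiliar with the helper, the retry pattern it implements looks roughly like the following sketch (the real code is a C++ template in control-service.h; parameter names mirror the description above, and the busy-sleep is one of the proposed improvements):

{code}
import time

def do_rpc_with_retry(rpc, times_to_try, is_server_too_busy, sleep_s=0):
    """Call 'rpc' up to times_to_try times, sleeping when the server is busy."""
    last_error = None
    for _ in range(times_to_try):
        try:
            return rpc()
        except Exception as e:      # stand-in for a non-OK RPC status
            last_error = e
            if is_server_too_busy(e) and sleep_s:
                time.sleep(sleep_s)  # the configurable sleep proposed above
    raise last_error
{code}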



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8609) Scalable Pool Configuration needs to consider executors that are shutting down.

2019-06-13 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8609.

Resolution: Won't Fix

Actually, we should not ignore quiescing executors.

> Scalable Pool Configuration needs to consider executors that are shutting 
> down.
> ---
>
> Key: IMPALA-8609
> URL: https://issues.apache.org/jira/browse/IMPALA-8609
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Critical
>
> IMPALA-8536 adds Scalable Pool Configurations to Admission Controller. This 
> allows Admission Controller to scale with the cluster size. In the initial 
> implementation of IMPALA-8536 the cluster size is obtained by counting 
> impala_server->GetKnownBackends(). 
> This cluster size does not reflect the fact that some executors could be in 
> the process of shutting down. 
> Admission Controller should ignore executors that are shutting down.
> If IMPALA-8460 (Simplify cluster membership management) is implemented, then 
> this functionality should probably be part of ClusterMembershipMgr. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8672) verification of read data failed with OpenSSL error in EVP_DecryptFinal decrypting while reading from scratch file

2019-06-17 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8672:
--

 Summary: verification of read data failed with OpenSSL error in 
EVP_DecryptFinal decrypting while reading from scratch file
 Key: IMPALA-8672
 URL: https://issues.apache.org/jira/browse/IMPALA-8672
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 3.3.0
Reporter: Andrew Sherman


This is a failure that is like IMPALA-7983 and IMPALA-7126.

query_test.test_aggregation.TestAggregationQueries.test_aggregation failed 
{code}
EQuery aborted:Error reading 2097152 bytes from scratch file 
'/tmp/impala-scratch/4e4e74f427936e7f:5b8c8f00_87092364-1e37-4661-ace3-67fb0c329078'
 on backend shared-centos64-ec2-m2-4xlarge-spot-00c9.vpc.cloudera.com:22001 at 
offset 94371840: verification of read data failed.
E   OpenSSL error in EVP_DecryptFinal decrypting:
{code}

In the log this looks like

{code}
I0616 23:23:09.886590 13110 status.cc:124] 4e4e74f427936e7f:5b8c8f05] 
OpenSSL error in EVP_DecryptFinal decrypting: 
@  0x1afe0cc  impala::Status::Status()
@  0x22d7b3b  impala::OpenSSLErr()
@  0x22d8693  impala::EncryptionKey::EncryptInternal()
@  0x22d7e15  impala::EncryptionKey::Decrypt()
@  0x1fce519  impala::TmpFileMgr::WriteHandle::CheckHashAndDecrypt()
@  0x1fcc464  impala::TmpFileMgr::FileGroup::RestoreData()
@  0x234f7a2  impala::BufferPool::Client::StartMoveToPinned()
@  0x234bd5b  impala::BufferPool::Pin()
@  0x2844f06  impala::BufferedTupleStream::PinPage()
@  0x2845202  impala::BufferedTupleStream::PinPageIfNeeded()
@  0x2846ef9  impala::BufferedTupleStream::NextReadPage()
@  0x284d1a8  impala::BufferedTupleStream::GetNextInternal<>()
@  0x284ad0e  impala::BufferedTupleStream::GetNextInternal<>()
@  0x2848427  impala::BufferedTupleStream::GetNext()
@  0x2536175  impala::GroupingAggregator::ProcessStream<>()
@  0x252f5dd  impala::GroupingAggregator::BuildSpilledPartition()
@  0x252eb16  impala::GroupingAggregator::NextPartition()
@  0x252a526  impala::GroupingAggregator::GetRowsFromPartition()
@  0x252a0c4  impala::GroupingAggregator::GetNext()
@  0x24fa5ea  impala::AggregationNode::GetNext()
@  0x24f9962  impala::AggregationNode::Open()
@  0x1ff530b  impala::FragmentInstanceState::Open()
@  0x1ff1fcf  impala::FragmentInstanceState::Exec()
@  0x20059b5  impala::QueryState::ExecFInstance()
@  0x2003c87  _ZZN6impala10QueryState15StartFInstancesEvENKUlvE_clEv
@  0x2007696  
_ZN5boost6detail8function26void_function_obj_invoker0IZN6impala10QueryState15StartFInstancesEvEUlvE_vE6invokeERNS1_15function_bufferE
@  0x1e18bdd  boost::function0<>::operator()()
@  0x23230dc  impala::Thread::SuperviseThread()
@  0x232b460  boost::_bi::list5<>::operator()<>()
@  0x232b384  boost::_bi::bind_t<>::operator()()
@  0x232b347  boost::detail::thread_data<>::run()
@  0x39eb159  thread_proxy
{code}

System logs: dmesg and messages have nothing interesting that I can see.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8676) run-all-tests-timeout-check.sh times out after 20 hours

2019-06-18 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8676:
--

 Summary:  run-all-tests-timeout-check.sh times out after 20 hours
 Key: IMPALA-8676
 URL: https://issues.apache.org/jira/browse/IMPALA-8676
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


An exhaustive test run on Centos6 now takes 22 hours.
run-all-tests-timeout-check.sh will time out after 20 hours.

{code}
15:39:30  Timout Timer Started (pid 20144, ppid 20063) for 72000 s! 
{code}

{code}
11:39:30  Tests TIMED OUT! 
{code}

Some tests take a long time; we could try to fix that, or we could bump 
${TIMEOUT_FOR_RUN_ALL_TESTS_MINS}, which is currently 1200 minutes (20 hours).
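
For reference, the numbers line up as follows (nothing assumed beyond the values in the log and the variable above):

{code}
TIMEOUT_FOR_RUN_ALL_TESTS_MINS = 1200
print(TIMEOUT_FOR_RUN_ALL_TESTS_MINS * 60)  # 72000 s, i.e. the 20-hour timer in the log
print(22 * 60)                              # a 22-hour exhaustive run needs at least 1320 minutes
{code}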



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8678) ToSqlTest failures on S3 (from org.apache.impala.customservice.KuduHMSIntegrationTest)

2019-06-18 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8678:
--

 Summary: ToSqlTest failures on S3 (from 
org.apache.impala.customservice.KuduHMSIntegrationTest)
 Key: IMPALA-8678
 URL: https://issues.apache.org/jira/browse/IMPALA-8678
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


On S3 tests we get failures in 
org.apache.impala.analysis.ToSqlTest.TestCreateTableLikeFile (from 
org.apache.impala.customservice.KuduHMSIntegrationTest)
org.apache.impala.analysis.ToSqlTest.alterTableAddPartitionTest (from 
org.apache.impala.customservice.KuduHMSIntegrationTest)

{code}
org.junit.ComparisonFailure: expected:<... month=2) LOCATION 
'[hdfs://localhost:20500]/y2050m2'> but was:<... month=2) LOCATION 
'[s3a://impala-test-uswest2-2]/y2050m2'>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at org.apache.impala.analysis.ToSqlTest.testToSql(ToSqlTest.java:123)
at org.apache.impala.analysis.ToSqlTest.testToSql(ToSqlTest.java:105)
at org.apache.impala.analysis.ToSqlTest.testToSql(ToSqlTest.java:95)
at org.apache.impala.analysis.ToSqlTest.testToSql(ToSqlTest.java:82)
at 
org.apache.impala.analysis.ToSqlTest.alterTableAddPartitionTest(ToSqlTest.java:1269)
{code}

{code}
org.junit.ComparisonFailure: expected:<...ult.p LIKE PARQUET 
'[hdfs://localhost:20500]/test-warehouse/sche...> but was:<...ult.p LIKE 
PARQUET '[s3a://impala-test-uswest2-2]/test-warehouse/sche...>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at org.apache.impala.analysis.ToSqlTest.testToSql(ToSqlTest.java:123)
at org.apache.impala.analysis.ToSqlTest.testToSql(ToSqlTest.java:105)
at 
org.apache.impala.analysis.ToSqlTest.TestCreateTableLikeFile(ToSqlTest.java:386)
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8536) Add Scalable Pool Configuration to Admission Controller.

2019-06-19 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8536.

Resolution: Fixed

> Add Scalable Pool Configuration to Admission Controller.
> 
>
> Key: IMPALA-8536
> URL: https://issues.apache.org/jira/browse/IMPALA-8536
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: admission-control, resource-management, scalability
>
> Add configuration parameters to Admission Controller that scale with
> the number of hosts in the resource pool. The planned parameters are
> * Max Running Queries Multiple - the Multiple of the current total number of 
> running Impalads which gives the maximum number of concurrently running 
> queries in this pool. 
> * Max Queued Queries Multiple - the Multiple of the current total number of 
> running Impalads which gives the maximum number of queries that can be queued 
> in this pool. 
> * Max Memory Multiple - the Multiple of the current total number of running 
> Impalads which gives the maximum memory available across the cluster for the 
> pool.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8684) test_mem_limit in test_admission_controller timed out waiting for query to end

2019-06-19 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8684:
--

 Summary: test_mem_limit in test_admission_controller timed out 
waiting for query to end
 Key: IMPALA-8684
 URL: https://issues.apache.org/jira/browse/IMPALA-8684
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


{code}
custom_cluster/test_admission_controller.py:1504: in test_mem_limit
{'request_pool': self.pool_name, 'mem_limit': query_mem_limit})
custom_cluster/test_admission_controller.py:1403: in run_admission_test
['admitted', 'timed-out'], curr_metrics, expected_admitted)
custom_cluster/test_admission_controller.py:1100: in wait_for_metric_changes
assert (time() - start_time < STRESS_TIMEOUT),\
E   AssertionError: Timed out waiting 90 seconds for metrics admitted,timed-out 
delta 5 current {'dequeued': 5, 'rejected': 14, 'released': 5, 'admitted': 10, 
'queued': 11, 'timed-out': 0} initial {'dequeued': 5, 'rejected': 14, 
'released': 0, 'admitted': 10, 'queued': 11, 'timed-out': 0}
E   assert (1560929088.700526 - 1560928997.952595) < 90
E+  where 1560929088.700526 = time()
{code}

IMPALA-8295 previously raised the timeout from 60 seconds to 90 seconds to fix 
a similar bug.

This bug happened in a build that did not contain "IMPALA-8536: Add Scalable 
Pool Configuration to Admission Controller"
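
For context, the failing helper follows the usual poll-until-delta pattern; a minimal sketch of that pattern, with illustrative names rather than the exact helpers in test_admission_controller.py:

{code}
import time

def wait_for_metric_delta(get_metrics, keys, initial, expected_delta, timeout_s=90):
    """Poll metrics until the summed delta over 'keys' reaches expected_delta."""
    start = time.time()
    while True:
        current = get_metrics()
        delta = sum(current[k] - initial[k] for k in keys)
        if delta >= expected_delta:
            return current
        assert time.time() - start < timeout_s, \
            "Timed out waiting %s seconds for metrics %s" % (timeout_s, ",".join(keys))
        time.sleep(1)
{code}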





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8689) test_hive_impala_interop failing with "Timeout >7200s"

2019-06-20 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8689:
--

 Summary: test_hive_impala_interop failing with "Timeout >7200s"
 Key: IMPALA-8689
 URL: https://issues.apache.org/jira/browse/IMPALA-8689
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.3.0
Reporter: Andrew Sherman
Assignee: Abhishek Rawat


I think this is the new test added in IMPALA-8617

{code}
custom_cluster/test_hive_parquet_codec_interop.py:78: in 
test_hive_impala_interop
.format(codec, hive_table, impala_table))
common/impala_test_suite.py:871: in run_stmt_in_hive
(stdout, stderr) = call.communicate()
/usr/lib64/python2.7/subprocess.py:800: in communicate
return self._communicate(input)
/usr/lib64/python2.7/subprocess.py:1401: in _communicate
stdout, stderr = self._communicate_with_poll(input)
/usr/lib64/python2.7/subprocess.py:1455: in _communicate_with_poll
ready = poller.poll()
E   Failed: Timeout >7200s
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8858) Add metrics to improve observability of idle executor groups

2019-08-12 Thread Andrew Sherman (JIRA)
Andrew Sherman created IMPALA-8858:
--

 Summary:  Add metrics to improve observability of idle executor 
groups
 Key: IMPALA-8858
 URL: https://issues.apache.org/jira/browse/IMPALA-8858
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Bikramjeet Vig


The metrics provided in [IMPALA-8806] are useful, but there is no way to see 
how many executor groups are idle. 
Please add a metric which measures the number of executor groups that are both 
healthy and idle.
It is OK for this to be 0 for a short time, for example when an executor 
group is created.
This should be part of the new "cluster-membership" metric group.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (IMPALA-8941) Add developer-only query option like MEM_LIMIT that only affects executors

2019-09-11 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-8941:
--

 Summary: Add developer-only query option like MEM_LIMIT that only 
affects executors
 Key: IMPALA-8941
 URL: https://issues.apache.org/jira/browse/IMPALA-8941
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


The MEM_LIMIT query option defines the maximum amount of memory a query can 
allocate on each node.  For testing it would be useful to have a developer-only 
query option that defines the maximum amount of memory a query can allocate on 
each executor node. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (IMPALA-9075) Add support for reading zstd text files

2019-10-21 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9075:
--

 Summary: Add support for reading zstd text files
 Key: IMPALA-9075
 URL: https://issues.apache.org/jira/browse/IMPALA-9075
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.3.0
Reporter: Andrew Sherman
Assignee: Andrew Sherman


IMPALA-8450 added support for zstd in parquet.
We should also add support for reading zstd-encoded text files.

Another useful jira to look at is IMPALA-8549 (Add support for scanning DEFLATE 
text files)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9098) TestQueries.test_union fails

2019-10-28 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9098:
--

 Summary: TestQueries.test_union fails
 Key: IMPALA-9098
 URL: https://issues.apache.org/jira/browse/IMPALA-9098
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


This *might* be a flaky test, like IMPALA-8959, or it might be a regression 
caused by IMPALA-8999.

{code}
Error Message
query_test/test_queries.py:77: in test_union 
self.run_test_case('QueryTest/union', vector) common/impala_test_suite.py:650: 
in run_test_case self.__verify_results_and_errors(vector, test_section, 
result, use_db) common/impala_test_suite.py:487: in __verify_results_and_errors 
replace_filenames_with_placeholder) common/test_result_verifier.py:456: in 
verify_raw_results VERIFIER_MAP[verifier](expected, actual) 
common/test_result_verifier.py:278: in verify_query_result_is_equal assert 
expected_results == actual_results E   assert Comparing QueryTestResults 
(expected vs actual): E 0,0 != 1,10 E 0,0 != 2,0 E 0,0 != 224,80 E  
   1,10 != 252,90 E 1,10 != 3,10 E 1000,2000 != 4,0 E 112,40 != 4,0 
E 140,50 != 5,10 E 168,60 != 6,0 E 196,70 != 7,10 E 2,0 != None 
E 2,0 != None E 224,80 != None E 252,90 != None E 28,10 != None 
E 3,10 != None E 3,10 != None E 4,0 != None E 4,0 != None E 
5,10 != None E 5,10 != None E 56,20 != None E 6,0 != None E 6,0 
!= None E 7,10 != None E 7,10 != None E 84,30 != None E Number 
of rows returned (expected vs actual): 27 != 10
Stacktrace
query_test/test_queries.py:77: in test_union
self.run_test_case('QueryTest/union', vector)
common/impala_test_suite.py:650: in run_test_case
self.__verify_results_and_errors(vector, test_section, result, use_db)
common/impala_test_suite.py:487: in __verify_results_and_errors
replace_filenames_with_placeholder)
common/test_result_verifier.py:456: in verify_raw_results
VERIFIER_MAP[verifier](expected, actual)
common/test_result_verifier.py:278: in verify_query_result_is_equal
assert expected_results == actual_results
E   assert Comparing QueryTestResults (expected vs actual):
E 0,0 != 1,10
E 0,0 != 2,0
E 0,0 != 224,80
E 1,10 != 252,90
E 1,10 != 3,10
E 1000,2000 != 4,0
E 112,40 != 4,0
E 140,50 != 5,10
E 168,60 != 6,0
E 196,70 != 7,10
E 2,0 != None
E 2,0 != None
E 224,80 != None
E 252,90 != None
E 28,10 != None
E 3,10 != None
E 3,10 != None
E 4,0 != None
E 4,0 != None
E 5,10 != None
E 5,10 != None
E 56,20 != None
E 6,0 != None
E 6,0 != None
E 7,10 != None
E 7,10 != None
E 84,30 != None
E Number of rows returned (expected vs actual): 27 != 10
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9106) Some TPC-DS queries fail on s3 in TestTpcdsQuery and TestTpcdsDecimalV2Query

2019-10-30 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9106:
--

 Summary: Some TPC-DS queries fail on s3 in TestTpcdsQuery and 
TestTpcdsDecimalV2Query
 Key: IMPALA-9106
 URL: https://issues.apache.org/jira/browse/IMPALA-9106
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.3.0
Reporter: Andrew Sherman


One example failure is
{code}
Error Message
query_test/test_tpcds_queries.py:499: in test_tpcds_q97 
self.run_test_case(self.get_workload() + '-decimal_v2-q97', vector) 
common/impala_test_suite.py:650: in run_test_case 
self.__verify_results_and_errors(vector, test_section, result, use_db) 
common/impala_test_suite.py:487: in __verify_results_and_errors 
replace_filenames_with_placeholder) common/test_result_verifier.py:456: in 
verify_raw_results VERIFIER_MAP[verifier](expected, actual) 
common/test_result_verifier.py:278: in verify_query_result_is_equal assert 
expected_results == actual_results E   assert Comparing QueryTestResults 
(expected vs actual): E 540401,286628,174 != 539532,286629,173
Stacktrace
query_test/test_tpcds_queries.py:499: in test_tpcds_q97
self.run_test_case(self.get_workload() + '-decimal_v2-q97', vector)
common/impala_test_suite.py:650: in run_test_case
self.__verify_results_and_errors(vector, test_section, result, use_db)
common/impala_test_suite.py:487: in __verify_results_and_errors
replace_filenames_with_placeholder)
common/test_result_verifier.py:456: in verify_raw_results
VERIFIER_MAP[verifier](expected, actual)
common/test_result_verifier.py:278: in verify_query_result_is_equal
assert expected_results == actual_results
E   assert Comparing QueryTestResults (expected vs actual):
E 540401,286628,174 != 539532,286629,173
{code}

The list of failures is

{code}
query_test.test_tpcds_queries.TestTpcdsQuery.test_tpcds_q67a[protocol: beeswax 
| exec_option: {'decimal_v2': 0, 'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsDecimalV2Query.test_tpcds_q97[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsDecimalV2Query.test_tpcds_q67a[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsInsert.test_tpcds_partitioned_insert[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsQuery.test_tpcds_q70a[protocol: beeswax 
| exec_option: {'decimal_v2': 0, 'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsDecimalV2Query.test_tpcds_q43[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsDecimalV2Query.test_tpcds_q70a[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsDecimalV2Query.test_tpcds_q51[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsDecimalV2Query.test_tpcds_q51a[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsDecimalV2Query.test_tpcds_q53[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
query_test.test_tpcds_queries.TestTpcdsInsert.test_expr_insert[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 5000, 'di

[jira] [Created] (IMPALA-9240) Impala shell using hs2-http reports all http error codes as EOFError

2019-12-12 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9240:
--

 Summary: Impala shell using hs2-http reports all http error codes 
as EOFError
 Key: IMPALA-9240
 URL: https://issues.apache.org/jira/browse/IMPALA-9240
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


For example if I try to connect to an http endpoint that returns 503 I see
{code}
$ impala-shell -V --protocol='hs2-http' -i "localhost:28000"
Starting Impala Shell without Kerberos authentication
Warning: --connect_timeout_ms is currently ignored with HTTP transport.
Opened TCP connection to localhost:28000
Error connecting: EOFError, 
{code}
At present Impala shell does not properly check HTTP return codes. 
When this fix is complete it should also be put into Impyla.
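
A minimal sketch of the missing check, using Python 2's plain httplib rather than the shell's actual THttpClient transport (the endpoint and names are illustrative):

{code}
import httplib

# Surface non-200 responses with their status code instead of letting an
# empty response body bubble up as EOFError.
conn = httplib.HTTPConnection('localhost', 28000)
conn.request('POST', '/cliservice', body='')
resp = conn.getresponse()
if resp.status != 200:
    raise RuntimeError('HTTP error %d: %s' % (resp.status, resp.reason))
{code}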



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9540) Impala Shell sends duplicate "Host" headers in http mode

2020-03-20 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9540:
--

 Summary: Impala Shell sends duplicate "Host" headers in http mode
 Key: IMPALA-9540
 URL: https://issues.apache.org/jira/browse/IMPALA-9540
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


Running "impala-shell --protocol='hs2-http' ..." the impala shell sends two 
copies of the "Host" header. When connecting to the minicluster the values are
{code}
Host: “localhost”
Host: “localhost:8080”
{code}
This is bad because the http server in golang's net/http package will reject 
any http call with 
{code}
badRequestError("too many Host headers")
{code}
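
One easy way to end up in this state with Python 2's standard HTTP client, shown as a hedged sketch (illustrative only; not asserted to be the exact code path in the shell's transport):

{code}
import httplib

conn = httplib.HTTPConnection('localhost', 8080)
conn.putrequest('POST', '/cliservice')  # httplib itself emits "Host: localhost:8080"
conn.putheader('Host', 'localhost')     # an explicit header now adds a second "Host"
# endheaders() would send both Host lines; a strict server such as Go's
# net/http rejects the request with "too many Host headers".
{code}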



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9665) Database not found errors in query_test.test_insert (TestInsertQueries)

2020-04-16 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9665:
--

 Summary: Database not found errors in query_test.test_insert 
(TestInsertQueries)
 Key: IMPALA-9665
 URL: https://issues.apache.org/jira/browse/IMPALA-9665
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


The query_test/test_insert.py test uses the unique_database fixture to create a 
database. 
The tests fail when they try to use the database

Example failure:

{code}
. 
query_test/test_insert.py::TestInsertQueries::()::test_insert_large_string[compression_codec:
 none | protocol: beeswax | exec_option: {'sync_ddl': 1, 'batch_size': 1, 
'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
. 
query_test/test_insert.py::TestInsertQueries::()::test_insert[compression_codec:
 none | protocol: beeswax | exec_option: {'sync_ddl': 0, 'batch_size': 0, 
'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
text/none]
F 
query_test/test_insert.py::TestInsertQueries::()::test_insert[compression_codec:
 snappy | protocol: beeswax | exec_option: {'sync_ddl': 1, 'batch_size': 0, 
'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]
 query_test/test_insert.py:136: in test_insert
 multiple_impalad=vector.get_value('exec_option')['sync_ddl'] == 1)
 common/impala_test_suite.py:567: in run_test_case
 table_format_info, use_db, pytest.config.option.scale_factor)
 common/impala_test_suite.py:778: in change_database
 impala_client.execute(query)
 common/impala_connection.py:205: in execute
 return self.__beeswax_client.execute(sql_stmt, user=user)
 beeswax/impala_beeswax.py:187: in execute
 handle = self.__execute_query(query_string.strip(), user=user)
 beeswax/impala_beeswax.py:363: in __execute_query
 handle = self.execute_query_async(query_string, user=user)
 beeswax/impala_beeswax.py:357: in execute_query_async
 handle = self.__do_rpc(lambda: self.imp_service.query(query,))
 beeswax/impala_beeswax.py:520: in __do_rpc
 raise ImpalaBeeswaxException(self.__build_error_message(b), b)
 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
 EINNER EXCEPTION: 
 EMESSAGE: AnalysisException: Database does not exist: test_insert_ecc90474
 {code}

The server logs seem to show that the sequence is

{code}
DROP DATABASE IF EXISTS test_insert_ecc90474
CREATE DATABASE test_insert_ecc90474
USE test_insert_ecc90474
DROP DATABASE test_insert_ecc90474
USE test_insert_ecc90474
{code}

The server logs show:

{code}
I0416 07:46:52.364746 15419 impala-beeswax-server.cc:54] query(): query=DROP 
DATABASE IF EXISTS `test_insert_ecc90474` CASCADE
  01: query (string) = "DROP DATABASE IF EXISTS `test_insert_ecc90474` CASCADE",
I0416 07:46:52.368064 15419 Frontend.java:1490] 
2348c561d46371c5:c8a2c29b] Analyzing query: DROP DATABASE IF EXISTS 
`test_insert_ecc90474` CASCADE db: default
I0416 07:46:57.091251 15419 impala-beeswax-server.cc:54] query(): query=CREATE 
DATABASE `test_insert_ecc90474`
  01: query (string) = "CREATE DATABASE `test_insert_ecc90474`",
I0416 07:46:57.095499 15419 Frontend.java:1490] 
84415a0cf8d9f7c9:878434bc] Analyzing query: CREATE DATABASE 
`test_insert_ecc90474` db: default
I0416 07:46:57.105612 15419 ImpaladCatalog.java:209] 
84415a0cf8d9f7c9:878434bc] Adding: DATABASE:test_insert_ecc90474 
version: 44117 size: 193
I0416 07:46:57.111361 25769 impala-beeswax-server.cc:54] query(): query=use 
test_insert_ecc90474
  01: query (string) = "use test_insert_ecc90474",
I0416 07:46:57.143752 25769 Frontend.java:1490] 
df4f5c820a08f50f:71a027de] Analyzing query: use test_insert_ecc90474 
db: default
I0416 07:46:57.485566 15419 impala-beeswax-server.cc:54] query(): query=DROP 
DATABASE `test_insert_ecc90474` CASCADE
  01: query (string) = "DROP DATABASE `test_insert_ecc90474` CASCADE",
I0416 07:46:57.490993 15419 Frontend.java:1490] 
8f4c03dd31cc7d2b:03bf91cf] Analyzing query: DROP DATABASE 
`test_insert_ecc90474` CASCADE db: default
I0416 07:46:57.503159 15419 ImpaladCatalog.java:209] 
8f4c03dd31cc7d2b:03bf91cf] Deleting: DATABASE:test_insert_ecc90474 
version: 44118 size: 193
I0416 07:47:01.090381 19101 ImpaladCatalog.java:209] Adding: 
DATABASE:test_insert_ecc90474 version: 44117 size: 186
I0416 07:47:03.091727 19101 ImpaladCatalog.java:209] Deleting: 
DATABASE:test_insert_ecc90474 version: 44118 size: 186

[second log file]

I0416 07:46:57.148188 25771 impala-beeswax-server.cc:54] query(): query=use 
test_insert_ecc90474
  01: query (string) = "use test_insert_ecc90474",
I0416 07:46:57.166188 25771 Frontend.java:1490] 
f8417aebe9247626:834c219f] An

[jira] [Created] (IMPALA-9666) TestImpalaShellInteractive fails in a misleading way

2020-04-16 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9666:
--

 Summary: TestImpalaShellInteractive fails in a misleading way
 Key: IMPALA-9666
 URL: https://issues.apache.org/jira/browse/IMPALA-9666
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


If the test fails in _wait_for_num_open_sessions then the failure says
{code}
E   TypeError: not all arguments converted during string formatting
{code}
which is the result of 
{code}
  LOG.exception("Error: " % err)
{code}
which should be
{code}
  LOG.exception("Error: %s" % err)
{code}
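
The failure mode is easy to confirm in isolation (a standalone snippet, not the test code itself):

{code}
err = Exception("boom")
try:
    "Error: " % err   # '%' with no conversion specifier cannot consume the argument
except TypeError as e:
    print(e)          # not all arguments converted during string formatting
{code}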



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9667) TestImpalaShellInteractive failing as session not correctly closed

2020-04-16 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9667:
--

 Summary: TestImpalaShellInteractive failing as session not 
correctly closed
 Key: IMPALA-9667
 URL: https://issues.apache.org/jira/browse/IMPALA-9667
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


TestImpalaShellInteractive.test_reconnect and 
TestImpalaShellInteractive.test_ddl_queries_are_closed fail waiting for metric 
value 'impala-server.num-open-beeswax-sessions' to be 0.

The tests actually fail with 
{code}
E   TypeError: not all arguments converted during string formatting
{code}
which is the result of [IMPALA-9666]






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9672) TestInsertQueries.test_acid_nonacid_insert fails reading unexpected number of rows

2020-04-17 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9672:
--

 Summary: TestInsertQueries.test_acid_nonacid_insert fails reading 
unexpected number of rows
 Key: IMPALA-9672
 URL: https://issues.apache.org/jira/browse/IMPALA-9672
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


{code}
query_test/test_insert.py:153: in test_acid_nonacid_insert
multiple_impalad=vector.get_value('exec_option')['sync_ddl'] == 1)
common/impala_test_suite.py:687: in run_test_case
self.__verify_results_and_errors(vector, test_section, result, use_db)
common/impala_test_suite.py:523: in __verify_results_and_errors
replace_filenames_with_placeholder)
common/test_result_verifier.py:456: in verify_raw_results
VERIFIER_MAP[verifier](expected, actual)
common/test_result_verifier.py:278: in verify_query_result_is_equal
assert expected_results == actual_results
E   assert Comparing QueryTestResults (expected vs actual):
E 1 != None
E 2 != None
E 3 != None
E 4 != None
E Number of rows returned (expected vs actual): 4 != 0
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9673) Tests expecting results to be in test-warehouse/managed but find test-warehouse

2020-04-17 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9673:
--

 Summary: Tests expecting results to be in test-warehouse/managed 
but find  test-warehouse
 Key: IMPALA-9673
 URL: https://issues.apache.org/jira/browse/IMPALA-9673
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


TestDdlStatements.test_create_database
{code}
ERROR:test_configuration:Comparing QueryTestResults (expected vs actual):
'test_create_database_b10825a1_2','hdfs://localhost:20500/test-warehouse/managed/test_create_database_b10825a1_2.db','For
 testing' != 
'test_create_database_b10825a1_2','hdfs://localhost:20500/test-warehouse/test_create_database_b10825a1_2.db','For
 testing'
{code}

TestMetadataQueryStatements.test_describe_db
{code}
ERROR:test_configuration:Comparing QueryTestResults (expected vs actual):
'default','hdfs://localhost:20500/test-warehouse/managed','Default Hive 
database' != 'default','hdfs://localhost:20500/test-warehouse','Default Hive 
database'
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9674) TestFailingAcidInserts.test_failing_inserts fails with Error making 'truncateTable' RPC to Hive Metastore: CAUSED BY: TransactionException: Failed to acquire lock for

2020-04-17 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9674:
--

 Summary: TestFailingAcidInserts.test_failing_inserts fails with  
Error making 'truncateTable' RPC to Hive Metastore:  CAUSED BY: 
TransactionException: Failed to acquire lock for transaction 2011
 Key: IMPALA-9674
 URL: https://issues.apache.org/jira/browse/IMPALA-9674
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


{code}
Traceback (most recent call last):
  File 
"/data/jenkins/workspace/impala-cdpd-master-core/repos/Impala/tests/stress/test_acid_stress.py",
 line 43, in run
return self.func(*self.args, **self.kwargs)
  File 
"/data/jenkins/workspace/impala-cdpd-master-core/repos/Impala/tests/stress/test_acid_stress.py",
 line 309, in _impala_role_insert
raise e
ImpalaBeeswaxException: ImpalaBeeswaxException:
 INNER EXCEPTION: 
 MESSAGE: CatalogException: Failed to truncate table: 
test_failing_inserts_b980fc6.test_inserts_fail.
Table may be in a partially truncated state.
CAUSED BY: ImpalaRuntimeException: Error making 'truncateTable' RPC to Hive 
Metastore: 
CAUSED BY: TransactionException: Failed to acquire lock for transaction 2011
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9675) TestCustomHiveConfigs.test_ctas_read_write_consistence fails as results contain extra characters

2020-04-17 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9675:
--

 Summary: TestCustomHiveConfigs.test_ctas_read_write_consistence 
fails as results contain extra characters
 Key: IMPALA-9675
 URL: https://issues.apache.org/jira/browse/IMPALA-9675
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


This test was re-enabled in IMPALA-9071, but it fails with 
{code}
custom_cluster/test_custom_hive_configs.py:64: in 
test_ctas_read_write_consistence
'create table %s '
custom_cluster/test_custom_hive_configs.py:78: in __check_query_results
assert expected_results == res.get_data()
E   assert '1\t2\tname' == ''
E - 1\t2\tname
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-9675) TestCustomHiveConfigs.test_ctas_read_write_consistence fails as results contain extra characters

2020-04-17 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-9675.

Resolution: Duplicate

> TestCustomHiveConfigs.test_ctas_read_write_consistence fails as results 
> contain extra characters
> 
>
> Key: IMPALA-9675
> URL: https://issues.apache.org/jira/browse/IMPALA-9675
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Critical
>
> This test was re-enabled in IMPALA-9071, but it fails with 
> {code}
> custom_cluster/test_custom_hive_configs.py:64: in 
> test_ctas_read_write_consistence
> 'create table %s '
> custom_cluster/test_custom_hive_configs.py:78: in __check_query_results
> assert expected_results == res.get_data()
> E   assert '1\t2\tname' == ''
> E - 1\t2\tname
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9680) Compressed inserts failing

2020-04-20 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9680:
--

 Summary: Compressed inserts failing
 Key: IMPALA-9680
 URL: https://issues.apache.org/jira/browse/IMPALA-9680
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


TestCompressedFormats.test_compressed_formats is failing with
{code}
query_test/test_compressed_formats.py:83: in test_compressed_formats
'tinytable', db_suffix, suffix, '00_0', extension)
query_test/test_compressed_formats.py:114: in _copy_and_query_compressed_file
self.filesystem_client.copy(src_file, dest_file, overwrite=True)
/data/jenkins/workspace/impala-cdpd-master-exhaustive-release/repos/Impala/tests/util/hdfs_util.py:79:
 in copy
self.hdfs_filesystem_client.copy(src, dst, overwrite)
/data/jenkins/workspace/impala-cdpd-master-exhaustive-release/repos/Impala/tests/util/hdfs_util.py:241:
 in copy
'{0} copy failed: '.format(self.filesystem_type) + stderr + "; " + stdout
E   AssertionError: HDFS copy failed: cp: 
`/test-warehouse/managed/tinytable_snap_copy/00_0.snappy': No such file or 
directory: 
`hdfs://localhost:20500/test-warehouse/managed/tinytable_snap_copy/00_0.snappy'
E   ;
{code}

TestInsertQueries.test_insert_mem_limit fails with
{code}
query_test/test_insert.py:168: in test_insert_mem_limit
multiple_impalad=vector.get_value('exec_option')['sync_ddl'] == 1)
/data/jenkins/workspace/impala-cdpd-master-exhaustive-release/repos/Impala/tests/common/impala_test_suite.py:662:
 in run_test_case
self.__verify_exceptions(test_section['CATCH'], str(e), use_db)
/data/jenkins/workspace/impala-cdpd-master-exhaustive-release/repos/Impala/tests/common/impala_test_suite.py:481:
 in __verify_exceptions
(expected_str, actual_str)
E   AssertionError: Unexpected exception string. Expected: Memory limit exceeded
E   Not found in actual: ImpalaBeeswaxException: Query aborted:Writing to 
compressed text table is not supported.
{code}

I am guessing these failures are related.


 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9681) LdapImpalaShellTest.testShellLdapAuth failed

2020-04-20 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9681:
--

 Summary: LdapImpalaShellTest.testShellLdapAuth failed
 Key: IMPALA-9681
 URL: https://issues.apache.org/jira/browse/IMPALA-9681
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


{code}
Query submitted at: 2020-04-20 11:56:08 (Coordinator: 
http://impala-ec2-centos74-m5-4xlarge-ondemand-0ed7.vpc.cloudera.com:25000)
Query progress can be monitored at: 
http://impala-ec2-centos74-m5-4xlarge-ondemand-0ed7.vpc.cloudera.com:25000/query_plan?query_id=1c46b499d56c54f4:bf545cf0
Fetched 1 row(s) in 0.42s

at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.impala.customcluster.RunShellCommand.Run(RunShellCommand.java:50)
at 
org.apache.impala.customcluster.LdapImpalaShellTest.testShellLdapAuth(LdapImpalaShellTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.apache.directory.server.core.integ.CreateLdapServerRule$2.evaluate(CreateLdapServerRule.java:117)
at 
org.apache.directory.server.core.integ.CreateDsRule$2.evaluate(CreateDsRule.java:123)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (IMPALA-9666) TestImpalaShellInteractive fails in a misleading way

2020-04-21 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman closed IMPALA-9666.
--
Resolution: Fixed

Note that the actual commit mentioned here is empty. The concurrent change 
{code}
IMPALA-3343, IMPALA-9489: Make impala-shell compatible with python 3
{code}
made a simultaneous fix, and gerrit resolved this conflict by making an empty 
change.

> TestImpalaShellInteractive fails in a misleading way
> 
>
> Key: IMPALA-9666
> URL: https://issues.apache.org/jira/browse/IMPALA-9666
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> If the test fails in _wait_for_num_open_sessions then the failure says
> {code}
> E   TypeError: not all arguments converted during string formatting
> {code}
> which is the result of 
> {code}
>   LOG.exception("Error: " % err)
> {code}
> which should be
> {code}
>   LOG.exception("Error: %s" % err)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9745) SELECT from view fails with "AnalysisException: No matching function with signature: to_timestamp(TIMESTAMP, STRING)" after expression rewrite.

2020-05-13 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9745:
--

 Summary: SELECT from view fails with "AnalysisException: No 
matching function with signature: to_timestamp(TIMESTAMP, STRING)" after 
expression rewrite.
 Key: IMPALA-9745
 URL: https://issues.apache.org/jira/browse/IMPALA-9745
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 2.11.0, Impala 4.0
Reporter: Andrew Sherman


Simple test case

{code}
drop view if exists test_replication_view;
drop table if exists test_replication;
create table test_replication(cob string);
insert into test_replication values('2018-06-07');
insert into test_replication values('2018-06-07');
insert into test_replication values('2018-06-07');
insert into test_replication values('2018-06-08');
select * from test_replication;

create view test_replication_view as select to_timestamp(cob, '-MM-dd') 
cob_ts,cob trade_date from test_replication;
select 1 from test_replication_view deal WHERE trade_date = deal.cob_ts AND 
deal.cob_ts = '2018-06-07';
{code}

The problem seems to be that after expression rewrite the type of cob has 
become a timestamp and so we look for the function "to_timestamp(TIMESTAMP, 
STRING)" instead of "to_timestamp(STRING, STRING)".

A workaround is to run with
{code}
set enable_expr_rewrites=false;
{code}

For comparison a similar query runs OK in mysql

{code}
drop view if exists test_replication_view;
drop table if exists test_replication;
create table test_replication(cob varchar(255));
insert into test_replication values('2018-06-07');
insert into test_replication values('2018-06-07');
insert into test_replication values('2018-06-07');
insert into test_replication values('2018-06-08');
select * from test_replication;

create view test_replication_view as select str_to_date(cob, '%Y-%m-%d') 
cob_ts,cob trade_date from test_replication;
select 1 from test_replication_view deal WHERE trade_date = deal.cob_ts AND 
deal.cob_ts = '2018-06-07'
{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-9909) Print body of http error code in Impala Shell.

2020-06-29 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-9909:
--

 Summary: Print body of http error code in Impala Shell.
 Key: IMPALA-9909
 URL: https://issues.apache.org/jira/browse/IMPALA-9909
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


Make Impala Shell closer to Impyla by printing the body of any http error code 
message received when using hs2-over-http. The common case is that there is 
nothing in the body, in which case the behavior is unchanged.
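
For illustration, here is a minimal Python sketch (not the shell's actual transport 
code; the function and names are hypothetical) of surfacing the body of an HTTP 
error response instead of discarding it:
{code:python}
# Hypothetical sketch: report the body of an HTTP error response, if any.
import http.client


def describe_http_error(host, port, path="/cliservice"):
    """POST to an hs2-over-http endpoint and, on an error status, return a
    message containing both the status line and whatever body was sent."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("POST", path, body=b"")
        resp = conn.getresponse()
        body = resp.read().decode("utf-8", errors="replace")
        if resp.status >= 300:
            message = "HTTP code {}: {}".format(resp.status, resp.reason)
            # The common case is an empty body, in which case the message
            # looks the same as before; otherwise append the server's text.
            if body.strip():
                message += "\n" + body
            return message
        return None
    finally:
        conn.close()
{code}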




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10000) Impala team should celebrate

2020-07-23 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10000:
---

 Summary: Impala team should celebrate
 Key: IMPALA-10000
 URL: https://issues.apache.org/jira/browse/IMPALA-10000
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-9909) Print body of http error code in Impala Shell.

2020-08-14 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-9909.

Resolution: Fixed

> Print body of http error code in Impala Shell.
> --
>
> Key: IMPALA-9909
> URL: https://issues.apache.org/jira/browse/IMPALA-9909
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> Make Impala Shell closer to Impyla by printing the body of any http error 
> code message received when using hs2-over-http. The common case is that there 
> is nothing in the body, in which case the behavior is unchanged.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10244) Make non-scalable failures to dequeue observable

2020-10-15 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10244:
---

 Summary: Make non-scalable failures to dequeue observable
 Key: IMPALA-10244
 URL: https://issues.apache.org/jira/browse/IMPALA-10244
 Project: IMPALA
  Issue Type: Improvement
Reporter: Andrew Sherman
Assignee: Andrew Sherman


One of the important ways to observe Impala throughput is by looking at when 
queries are queued. This can be an indication that more resources should be 
added to the cluster by adding more executor groups. This is only a good 
strategy if adding more resources will help with the current workload. In some 
situations the head of the query queue cannot be executed because of resource 
constraints on the coordinator. 

{code}
Could not dequeue query id=89440761b4c2:261de886 reason: Not enough 
memory available on host coordinator-0:22050. Needed 104.02 MB but only 103.16 
MB out of 5.18 GB was available.
{code}
or
{code}
Could not dequeue query id=6d48e94edd686f8f:a114d873 reason: Not enough 
admission control slots available on host coordinator-0:22050. Needed 1 slots 
but 30/30 are already in use.
{code}

In these cases the coordinator is the bottleneck so adding more executor groups 
will not help. This jira is to make these cases observable by adding a new 
counter which is incremented when a dequeue fails because of resource 
constraints that would not be resolved by adding more executor groups.
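
Purely as an illustration of the intended accounting (the real logic lives in the 
C++ admission controller; all names below are hypothetical), here is a Python 
sketch of classifying dequeue failures that adding executor groups would not fix:
{code:python}
# Hypothetical sketch: count dequeue failures caused by coordinator-side limits.
COORDINATOR_LIMITED_MARKERS = (
    "Not enough memory available on host",  # coordinator memory pressure
    "Not enough admission control slots",   # coordinator slots exhausted
)


class DequeueMetrics(object):
    def __init__(self):
        self.total_dequeue_failures = 0
        self.coordinator_limited_failures = 0  # adding executors will not help

    def record_failure(self, reason, coordinator_host):
        self.total_dequeue_failures += 1
        if coordinator_host in reason and any(
                marker in reason for marker in COORDINATOR_LIMITED_MARKERS):
            self.coordinator_limited_failures += 1


metrics = DequeueMetrics()
metrics.record_failure(
    "Not enough memory available on host coordinator-0:22050. "
    "Needed 104.02 MB but only 103.16 MB out of 5.18 GB was available.",
    "coordinator-0:22050")
assert metrics.coordinator_limited_failures == 1
{code}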




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-9240) Impala shell using hs2-http reports all http error codes as EOFError

2020-10-19 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-9240.

Resolution: Fixed

Thanks [~tarmstrong] for pointing this out

> Impala shell using hs2-http reports all http error codes as EOFError
> 
>
> Key: IMPALA-9240
> URL: https://issues.apache.org/jira/browse/IMPALA-9240
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> For example if I try to connect to an http endpoint that returns 503 I see
> {code}
> $ impala-shell -V --protocol='hs2-http' -i "localhost:28000"
> Starting Impala Shell without Kerberos authentication
> Warning: --connect_timeout_ms is currently ignored with HTTP transport.
> Opened TCP connection to localhost:28000
> Error connecting: EOFError, 
> {code}
> At present Impala shell does not properly check http return calls. 
> When this fix is complete it should be also put into Impyla.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-10244) Make non-scalable failures to dequeue observable

2020-10-23 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-10244.
-
Resolution: Fixed

> Make non-scalable failures to dequeue observable
> 
>
> Key: IMPALA-10244
> URL: https://issues.apache.org/jira/browse/IMPALA-10244
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> One of the important ways to observe Impala throughput is by looking at when 
> queries are queued. This can be an indication that more resources should be 
> added to the cluster by adding more executor groups. This is only a good 
> strategy if adding more resources will help with the current workload. In 
> some situations the head of the query queue cannot be executed because of 
> resource constraints on the coordinator. 
> {code}
> Could not dequeue query id=89440761b4c2:261de886 reason: Not 
> enough memory available on host coordinator-0:22050. Needed 104.02 MB but 
> only 103.16 MB out of 5.18 GB was available.
> {code}
> or
> {code}
> Could not dequeue query id=6d48e94edd686f8f:a114d873 reason: Not 
> enough admission control slots available on host coordinator-0:22050. Needed 
> 1 slots but 30/30 are already in use.
> {code}
> In these cases the coordinator is the bottleneck so adding more executor 
> groups will not help. This jira is to make these cases observable by adding a 
> new counter which is incremented when a dequeue fails because of resource 
> constraints that would not be resolved by adding more executor groups.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10309) Use sleep time from a Retry-After header in Impala Shell

2020-11-03 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10309:
---

 Summary: Use sleep time from a Retry-After header in Impala Shell 
 Key: IMPALA-10309
 URL: https://issues.apache.org/jira/browse/IMPALA-10309
 Project: IMPALA
  Issue Type: Improvement
Reporter: Andrew Sherman
Assignee: Andrew Sherman


When Impala Shell receives an http error message (that is a message with
http code greater than or equal to 300), it may sleep for a time before
retrying. If the message contains a 'Retry-After' header that has an
integer value, then this should be used as the time for which to sleep.
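
A minimal sketch of the header handling described above, assuming a dict-like 
mapping of response headers; this is not the shell's actual retry loop:
{code:python}
import time


def sleep_before_retry(headers, default_sleep_s=1.0, max_sleep_s=600.0):
    """Sleep for the Retry-After value when it is a plain integer number of
    seconds, otherwise fall back to the default backoff."""
    value = headers.get("Retry-After")
    try:
        sleep_s = float(int(value))        # only accept integer seconds
    except (TypeError, ValueError):
        sleep_s = default_sleep_s          # header missing or not an integer
    time.sleep(min(sleep_s, max_sleep_s))  # cap to keep retries bounded


# Example: a 503 response carrying "Retry-After: 5" sleeps about 5 seconds.
# sleep_before_retry({"Retry-After": "5"})
{code}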



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-10249) TestImpalaShell.test_queries_closed is flaky

2020-11-18 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-10249.
-
Resolution: Fixed

> TestImpalaShell.test_queries_closed is flaky
> 
>
> Key: IMPALA-10249
> URL: https://issues.apache.org/jira/browse/IMPALA-10249
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Quanlong Huang
>Assignee: Andrew Sherman
>Priority: Critical
>  Labels: broken-build, flaky
>
> Saw a failure in a CORE ASAN build:
> shell.test_shell_commandline.TestImpalaShell.test_queries_closed[table_format_and_file_extension:
>  ('textfile', '.txt') | protocol: hs2-http] (from pytest)
> {code:java}
> /data/jenkins/workspace/impala-cdpd-master-core-asan/repos/Impala/tests/shell/test_shell_commandline.py:365:
>  in test_queries_closed
> assert 0 == impalad_service.get_num_in_flight_queries()
> E   assert 0 == 1
> E+  where 1 =  >()
> E+where  > = 
>  0x7ac8510>.get_num_in_flight_queries {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-10309) Use sleep time from a Retry-After header in Impala Shell

2020-12-22 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-10309.
-
Resolution: Fixed

Yes, this is fixed, thanks [~tarmstrong].

> Use sleep time from a Retry-After header in Impala Shell 
> -
>
> Key: IMPALA-10309
> URL: https://issues.apache.org/jira/browse/IMPALA-10309
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Clients
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> When Impala Shell receives an http error message (that is a message with
> http code greater than or equal to 300), it may sleep for a time before
> retrying. If the message contains a 'Retry-After' header that has an
> integer value, then this should be used as the time for which to sleep.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-9540) Impala Shell sends duplicate "Host" headers in http mode

2020-12-22 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-9540.

Resolution: Fixed

> Impala Shell sends duplicate "Host" headers in http mode
> 
>
> Key: IMPALA-9540
> URL: https://issues.apache.org/jira/browse/IMPALA-9540
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> Running "impala-shell --protocol='hs2-http' ..." the impala shell sends two 
> copies of the "Host" header. When connecting to the minicluster the values are
> {code}
> Host: “localhost”
> Host: “localhost:8080”
> {code}
> This is bad because the http server in golang's net/http package will reject 
> any http call with 
> {code}
> badRequestError("too many Host headers")
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10447) Missing \n every 1024 or 2048 lines when exporting output from shell to a file

2021-01-18 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10447:
---

 Summary: Missing \n every 1024 or 2048 lines when exporting output 
from shell to a file
 Key: IMPALA-10447
 URL: https://issues.apache.org/jira/browse/IMPALA-10447
 Project: IMPALA
  Issue Type: Bug
  Components: Clients
Affects Versions: Impala 4.0
Reporter: Andrew Sherman
Assignee: Andrew Sherman


When Impala shell exports output to a file, for example 
{code:java}
impala-shell -B  -V  -q "select * from tpcds.item" -o filex.csv 
--output_delimiter=';'
 {code}
then every 1024 (or maybe 2048) rows a newline is missing.

I think the problem is here: 
https://github.com/apache/impala/blob/9bb7157bf014282c95ab3e233b80d77e00c95b52/shell/shell_output.py#L119
where we now use write instead of print to output the rows.
It may be sufficient to add a
{code}
out_file.write('\n')
{code} 
here
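
As a toy reproduction of the symptom (not the shell code itself), the sketch below 
shows why a batch written with write() needs the explicit trailing newline that 
print() used to add:
{code:python}
import io


def write_batch(out_file, rows, delimiter=';'):
    batch = '\n'.join(delimiter.join(row) for row in rows)
    out_file.write(batch)
    out_file.write('\n')  # without this, the next batch starts mid-line


buf = io.StringIO()
write_batch(buf, [("1", "a"), ("2", "b")])
write_batch(buf, [("3", "c")])
assert buf.getvalue() == "1;a\n2;b\n3;c\n"
{code}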



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10587) TestRanger.test_show_grant failed in ASAN build

2021-03-15 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10587:
---

 Summary: TestRanger.test_show_grant failed in ASAN build
 Key: IMPALA-10587
 URL: https://issues.apache.org/jira/browse/IMPALA-10587
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 4.0
Reporter: Andrew Sherman
Assignee: Fang-Yu Rao


h3. Error Message

{code}
assert [['USER', 'je... '', '', ...]] == []
Left contains more items, first extra item:
['USER', 'jenkins', 'test_show_grant_cf8fca4c_db', '', '', '', ...]
Full diff:
+ []
- [['USER',
-  'jenkins',
-  'test_show_grant_cf8fca4c_db',
-  '',
-  '',
-  '',
-  '*',
-  'select',
-  'false']]
{code}
h3. Stacktrace

{code}
authorization/test_ranger.py:212: in test_show_grant
    unique_table)
authorization/test_ranger.py:331: in _test_show_grant_basic
    TestRanger._check_privileges(result, [])
authorization/test_ranger.py:728: in _check_privileges
    assert map(columns, result.data) == expected
E   assert [['USER', 'je... '', '', ...]] == []
E   Left contains more items, first extra item: ['USER', 'jenkins', 'test_show_grant_cf8fca4c_db', '', '', '', ...]
E   Full diff:
E   + []
E   - [['USER',
E   -  'jenkins',
E   -  'test_show_grant_cf8fca4c_db',
E   -  '',
E   -  '',
E   -  '',
E   -  '*',
E   -  'select',
E   -  'false']]
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10588) PlannerTest/resource-requirements.test fails with bad mem estimates (from number of files?)

2021-03-16 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10588:
---

 Summary: PlannerTest/resource-requirements.test fails with bad mem 
estimates (from number of files?)
 Key: IMPALA-10588
 URL: https://issues.apache.org/jira/browse/IMPALA-10588
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 4.0
Reporter: Andrew Sherman


We see unexpected memory estimates in the plan for "select * from 
tpch_orc_def.lineitem" with Hive v3.

The first line to diff is 

{code}
Per-Host Resource Estimates: Memory=188MB 
^
{code}

but in the scan we see
{code}
HDFS partitions=1/1 files=1 size=142.84MB
{code}
instead of the expected 
{code}
HDFS partitions=1/1 files=5 size=142.72MB
{code}
Could this be a regression from the recent change IMPALA-10503 which changed 
data loading?

{code}
Section PLAN of query:
select * from tpch_orc_def.lineitem

Actual does not match expected result:
Max Per-Host Resource Reservation: Memory=12.00MB Threads=2
Per-Host Resource Estimates: Memory=188MB
^
Analyzed query: SELECT * FROM tpch_orc_def.lineitem

F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
|  Per-Host Resources: mem-estimate=188.00MB mem-reservation=12.00MB 
thread-reservation=2
PLAN-ROOT SINK
|  output exprs: tpch_orc_def.lineitem.l_orderkey, 
tpch_orc_def.lineitem.l_partkey, tpch_orc_def.lineitem.l_suppkey, 
tpch_orc_def.lineitem.l_linenumber, tpch_orc_def.lineitem.l_quantity, 
tpch_orc_def.lineitem.l_extendedprice, tpch_orc_def.lineitem.l_discount, 
tpch_orc_def.lineitem.l_tax, tpch_orc_def.lineitem.l_returnflag, 
tpch_orc_def.lineitem.l_linestatus, tpch_orc_def.lineitem.l_shipdate, 
tpch_orc_def.lineitem.l_commitdate, tpch_orc_def.lineitem.l_receiptdate, 
tpch_orc_def.lineitem.l_shipinstruct, tpch_orc_def.lineitem.l_shipmode, 
tpch_orc_def.lineitem.l_comment
|  mem-estimate=100.00MB mem-reservation=4.00MB spill-buffer=2.00MB 
thread-reservation=0
|
00:SCAN HDFS [tpch_orc_def.lineitem]
   HDFS partitions=1/1 files=1 size=142.84MB
   stored statistics:
 table: rows=6.00M size=142.84MB
 columns: all
   extrapolated-rows=disabled max-scan-range-rows=6.00M
   mem-estimate=88.00MB mem-reservation=8.00MB thread-reservation=1
   tuple-ids=0 row-size=231B cardinality=6.00M
   in pipelines: 00(GETNEXT)

Expected:
Max Per-Host Resource Reservation: Memory=12.00MB Threads=2
Per-Host Resource Estimates: Memory=140MB
Analyzed query: SELECT * FROM tpch_orc_def.lineitem

F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
|  Per-Host Resources: mem-estimate=140.00MB mem-reservation=12.00MB 
thread-reservation=2
PLAN-ROOT SINK
|  output exprs: tpch_orc_def.lineitem.l_orderkey, 
tpch_orc_def.lineitem.l_partkey, tpch_orc_def.lineitem.l_suppkey, 
tpch_orc_def.lineitem.l_linenumber, tpch_orc_def.lineitem.l_quantity, 
tpch_orc_def.lineitem.l_extendedprice, tpch_orc_def.lineitem.l_discount, 
tpch_orc_def.lineitem.l_tax, tpch_orc_def.lineitem.l_returnflag, 
tpch_orc_def.lineitem.l_linestatus, tpch_orc_def.lineitem.l_shipdate, 
tpch_orc_def.lineitem.l_commitdate, tpch_orc_def.lineitem.l_receiptdate, 
tpch_orc_def.lineitem.l_shipinstruct, tpch_orc_def.lineitem.l_shipmode, 
tpch_orc_def.lineitem.l_comment
|  mem-estimate=100.00MB mem-reservation=4.00MB spill-buffer=2.00MB 
thread-reservation=0
|
00:SCAN HDFS [tpch_orc_def.lineitem]
   HDFS partitions=1/1 files=5 size=142.72MB
   stored statistics:
 table: rows=6.00M size=142.72MB
 columns: all
   extrapolated-rows=disabled max-scan-range-rows=1.73M
   mem-estimate=40.00MB mem-reservation=8.00MB thread-reservation=1
   tuple-ids=0 row-size=231B cardinality=6.00M
   in pipelines: 00(GETNEXT)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10592) Exhaustive tests timeout after 20 hours

2021-03-17 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10592:
---

 Summary: Exhaustive tests timeout after 20 hours
 Key: IMPALA-10592
 URL: https://issues.apache.org/jira/browse/IMPALA-10592
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 4.0
Reporter: Andrew Sherman
 Attachments: catalogd_8661_20210317-045247.txt, 
hms_16762_jstack_20210317-045247.txt, impalad_8744_20210317-045247.txt.gz, 
impalad_8744_jstack_20210317-045312.txt, impalad_8747_20210317-045247.txt.gz, 
impalad_8754_20210317-045247.txt, namenode_10515_jstack_20210317-045247.txt, 
statestored_8645_20210317-045247.txt

The tests seem to make progress for nearly 10 hours, but after 20 hours they 
timeout
{code}
 run-all-tests.sh TIMED OUT! 
{code}
The timeout stack traces are attached

Impala logs show a long period of inactivity between  03/16 16:58 and 03/17 
04:53
For example:
{code}
I0316 16:56:33.555305  9911 impala-server.cc:1996] Catalog topic update applied 
with version: 65701 new min catalog object version: 36078
I0316 16:58:00.504211  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
cache from 6 items, to 5, eviction took: 0
I0316 16:58:10.504297  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
cache from 5 items, to 4, eviction took: 0
I0316 16:58:20.504348  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
cache from 4 items, to 3, eviction took: 0
I0316 16:58:30.504386  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
cache from 3 items, to 2, eviction took: 0
I0316 16:58:40.504467  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
cache from 2 items, to 1, eviction took: 0
I0316 16:58:50.504545  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
cache from 1 items, to 0, eviction took: 0
I0317 04:53:06.368000  9905 TAcceptQueueServer.cpp:340] New connection to 
server StatestoreSubscriber from client 
I0317 04:53:06.368041  9910 thrift-util.cc:96] TSocket::write_partial() send() 
Broken pipe
W0317 04:53:06.369092  8780 init.cc:214] A process pause was detected for 
approximately 18s920ms
I0317 04:53:06.369904  9905 TAcceptQueueServer.cpp:340] New connection to 
server StatestoreSubscriber from client 
I0317 04:53:06.369961  9910 thrift-util.cc:96] TAcceptQueueServer client died: 
write() send(): Broken pipe
W0317 04:53:06.369966  8929 JvmPauseMonitor.java:205] Detected pause in JVM or 
host machine (eg GC): pause of approximately 18338ms
No GCs detected
I0317 04:53:06.370081 27248 thrift-util.cc:96] TSocket::write_partial() send() 
Broken pipe
I0317 04:53:06.370126 27248 thrift-util.cc:96] TAcceptQueueServer client died: 
write() send(): Broken pipe
{code}





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10596) TestAdmissionControllerStressWithACService.test_mem_limit fails with "Invalid or unknown query handle" when canceling a query

2021-03-18 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10596:
---

 Summary: TestAdmissionControllerStressWithACService.test_mem_limit 
fails with "Invalid or unknown query handle" when canceling a query
 Key: IMPALA-10596
 URL: https://issues.apache.org/jira/browse/IMPALA-10596
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 4.0
Reporter: Andrew Sherman


TestAdmissionControllerStress.test_mem_limit fails similarly

{code}
custom_cluster/test_admission_controller.py:1437: in teardown
client.cancel(thread.query_handle)
common/impala_connection.py:215: in cancel
return self.__beeswax_client.cancel_query(operation_handle.get_handle())
beeswax/impala_beeswax.py:369: in cancel_query
return self.__do_rpc(lambda: self.imp_service.Cancel(query_id))
beeswax/impala_beeswax.py:520: in __do_rpc
raise ImpalaBeeswaxException(self.__build_error_message(b), b)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
EINNER EXCEPTION: 
EMESSAGE: Invalid or unknown query handle: 
174962332188aac2:1713d0fe.
{code}





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-10447) Missing \n every 1024 or 2048 lines when exporting output from shell to a file

2021-03-18 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-10447.
-
Resolution: Fixed

> Missing \n every 1024 or 2048 lines when exporting output from shell to a file
> --
>
> Key: IMPALA-10447
> URL: https://issues.apache.org/jira/browse/IMPALA-10447
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Affects Versions: Impala 4.0
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> When Impala shell exports output to a file, for example 
> {code:java}
> impala-shell -B  -V  -q "select * from tpcds.item" -o filex.csv 
> --output_delimiter=';'{code}
> then every 1024 (or maybe 2048) rows a newline is missing.
> I think the problem is here: 
> [https://github.com/apache/impala/blob/9bb7157bf014282c95ab3e233b80d77e00c95b52/shell/shell_output.py#L119]
>  where we now use write instead of print to output the rows.
>  It may be sufficient to add a
> {code:java}
> out_file.write('\n')
> {code}
> here



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-8684) test_mem_limit in test_admission_controller timed out waiting for query to end

2021-03-18 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-8684.

Resolution: Cannot Reproduce

> test_mem_limit in test_admission_controller timed out waiting for query to end
> --
>
> Key: IMPALA-8684
> URL: https://issues.apache.org/jira/browse/IMPALA-8684
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Critical
>  Labels: broken-build, flaky
>
> {code}
> custom_cluster/test_admission_controller.py:1504: in test_mem_limit
> {'request_pool': self.pool_name, 'mem_limit': query_mem_limit})
> custom_cluster/test_admission_controller.py:1403: in run_admission_test
> ['admitted', 'timed-out'], curr_metrics, expected_admitted)
> custom_cluster/test_admission_controller.py:1100: in wait_for_metric_changes
> assert (time() - start_time < STRESS_TIMEOUT),\
> E   AssertionError: Timed out waiting 90 seconds for metrics 
> admitted,timed-out delta 5 current {'dequeued': 5, 'rejected': 14, 
> 'released': 5, 'admitted': 10, 'queued': 11, 'timed-out': 0} initial 
> {'dequeued': 5, 'rejected': 14, 'released': 0, 'admitted': 10, 'queued': 11, 
> 'timed-out': 0}
> E   assert (1560929088.700526 - 1560928997.952595) < 90
> E+  where 1560929088.700526 = time()
> {code}
> IMPALA-8295 previously raised the timeout from 60 seconds to 90 seconds to 
> fix a similar bug.
> This bug happened in a build that did not contain "IMPALA-8536: Add Scalable 
> Pool Configuration to Admission Controller"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-10592) Exhaustive tests timeout after 20 hours

2021-03-23 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-10592.
-
Fix Version/s: Impala 4.0
   Resolution: Fixed

At least one test run has completed without the hang

> Exhaustive tests timeout after 20 hours
> ---
>
> Key: IMPALA-10592
> URL: https://issues.apache.org/jira/browse/IMPALA-10592
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 4.0
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Critical
> Fix For: Impala 4.0
>
> Attachments: catalogd_8661_20210317-045247.txt, 
> hms_16762_jstack_20210317-045247.txt, impalad_8744_20210317-045247.txt.gz, 
> impalad_8744_jstack_20210317-045312.txt, impalad_8747_20210317-045247.txt.gz, 
> impalad_8754_20210317-045247.txt, namenode_10515_jstack_20210317-045247.txt, 
> statestored_8645_20210317-045247.txt
>
>
> The tests seem to make progress for nearly 10 hours, but after 20 hours they 
> timeout
> {code}
>  run-all-tests.sh TIMED OUT! 
> {code}
> The timeout stack traces are attached
> Impala logs show a long period of inactivity between  03/16 16:58 and 03/17 
> 04:53
> For example:
> {code}
> I0316 16:56:33.555305  9911 impala-server.cc:1996] Catalog topic update 
> applied with version: 65701 new min catalog object version: 36078
> I0316 16:58:00.504211  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
> cache from 6 items, to 5, eviction took: 0
> I0316 16:58:10.504297  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
> cache from 5 items, to 4, eviction took: 0
> I0316 16:58:20.504348  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
> cache from 4 items, to 3, eviction took: 0
> I0316 16:58:30.504386  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
> cache from 3 items, to 2, eviction took: 0
> I0316 16:58:40.504467  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
> cache from 2 items, to 1, eviction took: 0
> I0316 16:58:50.504545  9041 krpc-data-stream-mgr.cc:427] Reduced stream ID 
> cache from 1 items, to 0, eviction took: 0
> I0317 04:53:06.368000  9905 TAcceptQueueServer.cpp:340] New connection to 
> server StatestoreSubscriber from client 
> I0317 04:53:06.368041  9910 thrift-util.cc:96] TSocket::write_partial() 
> send() Broken pipe
> W0317 04:53:06.369092  8780 init.cc:214] A process pause was detected for 
> approximately 18s920ms
> I0317 04:53:06.369904  9905 TAcceptQueueServer.cpp:340] New connection to 
> server StatestoreSubscriber from client 
> I0317 04:53:06.369961  9910 thrift-util.cc:96] TAcceptQueueServer client 
> died: write() send(): Broken pipe
> W0317 04:53:06.369966  8929 JvmPauseMonitor.java:205] Detected pause in JVM 
> or host machine (eg GC): pause of approximately 18338ms
> No GCs detected
> I0317 04:53:06.370081 27248 thrift-util.cc:96] TSocket::write_partial() 
> send() Broken pipe
> I0317 04:53:06.370126 27248 thrift-util.cc:96] TAcceptQueueServer client 
> died: write() send(): Broken pipe
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10670) Impala should propagate the X-Request-ID from request to reply messages

2021-04-21 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10670:
---

 Summary: Impala should propagate the X-Request-ID from request to 
reply messages
 Key: IMPALA-10670
 URL: https://issues.apache.org/jira/browse/IMPALA-10670
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


At present, when using hs2-over-http, Impala does not copy the X-Request-ID header 
from the request into the reply. Propagating it would allow tracking of messages at 
the HTTP level.
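
A hedged sketch of the idea using Python's standard library HTTP server (Impala's 
server side is C++, so this only illustrates the header echo, not the real 
implementation):
{code:python}
from http.server import BaseHTTPRequestHandler, HTTPServer


class EchoRequestIdHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        request_id = self.headers.get("X-Request-ID")
        self.send_response(200)
        if request_id is not None:
            # Copy the client's X-Request-ID into the reply so both sides
            # of the HTTP exchange can be correlated in logs and proxies.
            self.send_header("X-Request-ID", request_id)
        self.send_header("Content-Length", "0")
        self.end_headers()


# HTTPServer(("localhost", 28000), EchoRequestIdHandler).serve_forever()
{code}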



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10770) Bump python thrift_sasl version to 0.4.3

2021-06-25 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10770:
---

 Summary: Bump python thrift_sasl version to 0.4.3
 Key: IMPALA-10770
 URL: https://issues.apache.org/jira/browse/IMPALA-10770
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


A recent major release of Impyla (0.17.0) uses thrift_sasl=0.4.3.
Increase the version used by Impala (notably by Impala Shell) to match.
This will help clients which use both Impala Shell and Impyla together.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-10849) A LIKE predicate that ends in an escaped wildcard is incorrectly evaluated

2021-08-09 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-10849:
---

 Summary: A LIKE predicate that ends in an escaped wildcard is 
incorrectly evaluated
 Key: IMPALA-10849
 URL: https://issues.apache.org/jira/browse/IMPALA-10849
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Reporter: Andrew Sherman
Assignee: Andrew Sherman


If the last character of a LIKE predicate is an escaped wildcard (e.g. LIKE 
foo\%) then it is incorrectly evaluated. This is because the fast path 
optimizations in LikePrepareInternal treat the predicate as being a search for 
a string with a fixed prefix. If the fast path optimizations are commented out 
then the LIKE is evaluated correctly.

A possible fix would be to make the fast path optimizations recognize that 
escaped wildcards cannot be evaluated by the fixed prefix search.
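
A sketch of the proposed check, written in Python rather than the backend C++, 
showing when a trailing '%' is a genuine wildcard and the fixed-prefix fast path 
is safe:
{code:python}
def ends_with_unescaped_wildcard(pattern, escape='\\'):
    """Return True when the final '%' acts as a wildcard (an even number of
    preceding escape characters), False when the '%' itself is escaped."""
    if not pattern.endswith('%'):
        return False
    escapes = 0
    i = len(pattern) - 2
    while i >= 0 and pattern[i] == escape:
        escapes += 1
        i -= 1
    return escapes % 2 == 0


assert ends_with_unescaped_wildcard('foo%')          # prefix fast path is safe
assert not ends_with_unescaped_wildcard('foo\\%')    # LIKE foo\% must not use it
assert ends_with_unescaped_wildcard('foo\\\\%')      # escaped escape, then wildcard
{code}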

This is a simpler bug than that discussed in IMPALA-2422 which is to do with 
ambiguities in the logic of unescaping string literals (which is more tricky to 
fix).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IMPALA-11007) Webserver should not log errors when handling HTTP HEAD

2021-11-05 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-11007:
---

 Summary: Webserver should not log errors when handling HTTP HEAD 
 Key: IMPALA-11007
 URL: https://issues.apache.org/jira/browse/IMPALA-11007
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman
Assignee: Andrew Sherman


If you send a HEAD request to Impala's webserver, for example
{code}
curl -I http://localhost:25000/metrics
{code}
then the logs will contain scary messages:
{code}
I1025 10:39:52.337021 3578299 webserver.cc:591] Webserver: error reading: 
Connection reset by peer
{code}
This does not happen with 
{code}
curl http://localhost:25000/metrics
{code}

Fix this by making the Impala webserver not send any content in replies to a 
HEAD message.
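
As an illustration only (Impala's webserver is C++), here is a Python standard 
library handler showing the intended behavior, where HEAD replies carry the same 
headers as GET but no body:
{code:python}
from http.server import BaseHTTPRequestHandler


class MetricsHandler(BaseHTTPRequestHandler):
    def _send_headers(self, body_len):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(body_len))
        self.end_headers()

    def do_GET(self):
        body = b'{"metrics": []}'  # placeholder payload
        self._send_headers(len(body))
        self.wfile.write(body)

    def do_HEAD(self):
        # Same headers as GET, but deliberately no content in the reply, so a
        # client like `curl -I` can close the connection cleanly.
        self._send_headers(len(b'{"metrics": []}'))
{code}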



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IMPALA-11007) Webserver should not log errors when handling HTTP HEAD

2021-11-10 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-11007.
-
Resolution: Fixed

> Webserver should not log errors when handling HTTP HEAD 
> 
>
> Key: IMPALA-11007
> URL: https://issues.apache.org/jira/browse/IMPALA-11007
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> If you send a HEAD request  to Impala's webserver, for  example
> {code}
> curl -I http://localhost:25000/metrics
> {code}
> then the logs will contain scary messages:
> {code}
> I1025 10:39:52.337021 3578299 webserver.cc:591] Webserver: error reading: 
> Connection reset by peer
> {code}
> This does not happen with 
> {code}
> curl http://localhost:25000/metrics
> {code}
> Fix this by making the Impala webserver not send any content in replies to a 
> HEAD message



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (IMPALA-11015) TestWebPage.test_catalog fails after a ConcurrentModificationException

2021-11-10 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-11015:
---

 Summary: TestWebPage.test_catalog fails after a 
ConcurrentModificationException
 Key: IMPALA-11015
 URL: https://issues.apache.org/jira/browse/IMPALA-11015
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


The failure is 
{code}
webserver/test_web_pages.py:296: in test_catalog
self.get_and_check_status_jvm(self.CATALOG_URL, "foo_part")
webserver/test_web_pages.py:186: in get_and_check_status_jvm
ports_to_test=self.TEST_PORTS_WITHOUT_SS)
webserver/test_web_pages.py:171: in get_and_check_status
assert string_to_search in response.text
{code}
The assertion failure includes the html of the page where the text was not 
found. In the text of the page I see
{code}

Error:
ConcurrentModificationException: null

{code}
In the logs for the test run I find
{code}
@  0x1fb81e7  impala::Status::Status()
@  0x29a0899  impala::JniUtil::GetJniExceptionMsg()
@  0x1f847b6  impala::JniCall::Call<>()
@  0x1f81b25  impala::JniUtil::CallJniMethod<>()
@  0x1f7f65d  impala::Catalog::GetDbs()
@  0x1f38b4e  impala::CatalogServer::CatalogUrlCallback()
@  0x1f3e421  
_ZZN6impala13CatalogServer16RegisterWebpagesEPNS_9WebserverEENKUlRKT_PT0_E0_clIN4kudu19WebCallbackRegistry10WebRequestEN9rapidjson15GenericDocumentINSD_4UTF8IcEENSD_19MemoryPoolAllocatorINSD_12CrtAllocatorEEESI_DaS5_S7_
@  0x1f3e471  
_ZN5boost6detail8function26void_function_obj_invoker2IZN6impala13CatalogServer16RegisterWebpagesEPNS3_9WebserverEEUlRKT_PT0_E0_vRKN4kudu19WebCallbackRegistry10WebRequestEPN9rapidjson15GenericDocumentINSI_4UTF8IcEENSI_19MemoryPoolAllocatorINSI_12CrtAllocatorEEESN_EEE6invokeERNS1_15function_bufferESH_SQ_
@  0x2aad07d  boost::function2<>::operator()()
@  0x2aaa14e  impala::Webserver::RenderUrlWithTemplate()
@  0x2aa8045  impala::Webserver::BeginRequestCallback()
@  0x2aa663a  impala::Webserver::BeginRequestCallbackStatic()
@  0x2abe5cd  handle_request
@  0x2ac07fb  process_new_connection
@  0x2ac0ebf  worker_thread
@ 0x7f921f33be24  start_thread
@ 0x7f921bd6a34c  __clone
{code}
which looks like webserver rendering code, and
{code}
I1107 09:27:37.416805 30162 jni-util.cc:286] 
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
at java.util.HashMap$EntryIterator.next(HashMap.java:1471)
at java.util.HashMap$EntryIterator.next(HashMap.java:1469)
at 
org.apache.hadoop.hive.metastore.api.Database$DatabaseStandardScheme.write(Database.java:1188)
at 
org.apache.hadoop.hive.metastore.api.Database$DatabaseStandardScheme.write(Database.java:1051)
at 
org.apache.hadoop.hive.metastore.api.Database.write(Database.java:920)
at 
org.apache.impala.thrift.TDatabase$TDatabaseStandardScheme.write(TDatabase.java:430)
at 
org.apache.impala.thrift.TDatabase$TDatabaseStandardScheme.write(TDatabase.java:378)
at org.apache.impala.thrift.TDatabase.write(TDatabase.java:316)
at 
org.apache.impala.thrift.TGetDbsResult$TGetDbsResultStandardScheme.write(TGetDbsResult.java:362)
at 
org.apache.impala.thrift.TGetDbsResult$TGetDbsResultStandardScheme.write(TGetDbsResult.java:310)
at org.apache.impala.thrift.TGetDbsResult.write(TGetDbsResult.java:264)
at org.apache.thrift.TSerializer.serialize(TSerializer.java:79)
at org.apache.impala.service.JniCatalog.getDbs(JniCatalog.java:283)
{code}
So I think the rendering code hit a ConcurrentModificationException.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (IMPALA-11016) load_nested fails with Hive exception in BlockManager.chooseTarget4NewBlock running 'CREATE EXTERNAL TABLE region ...'

2021-11-11 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-11016:
---

 Summary: load_nested fails with Hive exception in 
BlockManager.chooseTarget4NewBlock running 'CREATE EXTERNAL TABLE region ...' 
 Key: IMPALA-11016
 URL: https://issues.apache.org/jira/browse/IMPALA-11016
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


Failure is
{code}
2021-11-10 13:42:55,781 INFO:load_nested[348]:Executing: 

  CREATE EXTERNAL TABLE region
  STORED AS parquet
  TBLPROPERTIES('parquet.compression' = 
'SNAPPY','external.table.purge'='TRUE')
  AS SELECT * FROM tmp_region
Traceback (most recent call last):
  File 
"/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/testdata/bin/load_nested.py",
 line 415, in 
load()
  File 
"/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/testdata/bin/load_nested.py",
 line 349, in load
hive.execute(stmt)
  File 
"/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/tests/comparison/db_connection.py",
 line 206, in execute
return self._cursor.execute(sql, *args, **kwargs)
  File 
"/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/infra/python/env-gcc7.5.0/lib/python2.7/site-packages/impala/hiveserver2.py",
 line 331, in execute
self._wait_to_finish()  # make execute synchronous
  File 
"/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/infra/python/env-gcc7.5.0/lib/python2.7/site-packages/impala/hiveserver2.py",
 line 412, in _wait_to_finish
raise OperationalError(resp.errorMessage)
impala.error.OperationalError: Error while compiling statement: FAILED: 
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
ERROR in 
/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/testdata/bin/create-load-data.sh
 at line 48:
Generated: 
/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/logs/extra_junit_xml_logs/generate_junitxml.buildall.create-load-data.2020_21_43_04.xml
+ echo 'buildall.sh ' -release -format '-testdata failed.'
buildall.sh  -release -format -testdata failed.
{code}
hive-server2.log shows
{code}
2021-11-10T13:43:03,381 ERROR [HiveServer2-Background-Pool: Thread-18405] 
tez.TezTask: Failed to execute tez graph.
org.apache.tez.dag.api.TezUncheckedException: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/tmp/hive/jenkins/_tez_session_dir/b191f8aa-6c28-447c-b246-1d7e38c0b3e0/.tez/application_1636579095895_0038/recovery/1/summary
 could only be written to 0 of the 1 minReplication nodes. There are 3 
datanode(s) running and 3 node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2280)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2827)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:874)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:589)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)

at org.apache.tez.dag.app.DAGAppMaster.startDAG(DAGAppMaster.java:2545)
at 
org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1364)
at 
org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:145)
at 
org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:184)
at 
org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7648)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
at org.apache.ha

[jira] [Created] (IMPALA-11017) Intermittent crash in DCHECK(read_iter->read_page_->attached_to_output_batch) in BufferedTupleStream::NextReadPage

2021-11-12 Thread Andrew Sherman (Jira)
Andrew Sherman created IMPALA-11017:
---

 Summary: Intermittent crash in 
DCHECK(read_iter->read_page_->attached_to_output_batch) in 
BufferedTupleStream::NextReadPage
 Key: IMPALA-11017
 URL: https://issues.apache.org/jira/browse/IMPALA-11017
 Project: IMPALA
  Issue Type: Bug
Reporter: Andrew Sherman


Stack is 
{code:java}
#1  0x7efedd0078e8 in abort () from /lib64/libc.so.6
#2  0x055471e4 in google::DumpStackTraceAndExit() ()
#3  0x0553c5dd in google::LogMessage::Fail() ()
#4  0x0553decd in google::LogMessage::SendToLog() ()
#5  0x0553bf3b in google::LogMessage::Flush() ()
#6  0x0553fb39 in google::LogMessageFatal::~LogMessageFatal() ()
#7  0x02fb8e1c in impala::BufferedTupleStream::NextReadPage 
(this=0x1dd04c60, read_iter=0x1dd04d10) at 
/data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/buffered-tuple-stream.cc:531
#8  0x02fba81e in impala::BufferedTupleStream::UnpinStream 
(this=0x1dd04c60, mode=impala::BufferedTupleStream::UNPIN_ALL_EXCEPT_CURRENT) 
at 
/data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/buffered-tuple-stream.cc:735
#9  0x03028f5a in impala::SpillableRowBatchQueue::AddBatch 
(this=0x26e1f000, batch=0x16288f450) at 
/data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/spillable-row-batch-queue.cc:91
#10 0x02bef10d in impala::BufferedPlanRootSink::Send (this=0x168544d00, 
state=0x168542d80, batch=0x16288f450) at 
/data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/exec/buffered-plan-root-sink.cc:93
#11 0x02537886 in impala::FragmentInstanceState::ExecInternal 
(this=0x1163cea40) at 
/data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/fragment-instance-state.cc:452
{code}
 Stack trace line
{code:java}
#7  0x02fb8e1c in impala::BufferedTupleStream::NextReadPage 
(this=0x1dd04c60, read_iter=0x1dd04d10) at 
/data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/buffered-tuple-stream.cc:531
{code}
points to
[https://github.infra.cloudera.com/CDH/Impala/blob/640dc51642e67fbdc86285f66f8b9fe1a6f40e96/be/src/runtime/buffered-tuple-stream.cc#L531]
which is DCHECK(read_iter->read_page_->attached_to_output_batch);
This looks very similar to the description for IMPALA-10559 (TestScratchLimit seems 
flaky), but the branch where this occurs includes the fix for IMPALA-10559, so I am 
creating a new bug.

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (IMPALA-11017) Intermittent crash in DCHECK(read_iter->read_page_->attached_to_output_batch) in BufferedTupleStream::NextReadPage

2021-11-12 Thread Andrew Sherman (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved IMPALA-11017.
-
Resolution: Duplicate

Thanks [~rizaon], I somehow got confused about this. Closing as a duplicate.

> Intermittent crash in DCHECK(read_iter->read_page_->attached_to_output_batch) 
> in BufferedTupleStream::NextReadPage
> --
>
> Key: IMPALA-11017
> URL: https://issues.apache.org/jira/browse/IMPALA-11017
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Riza Suminto
>Priority: Critical
>
> Stack is 
> {code:java}
> #1  0x7efedd0078e8 in abort () from /lib64/libc.so.6
> #2  0x055471e4 in google::DumpStackTraceAndExit() ()
> #3  0x0553c5dd in google::LogMessage::Fail() ()
> #4  0x0553decd in google::LogMessage::SendToLog() ()
> #5  0x0553bf3b in google::LogMessage::Flush() ()
> #6  0x0553fb39 in google::LogMessageFatal::~LogMessageFatal() ()
> #7  0x02fb8e1c in impala::BufferedTupleStream::NextReadPage 
> (this=0x1dd04c60, read_iter=0x1dd04d10) at 
> /data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/buffered-tuple-stream.cc:531
> #8  0x02fba81e in impala::BufferedTupleStream::UnpinStream 
> (this=0x1dd04c60, mode=impala::BufferedTupleStream::UNPIN_ALL_EXCEPT_CURRENT) 
> at 
> /data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/buffered-tuple-stream.cc:735
> #9  0x03028f5a in impala::SpillableRowBatchQueue::AddBatch 
> (this=0x26e1f000, batch=0x16288f450) at 
> /data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/spillable-row-batch-queue.cc:91
> #10 0x02bef10d in impala::BufferedPlanRootSink::Send 
> (this=0x168544d00, state=0x168542d80, batch=0x16288f450) at 
> /data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/exec/buffered-plan-root-sink.cc:93
> #11 0x02537886 in impala::FragmentInstanceState::ExecInternal 
> (this=0x1163cea40) at 
> /data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/fragment-instance-state.cc:452
> {code}
>  Stack trace line
> {code:java}
> #7  0x02fb8e1c in impala::BufferedTupleStream::NextReadPage 
> (this=0x1dd04c60, read_iter=0x1dd04d10) at 
> /data/jenkins/workspace/impala-cdpd-master-exhaustive/repos/Impala/be/src/runtime/buffered-tuple-stream.cc:531
> {code}
> points to
> [https://github.infra.cloudera.com/CDH/Impala/blob/640dc51642e67fbdc86285f66f8b9fe1a6f40e96/be/src/runtime/buffered-tuple-stream.cc#L531]
> which is DCHECK(read_iter->read_page_->attached_to_output_batch);
> This looks very similar to the description for IMPALA-10559 (TestScratchLimit 
> seems flaky), but the branch where this occurs includes the fix for 
> IMPALA-10559, so I am creating a new bug.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

