[jira] [Created] (IMPALA-8151) HiveUdfCall assumes StringValue is 16 bytes

2019-02-01 Thread Tim Armstrong (JIRA)
Tim Armstrong created IMPALA-8151:
-

 Summary: HiveUdfCall assumes StringValue is 16 bytes
 Key: IMPALA-8151
 URL: https://issues.apache.org/jira/browse/IMPALA-8151
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 3.2.0
Reporter: Tim Armstrong
Assignee: Pooja Nilangekar


HiveUdfCall has the sizes of internal types hardcoded as magic numbers:
{code}
  switch (GetChild(i)->type().type) {
    case TYPE_BOOLEAN:
    case TYPE_TINYINT:
      // Using explicit sizes helps the compiler unroll memcpy
      memcpy(input_ptr, v, 1);
      break;
    case TYPE_SMALLINT:
      memcpy(input_ptr, v, 2);
      break;
    case TYPE_INT:
    case TYPE_FLOAT:
      memcpy(input_ptr, v, 4);
      break;
    case TYPE_BIGINT:
    case TYPE_DOUBLE:
      memcpy(input_ptr, v, 8);
      break;
    case TYPE_TIMESTAMP:
    case TYPE_STRING:
    case TYPE_VARCHAR:
      memcpy(input_ptr, v, 16);
      break;
    default:
      DCHECK(false) << "NYI";
  }
{code}

STRING and VARCHAR values were only 16 bytes because of padding. That padding is 
removed by IMPALA-7367, so the 16-byte memcpy now reads past the end of the 
actual value. This could in theory lead to a crash.

We need to change the hardcoded value, but we should probably also switch to 
sizeof(StringValue) so that the code doesn't get broken by similar changes in 
the future.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-7641) Memory Limit Exceeded

2019-02-01 Thread Tim Armstrong (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong resolved IMPALA-7641.
---
Resolution: Cannot Reproduce

> Memory Limit Exceeded
> -
>
> Key: IMPALA-7641
> URL: https://issues.apache.org/jira/browse/IMPALA-7641
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.6.4
>Reporter: Ahshan
>Priority: Minor
>  Labels: resource-management
> Attachments: profile(8).txt
>
>
> We are using the CDH distribution with Impala version impalad 2.6.0-cdh5.8.2 
> RELEASE.
>  
> As per my understanding the memory requirement is 288 MB, and we have a total 
> of 18 Impala daemons, which sums to 5184 MB of total memory consumption. 
> Given the above, this should not lead to a memory issue when MEM_LIMIT is 
> set to 20 GB.
> Hence, could you please let us know the cause of the memory limit being 
> exceeded for this query:
> select * from emp_sales where job_id = 55451 and uploaded_month = 201808 
> limit 1
> +------------------------------------------------------------+
> | Explain String                                             |
> +------------------------------------------------------------+
> | Estimated Per-Host Requirements: Memory=288.00MB VCores=1  |
> |                                                            |
> | 01:EXCHANGE [UNPARTITIONED]                                |
> | |  limit: 1                                                |
> | |                                                          |
> | 00:SCAN HDFS [fenet5.hmig_os_changes_details_malicious]    |
> |    partitions=1/25 files=3118 size=110.01GB                |
> |    predicates: job_id = 55451                              |
> |    limit: 1                                                |
> +------------------------------------------------------------+
> WARNINGS: 
>  Memory limit exceeded
>  HdfsParquetScanner::ReadDataPage() failed to allocate 269074889 bytes for 
> dictionary.
> Memory Limit Exceeded
>  HDFS_SCAN_NODE (id=0) could not allocate 257.23 MB without exceeding limit.
>  Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB 
> Consumption=20.00 GB
>  Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
>  HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
>  DataStreamSender: Consumption=1.45 KB
>  Block Manager: Limit=16.00 GB Consumption=0
>  Memory Limit Exceeded
>  HDFS_SCAN_NODE (id=0) could not allocate 255.63 MB without exceeding limit.
>  Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB 
> Consumption=20.00 GB
>  Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
>  HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
>  DataStreamSender: Consumption=1.45 KB
>  Block Manager: Limit=16.00 GB Consumption=0
>  Memory Limit Exceeded
>  HDFS_SCAN_NODE (id=0) could not allocate 255.27 MB without exceeding limit.
>  Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB 
> Consumption=20.00 GB
>  Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
>  HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
>  DataStreamSender: Consumption=1.45 KB
>  Block Manager: Limit=16.00 GB Consumption=0
>  Memory Limit Exceeded
>  HDFS_SCAN_NODE (id=0) could not allocate 255.39 MB without exceeding limit.
>  Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB 
> Consumption=20.00 GB
>  Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
>  HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
>  DataStreamSender: Consumption=1.45 KB
>  Block Manager: Limit=16.00 GB Consumption=0
>  Memory Limit Exceeded
>  HDFS_SCAN_NODE (id=0) could not allocate 16.09 KB without exceeding limit.
>  Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB 
> Consumption=19.74 GB
>  Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.74 GB
>  HDFS_SCAN_NODE (id=0): Consumption=19.74 GB
>  DataStreamSender: Consumption=1.45 KB
>  Block Manager: Limit=16.00 GB Consumption=0
>  Memory Limit Exceeded
>  HDFS_SCAN_NODE (id=0) could not allocate 15.20 KB without exceeding limit.
>  Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB 
> Consumption=19.64 GB
>  Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.64 GB
>  HDFS_SCAN_NODE (id=0): Consumption=19.64 GB
>  DataStreamSender: Consumption=1.45 KB
>  Block Manager: Limit=16.00 GB Consumption=0
>  Memory Limit Exceeded
>  HDFS_SCAN_NODE (id=0) could not allocate 14.61 KB without exceeding limit.
>  Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB 
> Consumption=19.64 GB
>  Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.64 GB
>  HDFS_SCAN_NODE (id=0): Consumption=19.64 GB
>  DataStreamSender: Consumption=1.45 KB
>  Block Manager: Limit=16.00 GB Consumption=0
>  Memory Limit Exceeded
>  HDFS_SCAN_NODE (id=0) could not allocate 257.11 MB without exceeding limit.
>  Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB 
> Consumption=19.47 GB
>  Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.47 GB
>  HDFS_SCAN_NODE (id=0): Consumption=19.47 GB
>  DataStreamSender: Consumption=1.45 KB
>  Block Manager: Limit=16.00 GB Consumption=0
>  Memory Limit Exceeded
> 

[jira] [Resolved] (IMPALA-8129) Build failure: query_test/test_observability.py

2019-02-01 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8129.
-
   Resolution: Fixed
Fix Version/s: (was: Impala 3.1.0)
   Impala 3.2.0

> Build failure: query_test/test_observability.py
> ---
>
> Key: IMPALA-8129
> URL: https://issues.apache.org/jira/browse/IMPALA-8129
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Paul Rogers
>Assignee: Lars Volker
>Priority: Blocker
>  Labels: broken-build
> Fix For: Impala 3.2.0
>
>
> {{query_test/test_observability.py}} failed in multiple builds:
> Erasure-coding build:
> {noformat}
> 18:49:01 === FAILURES 
> ===
> 18:49:01 ___ TestObservability.test_global_exchange_counters 
> 
> 18:49:01 [gw0] linux2 -- Python 2.7.5 
> /data/jenkins/workspace/impala-asf-master-core-erasure-coding/repos/Impala/bin/../infra/python/env/bin/python
> 18:49:01 query_test/test_observability.py:400: in 
> test_global_exchange_counters
> 18:49:01 assert "ExchangeScanRatio: 3.19" in profile
> 18:49:01 E   assert 'ExchangeScanRatio: 3.19' in 'Query 
> (id=704d1f6b09400fba:b91dc70):\n  DEBUG MODE WARNING: Query profile 
> created while running a DEBUG build...  - OptimizationTime: 32.000ms\n
>- PeakMemoryUsage: 220.00 KB (225280)\n   - PrepareTime: 
> 26.000ms\n'
> {noformat}
> Core build:
> {noformat}
> 07:36:43 FAIL 
> query_test/test_observability.py::TestObservability::()::test_global_exchange_counters
> 07:36:43 === FAILURES 
> ===
> 07:36:43 ___ TestObservability.test_global_exchange_counters 
> 
> 07:36:43 [gw2] linux2 -- Python 2.7.5 
> /data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/bin/../infra/python/env/bin/python
> 07:36:43 query_test/test_observability.py:400: in 
> test_global_exchange_counters
> 07:36:43 assert "ExchangeScanRatio: 3.19" in profile
> 07:36:43 E   assert 'ExchangeScanRatio: 3.19' in 'Query 
> (id=b546ddcfab65e431:471aa218):\n  DEBUG MODE WARNING: Query profile 
> created while running a DEBUG buil...  - OptimizationTime: 32.000ms\n 
>   - PeakMemoryUsage: 220.00 KB (225280)\n   - PrepareTime: 32.000ms\n'
> {noformat}
> Assigning to Lars since it may be related to the patch for IMPALA-7731: Add 
> Read/Exchange counters to profile





[jira] [Resolved] (IMPALA-8118) ASAN build failure: query_test/test_scanners.py

2019-02-01 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8118.
-
   Resolution: Duplicate
Fix Version/s: Impala 3.2.0

> ASAN build failure: query_test/test_scanners.py
> ---
>
> Key: IMPALA-8118
> URL: https://issues.apache.org/jira/browse/IMPALA-8118
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0
>Reporter: Paul Rogers
>Assignee: Lars Volker
>Priority: Blocker
>  Labels: broken-build
> Fix For: Impala 3.2.0, Impala 3.1.0
>
>
> Build of latest master, with ASAN, failed with the following error, which to 
> my newbie eyes looks like a connection failure:
> {noformat}
> 05:42:04 === FAILURES 
> ===
> 05:42:04  TestQueriesTextTables.test_data_source_tables[protocol: beeswax | 
> exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> text/none] 
> 05:42:04 [gw5] linux2 -- Python 2.7.5 
> /data/jenkins/workspace/impala-cdh6.x-core-asan/repos/Impala/bin/../infra/python/env/bin/python
> 05:42:04 query_test/test_queries.py:174: in test_data_source_tables
> 05:42:04 self.run_test_case('QueryTest/data-source-tables', vector)
> 05:42:04 common/impala_test_suite.py:472: in run_test_case
> 05:42:04 result = self.__execute_query(target_impalad_client, query, 
> user=user)
> ...
> 05:42:04 handle = self.execute_query_async(query_string, user=user)
> 05:42:04 beeswax/impala_beeswax.py:351: in execute_query_async
> 05:42:04 handle = self.__do_rpc(lambda: self.imp_service.query(query,))
> 05:42:04 beeswax/impala_beeswax.py:512: in __do_rpc
> 05:42:04 raise ImpalaBeeswaxException(self.__build_error_message(e), e)
> 05:42:04 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> 05:42:04 E   INNER EXCEPTION: <class 'thrift.transport.TTransport.TTransportException'>
> 05:42:04 E   MESSAGE: TSocket read 0 bytes
> 05:42:04 - Captured stderr call 
> -
> ...
> 05:42:04 -- executing against localhost:21000
> 05:42:04 select *
> 05:42:04 from alltypes_datasource
> 05:42:04 where float_col != 0 and
> 05:42:04   int_col >= 1990 limit 5;
> {noformat}
> A similar error appears for multiple other tests. Then:
> {noformat}
> 05:42:04 TTransportException: Could not connect to localhost:21050
> 05:42:04 !!! Interrupted: stopping after 10 failures 
> 
> {noformat}
> I wonder if these are just symptoms of a failure in the BE code due to ASAN 
> being enabled.
> Similar error in the latest build:
> {noformat}
> 05:20:05 === FAILURES 
> ===
> 05:20:05  TestHdfsQueries.test_hdfs_scan_node[protocol: beeswax | 
> exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> hbase/none] 
> 05:20:05 [gw4] linux2 -- Python 2.7.5 
> /data/jenkins/workspace/impala-cdh6.x-core-asan/repos/Impala/bin/../infra/python/env/bin/python
> 05:20:05 query_test/test_queries.py:240: in test_hdfs_scan_node
> 05:20:05 self.run_test_case('QueryTest/hdfs-scan-node', vector)
> ...
> 05:20:05 exec_result = self.__fetch_results(query_handle, max_rows)
> 05:20:05 beeswax/impala_beeswax.py:456: in __fetch_results
> 05:20:05 results = self.__do_rpc(lambda: self.imp_service.fetch(handle, 
> False, fetch_rows))
> 05:20:05 beeswax/impala_beeswax.py:512: in __do_rpc
> 05:20:05 raise ImpalaBeeswaxException(self.__build_error_message(e), e)
> 05:20:05 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> 05:20:05 E   INNER EXCEPTION: <class 'thrift.transport.TTransport.TTransportException'>
> 05:20:05 E   MESSAGE: TSocket read 0 bytes
> {noformat}





[jira] [Resolved] (IMPALA-8140) Grouping aggregation with limit breaks asan build

2019-02-01 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8140.
-
   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Grouping aggregation with limit breaks asan build
> -
>
> Key: IMPALA-8140
> URL: https://issues.apache.org/jira/browse/IMPALA-8140
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0, Impala 3.2.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Blocker
>  Labels: asan, crash
> Fix For: Impala 3.2.0
>
>
> Commit 4af3a7853e9 for IMPALA-7333 breaks the following query on ASAN:
> {code:sql}
> select count(*) from tpch_parquet.orders o group by o.o_clerk limit 10;
> {code}
> {noformat}
> ==30219==ERROR: AddressSanitizer: use-after-poison on address 0x631000c4569c 
> at pc 0x020163cc bp 0x7f73a12a5700 sp 0x7f73a12a56f8
> READ of size 1 at 0x631000c4569c thread T276
> #0 0x20163cb in impala::Tuple::IsNull(impala::NullIndicatorOffset const&) 
> const /tmp/be/src/runtime/tuple.h:241:13
> #1 0x280c3d1 in 
> impala::AggFnEvaluator::SerializeOrFinalize(impala::Tuple*, 
> impala::SlotDescriptor const&, impala::Tuple*, void*) 
> /tmp/be/src/exprs/agg-fn-evaluator.cc:393:29
> #2 0x2777bc8 in 
> impala::AggFnEvaluator::Finalize(std::vector<impala::AggFnEvaluator*, std::allocator<impala::AggFnEvaluator*> > const&, impala::Tuple*, 
> impala::Tuple*) /tmp/be/src/exprs/agg-fn-evaluator.h:307:15
> #3 0x27add96 in 
> impala::GroupingAggregator::CleanupHashTbl(std::vector<impala::AggFnEvaluator*, std::allocator<impala::AggFnEvaluator*> > const&, 
> impala::HashTable::Iterator) /tmp/be/src/exec/grouping-aggregator.cc:351:7
> #4 0x27ae2b2 in impala::GroupingAggregator::ClosePartitions() 
> /tmp/be/src/exec/grouping-aggregator.cc:930:5
> #5 0x27ae5f4 in impala::GroupingAggregator::Close(impala::RuntimeState*) 
> /tmp/be/src/exec/grouping-aggregator.cc:383:3
> #6 0x27637f7 in impala::AggregationNode::Close(impala::RuntimeState*) 
> /tmp/be/src/exec/aggregation-node.cc:139:32
> #7 0x206b7e9 in impala::FragmentInstanceState::Close() 
> /tmp/be/src/runtime/fragment-instance-state.cc:368:42
> #8 0x2066b1a in impala::FragmentInstanceState::Exec() 
> /tmp/be/src/runtime/fragment-instance-state.cc:99:3
> #9 0x2080e12 in 
> impala::QueryState::ExecFInstance(impala::FragmentInstanceState*) 
> /tmp/be/src/runtime/query-state.cc:584:24
> #10 0x1d79036 in boost::function0::operator()() const 
> /opt/Impala-Toolchain/boost-1.57.0-p3/include/boost/function/function_template.hpp:766:14
> #11 0x24bbe06 in impala::Thread::SuperviseThread(std::string const&, 
> std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*) 
> /tmp/be/src/util/thread.cc:359:3
> #12 0x24c72f8 in void boost::_bi::list5, 
> boost::_bi::value, boost::_bi::value >, 
> boost::_bi::value, 
> boost::_bi::value*> 
> >::operator() boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*), 
> boost::_bi::list0>(boost::_bi::type, void (*&)(std::string const&, 
> std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*), boost::_bi::list0&, int) 
> /opt/Impala-Toolchain/boost-1.57.0-p3/include/boost/bind/bind.hpp:525:9
> #13 0x24c714b in boost::_bi::bind_t std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*), 
> boost::_bi::list5, 
> boost::_bi::value, boost::_bi::value >, 
> boost::_bi::value, 
> boost::_bi::value*> > 
> >::operator()() 
> /opt/Impala-Toolchain/boost-1.57.0-p3/include/boost/bind/bind_template.hpp:20:16
> #14 0x3c83949 in thread_proxy 
> (/home/lv/i4/be/build/debug/service/impalad+0x3c83949)
> #15 0x7f768ce73183 in start_thread 
> /build/eglibc-ripdx6/eglibc-2.19/nptl/pthread_create.c:312
> #16 0x7f768c98a03c in clone 
> /build/eglibc-ripdx6/eglibc-2.19/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:111
> {noformat}
> The problem seems to be that we call 
> {{output_partition_->aggregated_row_stream->Close()}} in 
> be/src/exec/grouping-aggregator.cc:284 when hitting the limit, and then later 
> the tuple creation in {{CleanupHashTbl()}} in 
> be/src/exec/grouping-aggregator.cc:341 reads from poisoned memory.
> A similar query does not show the crash:
> {code:sql}
> select count(*) from functional_parquet.alltypes a group by a.string_col 
> limit 2;
> {code}
> [~tarmstrong] - Do you have an idea why the query on a much smaller dataset 
> wouldn't crash?





[jira] [Created] (IMPALA-8152) Aggregate Commands on HBase Table Omit Null Values

2019-02-01 Thread Alan Jackoway (JIRA)
Alan Jackoway created IMPALA-8152:
-

 Summary: Aggregate Commands on HBase Table Omit Null Values
 Key: IMPALA-8152
 URL: https://issues.apache.org/jira/browse/IMPALA-8152
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 2.12.0
Reporter: Alan Jackoway


We have an HBase-backed Impala table which has a string column (for the 
purpose of this jira, {{sCol}}).

There are records where that column is null, which we can observe with queries 
like {{select * from table where sCol is null limit 1}}.

However, when we run these commands, we get bad results:
{code:sql}
-- Returns 0
select count(*) from table where sCol is null;
-- Returns only rows for string values (we only have a few options in this 
case), no row for null
select sCol, count(*) from table group by sCol
{code}

These commands work as expected on Parquet-backed tables. They also do not work 
in Hive, for which I will file a jira shortly.
 





[jira] [Resolved] (IMPALA-7895) Incorrect expected results for spillable-buffer-sizing.test

2019-02-01 Thread Tim Armstrong (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong resolved IMPALA-7895.
---
   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Incorrect expected results for spillable-buffer-sizing.test
> ---
>
> Key: IMPALA-7895
> URL: https://issues.apache.org/jira/browse/IMPALA-7895
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 3.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
> Fix For: Impala 3.2.0
>
>
> A recent change appears to have caused a test to expect the wrong rewritten 
> SQL in {{spillable-buffer-sizing.test}}.
> {noformat}
> # Mid NDV aggregation - should scale down buffers to intermediate size.
> select straight_join l_orderkey, o_orderstatus, count(*)
> from tpch_parquet.lineitem
> join tpch_parquet.orders on o_orderkey = l_orderkey
> group by 1, 2
> having count(*) = 1
>  DISTRIBUTEDPLAN
> Max Per-Host Resource Reservation: Memory=82.00MB Threads=7
> Per-Host Resource Estimates: Memory=244MB
> Analyzed query: SELECT 
> -- +straight_join
> l_orderkey, o_orderstatus, count(*) FROM tpch_parquet.lineitem INNER JOIN
> tpch_parquet.orders ON o_orderkey = l_orderkey GROUP BY CAST(1 AS 
> INVALID_TYPE),
> CAST(2 AS INVALID_TYPE) HAVING count(*) = CAST(1 AS BIGINT)
> {noformat}
> Correct rewritten SQL:
> {noformat}
> Analyzed query: SELECT 
> -- +straight_join
> l_orderkey, o_orderstatus, count(*) FROM tpch_parquet.lineitem INNER JOIN
> tpch_parquet.orders ON o_orderkey = l_orderkey GROUP BY l_orderkey,
> o_orderstatus HAVING count(*) = CAST(1 AS BIGINT)
> {noformat}
> The same problem occurs in {{max-rows-test.test}}.
> The problem is due to the existence of two copies of the grouping 
> expressions. The {{toSql()}} function used the original, unanalyzed copy, not 
> the rewritten copy with ordinal replacements.





[jira] [Resolved] (IMPALA-6098) core on parquet select

2019-02-01 Thread Tim Armstrong (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong resolved IMPALA-6098.
---
Resolution: Cannot Reproduce

> core on parquet select
> --
>
> Key: IMPALA-6098
> URL: https://issues.apache.org/jira/browse/IMPALA-6098
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.9.0
> Environment: Version
> catalogd version 2.9.0-cdh5.12.0 RELEASE (build 
> 03c6ddbdcec39238be4f5b14a300d5c4f576097e)
> Built on Thu Jun 29 04:17:31 PDT 2017
> Hardware Info
> Cpu Info:
>   Model: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
>   Cores: 24
>   Max Possible Cores: 24
>   L1 Cache: 32.00 KB (Line: 64.00 B)
>   L2 Cache: 256.00 KB (Line: 64.00 B)
>   L3 Cache: 15.00 MB (Line: 64.00 B)
>   Hardware Supports:
> ssse3
> sse4_1
> sse4_2
> popcnt
> avx
> avx2
>   Numa Nodes: 2
>   Numa Nodes of Cores: 0->0 | 1->0 | 2->0 | 3->0 | 4->0 | 5->0 | 6->1 | 7->1 
> | 8->1 | 9->1 | 10->1 | 11->1 | 12->0 | 13->0 | 14->0 | 15->0 | 16->0 | 17->0 
> | 18->1 | 19->1 | 20->1 | 21->1 | 22->1 | 23->1 |
>  Physical Memory: 62.28 GB
>  Disk Info: 
>   Num disks 13: 
> sda (rotational=false)
> sdb (rotational=true)
> sdc (rotational=true)
> sdd (rotational=true)
> sde (rotational=true)
> sdk (rotational=true)
> sdf (rotational=true)
> sdl (rotational=true)
> sdm (rotational=true)
> sdg (rotational=true)
> sdi (rotational=true)
> sdj (rotational=true)
> sdh (rotational=true)
> OS Info
> OS version: Linux version 3.10.104-1-tlinux2-0041.tl1 (r...@te64.site) (gcc 
> version 4.4.6 20110731 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Fri Oct 28 20:36:06 
> CST 2016
> Clock: clocksource: 'tsc', clockid_t: CLOCK_MONOTONIC
>Reporter: sw
>Priority: Major
>
> I created a table like this:
> {code:java}
> "CREATE EXTERNAL TABLE fact_vm_widetable  LIKE PARQUET  
> '/user/spark/parquet-vm/part-0-69d62acd-92a4-4774-ae6c-71be5c2dfcd0-c000.snappy.parquet'
> STORED AS PARQUET
> LOCATION '/user/spark/parquet-vm';"
> {code}
> Then, on {{select count(1) from fact_host_widetable}}, all Impala daemons core.
> Backtrace from the core file:
> (gdb) bt
> #0  0x7f64bf6f3625 in raise () from /lib64/libc.so.6
> #1  0x7f64bf6f4e05 in abort () from /lib64/libc.so.6
> #2  0x7f64c016c07d in __gnu_cxx::__verbose_terminate_handler() () from 
> /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/impala/lib/libstdc++.so.6
> #3  0x7f64c016a0e6 in ?? () from 
> /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/impala/lib/libstdc++.so.6
> #4  0x7f64c016a131 in std::terminate() () from 
> /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/impala/lib/libstdc++.so.6
> #5  0x7f64c016a348 in __cxa_throw () from 
> /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/impala/lib/libstdc++.so.6
> #6  0x7f64c01c5976 in std::__throw_runtime_error(char const*) () from 
> /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/impala/lib/libstdc++.so.6
> #7  0x7f64c018cac4 in 
> std::locale::facet::_S_create_c_locale(__locale_struct*&, char const*, 
> __locale_struct*) ()
>from 
> /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/impala/lib/libstdc++.so.6
> #8  0x7f64c0181f69 in std::locale::_Impl::_Impl(char const*, unsigned 
> long) () from 
> /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/impala/lib/libstdc++.so.6
> #9  0x7f64c0183192 in std::locale::locale(char const*) () from 
> /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/impala/lib/libstdc++.so.6
> #10 0x00e81de3 in boost::filesystem::path::codecvt() ()
> #11 0x00c6f8f2 in 
> impala::HdfsScanNodeBase::Prepare(impala::RuntimeState*) ()
> #12 0x00c67ce9 in 
> impala::HdfsScanNode::Prepare(impala::RuntimeState*) ()
> #13 0x00c50bf4 in impala::ExecNode::Prepare(impala::RuntimeState*) ()
> #14 0x00cf0037 in 
> impala::PartitionedAggregationNode::Prepare(impala::RuntimeState*) ()
> #15 0x00a7efcd in impala::FragmentInstanceState::Prepare() ()
> #16 0x00a7fb71 in impala::FragmentInstanceState::Exec() ()
> #17 0x00a6bab6 in 
> impala::QueryState::ExecFInstance(impala::FragmentInstanceState*) ()
> #18 0x00bf0ac9 in 
> impala::Thread::SuperviseThread(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, 
> std::basic_string<char, std::char_traits<char>, std::allocator<char> > 
> const&, boost::function<void ()>, impala::Promise<long>*) ()
> #19 0x00bf1484 in boost::detail::thread_data void (*)(std::basic_string, std::allocator 
> > const&, std::basic_string, 
> std::allocator > const&, boost::function, 
> impala::Promise*), 
> boost::_bi::list4 std::char_traits, std::allocator > >, 
> boost::_bi::value, 
> std::allocator > >, boost::_bi::value >, 
> boost::_bi::value*> > > >::run() ()
> #20 0x00e592ea in ??

[jira] [Resolved] (IMPALA-3631) Investigate why Decimal to Timestamp casting became slower

2019-02-01 Thread Tim Armstrong (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-3631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong resolved IMPALA-3631.
---
Resolution: Later

I don't think this is really a priority now - we should probably just treat 
this the same as other performance improvements and wait until we have some 
evidence that it's important to improve.

> Investigate why Decimal to Timestamp casting became slower
> --
>
> Key: IMPALA-3631
> URL: https://issues.apache.org/jira/browse/IMPALA-3631
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.6.0
>Reporter: Taras Bobrovytsky
>Priority: Minor
>  Labels: performance
>
> https://issues.cloudera.org/browse/IMPALA-3163 fixes the correctness issue 
> with Decimal to Timestamp casting, but worsens the performance by about 30%. 
> We want to understand why this happens and possibly fix it.





[jira] [Resolved] (IMPALA-6211) Query state shows FINISHED in webUI/25000/queries page while it shows CREATED in profile

2019-02-01 Thread Tim Armstrong (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong resolved IMPALA-6211.
---
Resolution: Duplicate

> Query state shows FINISHED in webUI/25000/queries page while it shows CREATED 
> in profile
> 
>
> Key: IMPALA-6211
> URL: https://issues.apache.org/jira/browse/IMPALA-6211
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Mala Chikka Kempanna
>Priority: Major
> Attachments: Profile-query-state.png, webUI-query-state.png
>
>
> A query run from HUE shows inconsistent state in the Impala web UI and in the profile.
> On impala debug web UI 25000/queries, the query state is shown FINISHED, but 
> in profile query state shows CREATED.
> These two states need to be in sync.





[jira] [Resolved] (IMPALA-3831) Broken links in Impala debug webpage when query is starting up

2019-02-01 Thread Tim Armstrong (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Armstrong resolved IMPALA-3831.
---
Resolution: Cannot Reproduce

> Broken links in Impala debug webpage when query is starting up
> --
>
> Key: IMPALA-3831
> URL: https://issues.apache.org/jira/browse/IMPALA-3831
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.7.0
>Reporter: Tim Armstrong
>Priority: Minor
>  Labels: debugging, ramp-up, supportability
> Attachments: impala-debug-webpage-invalid-query-id.png
>
>
> When you click on a query in the Impala debug webpage, I think before it 
> finishes planning, you get a blank plan webpage and an "Error: Invalid query 
> id: ae41282f79f47d9b:c117ce848f4ae9bd" if you click on the link from the 
> "queries" page. 
> This is ok in itself, but all of the links to other query pages are broken 
> and missing a query_id, e.g. 
> "http://tarmstrong-box.ca.cloudera.com:25000/query_summary?query_id=". This is 
> annoying because you have to refresh until the query finishes planning, then 
> click through to the other query pages. We already know the query id, so we 
> should be able to generate the correct links.





[jira] [Created] (IMPALA-8153) Impala Doc: Add a section on Admission Debug page to Web UI doc

2019-02-01 Thread Alex Rodoni (JIRA)
Alex Rodoni created IMPALA-8153:
---

 Summary: Impala Doc: Add a section on Admission Debug page to Web 
UI doc
 Key: IMPALA-8153
 URL: https://issues.apache.org/jira/browse/IMPALA-8153
 Project: IMPALA
  Issue Type: Sub-task
  Components: Docs
Affects Versions: Impala 3.2.0
Reporter: Alex Rodoni
Assignee: Alex Rodoni








[jira] [Closed] (IMPALA-8137) Order by docs incorrectly state that order by happens on one node

2019-02-01 Thread Alex Rodoni (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rodoni closed IMPALA-8137.
---
   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Order by docs incorrectly state that order by happens on one node
> -
>
> Key: IMPALA-8137
> URL: https://issues.apache.org/jira/browse/IMPALA-8137
> Project: IMPALA
>  Issue Type: Bug
>  Components: Docs
>Reporter: Tim Armstrong
>Assignee: Alex Rodoni
>Priority: Major
> Fix For: Impala 3.2.0
>
>
> https://impala.apache.org/docs/build/html/topics/impala_order_by.html
> "because the entire result set must be produced and transferred to one node 
> before the sorting can happen." is incorrect. If there is an "ORDER BY" 
> clause in a select block, the data is first sorted locally by each Impala 
> daemon, then streamed to the coordinator, which merges the sorted result sets.





[jira] [Created] (IMPALA-8154) Disable auth_to_local by default

2019-02-01 Thread Michael Ho (JIRA)
Michael Ho created IMPALA-8154:
--

 Summary: Disable auth_to_local by default
 Key: IMPALA-8154
 URL: https://issues.apache.org/jira/browse/IMPALA-8154
 Project: IMPALA
  Issue Type: Bug
  Components: Distributed Exec
Affects Versions: Impala 3.1.0, Impala 2.12.0
Reporter: Michael Ho
Assignee: Michael Ho


Before KRPC, the local name mapping was derived entirely from the principal 
name. With KRPC enabled, Impala starts to use the system auth_to_local rules, 
because "use_system_auth_to_local" is enabled by default. This can cause a 
regression in cases where localauth is configured in krb5.conf, and may cause 
connection issues between impalads after [this 
commit|https://github.com/apache/impala/commit/5c541b960491ba91533712144599fb3b6d99521d].

The fix is to disable use_system_auth_to_local by default.






[jira] [Created] (IMPALA-8155) Switch to Impala-lzo/2.x for Impala-2.x

2019-02-01 Thread Quanlong Huang (JIRA)
Quanlong Huang created IMPALA-8155:
--

 Summary: Switch to Impala-lzo/2.x for Impala-2.x
 Key: IMPALA-8155
 URL: https://issues.apache.org/jira/browse/IMPALA-8155
 Project: IMPALA
  Issue Type: Task
Reporter: Quanlong Huang


Impala-2.x is currently based on the Cloudera/Impala-lzo master branch. As 
that branch is updated, builds of Impala-2.x will fail. We need to switch to 
another branch that points to the original commit.





[jira] [Created] (IMPALA-8156) Add format options to the EXPLAIN statement

2019-02-01 Thread Paul Rogers (JIRA)
Paul Rogers created IMPALA-8156:
---

 Summary: Add format options to the EXPLAIN statement
 Key: IMPALA-8156
 URL: https://issues.apache.org/jira/browse/IMPALA-8156
 Project: IMPALA
  Issue Type: Improvement
  Components: Frontend
Affects Versions: Impala 3.1.0
Reporter: Paul Rogers
Assignee: Paul Rogers


The EXPLAIN statement is very basic:

{code:sql}
EXPLAIN <statement>;
{code}

Example:

{code:sql}
EXPLAIN SELECT * FROM alltypes;
{code}

Explain does provide some options set as session options:

{code:sql}
SET explain_level=extended;
EXPLAIN <statement>;
{code}

We have often found the need for additional information. For example, it would 
be very useful to obtain the SELECT statement after view substitution.

We wish to extend EXPLAIN to allow additional options, while retaining full 
backward compatibility. The extended syntax is:

{code:sql}
EXPLAIN [FORMAT([opt (, opt)*])] <statement>;
{code}

This syntax reuses the existing FORMAT keyword and allows an unlimited set of 
options to be added in the future without the need to define new keywords.

Options are in the {{name=value}} form with {{name}} as an identifier and 
{{value}} as a string literal. Both are case-insensitive. Example to set the 
explain level:

{code:sql}
EXPLAIN FORMAT(level=extended) SELECT * FROM alltypes;
{code}

The two options supported at present are:

* {{level}} - Sets the explain level.
* {{rewritten}} - Shows the fully rewritten SQL statement with views expanded.

The {{level}} option overrides the existing session options. If {{level}} is 
not present, then the session option is used instead.
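The option list and the level override described above might be parsed along these lines. This is a minimal sketch; the class and method names ({{ExplainFormatOptions}}, {{parse}}, {{effectiveLevel}}) are hypothetical, not Impala's actual frontend code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of FORMAT option parsing. The class and method names are
// hypothetical; this is not Impala's actual frontend code.
public class ExplainFormatOptions {
  // Parses a comma-separated "name=value" list. Names and values are
  // case-insensitive; a bare name defaults to "true".
  public static Map<String, String> parse(String optList) {
    Map<String, String> opts = new HashMap<>();
    if (optList.trim().isEmpty()) return opts;
    for (String opt : optList.split(",")) {
      String[] parts = opt.split("=", 2);
      String name = parts[0].trim().toLowerCase();
      String value = parts.length > 1
          // Strip the single quotes of a string-literal value.
          ? parts[1].trim().toLowerCase().replaceAll("^'|'$", "")
          : "true";
      opts.put(name, value);
    }
    return opts;
  }

  // The "level" option overrides the session explain_level when present.
  public static String effectiveLevel(Map<String, String> opts, String sessionLevel) {
    return opts.getOrDefault("level", sessionLevel);
  }

  public static void main(String[] args) {
    Map<String, String> opts = parse("level='extended', rewritten");
    System.out.println(effectiveLevel(opts, "standard"));  // prints extended
  }
}
```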

The {{rewritten}} option takes two values: {{true}} or {{false}}. If set, 
{{EXPLAIN}} returns the text of the rewritten SQL instead of the query plan. 
Example:

{noformat}
functional> explain format(rewritten) SELECT * FROM view_view;

+------------------------------------------------+
| Explain String                                 |
+------------------------------------------------+
| SELECT * FROM /* functional.view_view */ (     |
| SELECT * FROM /* functional.alltypes_view */ ( |
| SELECT * FROM functional.alltypes)             |
| )                                              |
+------------------------------------------------+
{noformat}

Here, the names in comments are the view names. Views are then expanded inline 
to show the full extent of the statement. This is very helpful when resolving 
user issues.

h4. Comparison with Other SQL Dialects

The ISO SQL standard does not define the {{EXPLAIN}} statement; it is a vendor 
extension. MySQL defines {{EXPLAIN}} as:

{noformat}
{EXPLAIN | DESCRIBE | DESC}
    [explain_type]
    {explainable_stmt | FOR CONNECTION connection_id}

explain_type: {
    FORMAT = format_name
}

format_name: {
    TRADITIONAL
  | JSON
}
{noformat}

That is, MySQL also uses the {{FORMAT}} keyword with only two choices.

SQL Server uses a form much like Impala's present one, with no options.

Postgres uses options and keywords:

{noformat}
EXPLAIN [ ( option [, ...] ) ] statement
EXPLAIN [ ANALYZE ] [ VERBOSE ] statement

where option can be one of:

ANALYZE [ boolean ]
VERBOSE [ boolean ]
COSTS [ boolean ]
BUFFERS [ boolean ]
FORMAT { TEXT | XML | JSON | YAML }
{noformat}

Apache Drill uses a series of keywords to express options:

{noformat}
explain plan [ including all attributes ]
 [ with implementation | without implementation ]
 for <query>;
{noformat}

We claim that, given the wide variety of vendor implementations, the proposed 
Impala syntax is reasonable.

h4. Futures

IMPALA-5973 proposes to add a JSON format for {{EXPLAIN}} output. We propose to 
select JSON output using the "format" option:

{code:sql}
EXPLAIN FORMAT(format='json') <statement>;
{code}

The format can be combined with other options, such as level:

{code:sql}
EXPLAIN FORMAT(format='json', level='extended') <statement>;
{code}

h4. Details

The key/value syntax is very general, but cumbersome for simple tasks. The 
{{FORMAT}} option allows a number of simplifications.

First, for the explain level, each level can be used as a Boolean option:

{code:sql}
EXPLAIN FORMAT(extended='true') <statement>;
{code}

Second, for Boolean options, the value is optional and "true" is assumed:

{code:sql}
EXPLAIN FORMAT(EXTENDED) <statement>;
{code}

Third, if only a value is given, the value is assumed to be for the "format" 
key (which is not yet supported):

{code:sql}
EXPLAIN FORMAT('json') <statement>;
{code}

This would, when JSON format support is available, emit the plan as JSON.

The astute reader will see opportunities for odd combinations of options. 
Rather than enforcing a strict set of rules, the {{FORMAT}} option simply does 
something reasonable when given an odd combination. Example:

{code:sql}
EXPLAIN FORMAT(level='standard', extended, verbose='false') <statement>;
{code}

The short answer here is that such combinations are ambiguous; the behavior is 
undefined, but reasonable.
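The shorthand rules above might be normalized into explicit (name, value) pairs along these lines. This is a minimal sketch; the class name, method names, and the set of level names are assumptions for illustration, not Impala's actual frontend code:

```java
import java.util.AbstractMap;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the three FORMAT shorthand rules; not Impala code.
public class FormatShorthand {
  // Explain levels usable as bare Boolean options (rule 1); an assumed list.
  private static final List<String> LEVELS =
      Arrays.asList("minimal", "standard", "extended", "verbose");

  // Expands one FORMAT option token into an explicit (name, value) pair.
  public static Map.Entry<String, String> normalize(String token) {
    String t = token.trim().toLowerCase();
    String name, value;
    if (t.contains("=")) {
      String[] p = t.split("=", 2);
      name = p[0].trim();
      value = stripQuotes(p[1].trim());
    } else if (t.startsWith("'")) {
      // Rule 3: a bare value is assumed to be for the "format" key.
      return new AbstractMap.SimpleEntry<>("format", stripQuotes(t));
    } else {
      // Rule 2: a bare name is a Boolean option with "true" assumed.
      name = t;
      value = "true";
    }
    // Rule 1: a level name used as a Boolean option means level=<name>.
    if (LEVELS.contains(name) && value.equals("true")) {
      return new AbstractMap.SimpleEntry<>("level", name);
    }
    return new AbstractMap.SimpleEntry<>(name, value);
  }

  // Removes the single quotes around a string-literal value.
  private static String stripQuotes(String s) {
    return s.replaceAll("^'|'$", "");
  }

  public static void main(String[] args) {
    System.out.println(normalize("EXTENDED"));  // prints level=extended
    System.out.println(normalize("'json'"));    // prints format=json
  }
}
```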




[jira] [Created] (IMPALA-8157) Log exceptions from the FrontEnd

2019-02-01 Thread Paul Rogers (JIRA)
Paul Rogers created IMPALA-8157:
---

 Summary: Log exceptions from the FrontEnd
 Key: IMPALA-8157
 URL: https://issues.apache.org/jira/browse/IMPALA-8157
 Project: IMPALA
  Issue Type: Improvement
  Components: Frontend
Affects Versions: Impala 3.1.0
Reporter: Paul Rogers
Assignee: Fang-Yu Rao


The BE calls into the FE for a variety of operations. Each of these may fail in 
expected ways (an invalid query, say) or unexpected ways (say, a code change 
introduces a null pointer exception).

At present, the BE logs only the exception, and only at the INFO level. This 
ticket asks to log all unexpected exceptions at the ERROR level. The basic idea 
is to extend all FE entry points to do:

{code:java}
try {
  // Do the operation
} catch (ExpectedException e) {
  // Don't log expected exceptions
  throw e;
} catch (Throwable e) {
  LOG.error("Something went wrong", e);
  throw e;
}
{code}

The above code logs all exceptions except for those that are considered 
expected. The job of this ticket is to:

* Find all the entry points
* Identify which, if any, exceptions are expected
* Add logging code with an error message that identifies the operation

This pattern was tested ad hoc to find a bug during development and seems to 
work fine. As a result, the change is mostly a matter of the above three steps.
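The try/catch pattern above could be factored into one reusable wrapper so each entry point does not repeat it. The sketch below is hypothetical: {{FeEntryPoint}}, {{run}}, the {{ExpectedException}} marker, and the use of {{System.err}} in place of the real logger are illustrative assumptions, not Impala's actual code:

```java
import java.util.function.Supplier;

// Hypothetical reusable wrapper for FE entry points; not Impala's actual code.
public class FeEntryPoint {
  // Marker for failures that are expected (e.g. an invalid query) and
  // should not be logged at ERROR level.
  public static class ExpectedException extends RuntimeException {
    public ExpectedException(String msg) { super(msg); }
  }

  // Runs op, rethrowing all exceptions but logging only unexpected ones.
  public static <T> T run(String opName, Supplier<T> op) {
    try {
      return op.get();
    } catch (ExpectedException e) {
      throw e;  // expected: caller handles it, no ERROR log
    } catch (RuntimeException e) {
      // Stand-in for LOG.error; the message identifies the operation.
      System.err.println("Unexpected error in " + opName + ": " + e);
      throw e;
    }
  }

  public static void main(String[] args) {
    System.out.println(run("getExecRequest", () -> "plan-ok"));  // prints plan-ok
  }
}
```

A helper like this would reduce the change at each entry point to a one-line wrap, making the three steps above mostly mechanical.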



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IMPALA-8142) ASAN build failure in query_test/test_nested_types.py

2019-02-01 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8142.
-
   Resolution: Duplicate
 Assignee: Lars Volker  (was: Lenisha Gandhi)
Fix Version/s: (was: Impala 3.1.0)
   Impala 3.2.0

> ASAN build failure in query_test/test_nested_types.py
> -
>
> Key: IMPALA-8142
> URL: https://issues.apache.org/jira/browse/IMPALA-8142
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 3.1.0, Impala 3.2.0
>Reporter: Paul Rogers
>Assignee: Lars Volker
>Priority: Blocker
>  Labels: asan, build-failure
> Fix For: Impala 3.2.0
>
>
> From the build log:
> {noformat}
> 05:23:33 === FAILURES ===
> 05:23:33  TestNestedTypes.test_subplan[protocol: beeswax | exec_option: 
> {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 
> 'disable_codegen': False, 'abort_on_error': 1, 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none] 
> 05:23:33 [gw7] linux2 -- Python 2.7.5 
> /data/jenkins/workspace/impala-cdh6.x-core-asan/repos/Impala/bin/../infra/python/env/bin/python
> 05:23:33 query_test/test_nested_types.py:77: in test_subplan
> 05:23:33 self.run_test_case('QueryTest/nested-types-subplan', vector)
> 05:23:33 common/impala_test_suite.py:472: in run_test_case
> 05:23:33 result = self.__execute_query(target_impalad_client, query, 
> user=user)
> 05:23:33 common/impala_test_suite.py:699: in __execute_query
> 05:23:33 return impalad_client.execute(query, user=user)
> 05:23:33 common/impala_connection.py:174: in execute
> 05:23:33 return self.__beeswax_client.execute(sql_stmt, user=user)
> 05:23:33 beeswax/impala_beeswax.py:200: in execute
> 05:23:33 result = self.fetch_results(query_string, handle)
> 05:23:33 beeswax/impala_beeswax.py:445: in fetch_results
> 05:23:33 exec_result = self.__fetch_results(query_handle, max_rows)
> 05:23:33 beeswax/impala_beeswax.py:456: in __fetch_results
> 05:23:33 results = self.__do_rpc(lambda: self.imp_service.fetch(handle, 
> False, fetch_rows))
> 05:23:33 beeswax/impala_beeswax.py:512: in __do_rpc
> 05:23:33 raise ImpalaBeeswaxException(self.__build_error_message(e), e)
> 05:23:33 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> 05:23:33 E   INNER EXCEPTION: <class 'thrift.transport.TTransport.TTransportException'>
> 05:23:33 E   MESSAGE: TSocket read 0 bytes
> {noformat}
> From {{impalad.ERROR}}:
> {noformat}
> SUMMARY: AddressSanitizer: use-after-poison 
> /data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/be/src/runtime/tuple.h:241:13
>  in impala::Tuple::IsNull(impala::NullIndicatorOffset const&) const
> ...
> ==119152==ABORTING
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-8158) Use HS2 service to retrieve thrift profiles

2019-02-01 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8158:
---

 Summary: Use HS2 service to retrieve thrift profiles
 Key: IMPALA-8158
 URL: https://issues.apache.org/jira/browse/IMPALA-8158
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 3.2.0
Reporter: Lars Volker
Assignee: Lars Volker


Once Impyla has been updated, we should retrieve Thrift profiles through HS2 
synchronously instead of scraping the debug web pages.

https://github.com/cloudera/impyla/issues/332



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)