[jira] [Created] (IMPALA-6201) TestRuntimeFilters.test_basic_filters fails on ASAN

2017-11-15 Thread Taras Bobrovytsky (JIRA)
Taras Bobrovytsky created IMPALA-6201:
--------------------------------------

 Summary: TestRuntimeFilters.test_basic_filters fails on ASAN
 Key: IMPALA-6201
 URL: https://issues.apache.org/jira/browse/IMPALA-6201
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 2.11.0
Reporter: Taras Bobrovytsky
Priority: Blocker


{code}

{code}





[jira] [Created] (IMPALA-6200) Flakiness in Planner Tests

2017-11-15 Thread Taras Bobrovytsky (JIRA)
Taras Bobrovytsky created IMPALA-6200:
--------------------------------------

 Summary: Flakiness in Planner Tests
 Key: IMPALA-6200
 URL: https://issues.apache.org/jira/browse/IMPALA-6200
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 2.11.0
Reporter: Taras Bobrovytsky
Priority: Critical


We sometimes see small random variations in planner test output, which cause
builds to be flaky. A hypothetical sketch of masking these volatile fields
follows the examples below.

Actual:
{code}
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
|  Per-Host Resources: mem-estimate=48.00MB mem-reservation=0B
^^
WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(CAST(3 + year AS INT),CAST(month - -1 AS INT))]
|  partitions=4
|  mem-estimate=1.56KB mem-reservation=0B
{code}

Expected:
{code}
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
|  Per-Host Resources: mem-estimate=32.00MB mem-reservation=0B
WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(CAST(3 + year AS INT),CAST(month - -1 AS INT))]
|  partitions=4
|  mem-estimate=1.56KB mem-reservation=0B
{code}

Actual:
{code}
|  F01:PLAN FRAGMENT [RANDOM] hosts=2 instances=2
^
|  Per-Host Resources: mem-estimate=16.00MB mem-reservation=0B
|  01:SCAN HDFS [functional_parquet.alltypestiny, RANDOM]
| partitions=4/4 files=4 size=9.75KB
| stats-rows=unavailable extrapolated-rows=disabled
| table stats: rows=unavailable size=unavailable
| column stats: unavailable
| mem-estimate=16.00MB mem-reservation=0B
| tuple-ids=1 row-size=88B cardinality=unavailable
{code}

Expected:
{code}
|  F01:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
|  Per-Host Resources: mem-estimate=16.00MB mem-reservation=0B
|  01:SCAN HDFS [functional_parquet.alltypestiny, RANDOM]
| partitions=4/4 files=4 size=10.48KB
| stats-rows=unavailable extrapolated-rows=disabled
| table stats: rows=unavailable size=unavailable
| column stats: unavailable
| mem-estimate=16.00MB mem-reservation=0B
| tuple-ids=1 row-size=88B cardinality=unavailable
{code}
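
One way a test harness could tolerate this kind of variation (a hypothetical
sketch, not the actual planner test infrastructure; the masked fields are
taken from the diffs above) is to normalize volatile values such as
mem-estimate, hosts/instances, and file sizes before comparing actual and
expected plans:
{code}
import re

# Hypothetical sketch: mask fields that legitimately vary between runs
# (memory estimates, host/instance counts, file sizes) so a textual diff of
# plans only flags structural changes. Not the actual planner test harness.
VOLATILE_PATTERNS = [
    (re.compile(r'mem-estimate=\S+'), 'mem-estimate=*'),
    (re.compile(r'hosts=\d+ instances=\d+'), 'hosts=* instances=*'),
    (re.compile(r'size=\S+'), 'size=*'),
]

def normalize_plan(plan_text):
    for pattern, replacement in VOLATILE_PATTERNS:
        plan_text = pattern.sub(replacement, plan_text)
    return plan_text

def plans_match(actual, expected):
    return normalize_plan(actual) == normalize_plan(expected)
{code}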





[jira] [Created] (IMPALA-6199) Flaky test: metadata/test_hdfs_permissions.py

2017-11-15 Thread Taras Bobrovytsky (JIRA)
Taras Bobrovytsky created IMPALA-6199:
--------------------------------------

 Summary: Flaky test: metadata/test_hdfs_permissions.py
 Key: IMPALA-6199
 URL: https://issues.apache.org/jira/browse/IMPALA-6199
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 2.11.0
Reporter: Taras Bobrovytsky
Priority: Critical


TestHdfsPermissions.test_insert_into_read_only_table failed on a nightly Isilon 
build with the following error message:
{code}
 TestHdfsPermissions.test_insert_into_read_only_table[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: text/none] 
[gw1] linux2 -- Python 2.6.6 /data/jenkins/workspace/impala-umbrella-build-and-test-isilon/repos/Impala/bin/../infra/python/env/bin/python
metadata/test_hdfs_permissions.py:73: in test_insert_into_read_only_table
self.hdfs_client.delete_file_dir('test-warehouse/%s' % TEST_TBL, recursive=True)
util/hdfs_util.py:90: in delete_file_dir
if not self.exists(path):
util/hdfs_util.py:138: in exists
self.get_file_dir_status(path)
util/hdfs_util.py:102: in get_file_dir_status
return super(PyWebHdfsClientWithChmod, self).get_file_dir_status(path)
../infra/python/env/lib/python2.6/site-packages/pywebhdfs/webhdfs.py:338: in get_file_dir_status
_raise_pywebhdfs_exception(response.status_code, response.content)
../infra/python/env/lib/python2.6/site-packages/pywebhdfs/webhdfs.py:477: in _raise_pywebhdfs_exception
raise errors.PyWebHdfsException(msg=message)
E   PyWebHdfsException: 
E   
E   403 Forbidden
E   
E   Forbidden
E   You don't have permission to access /v1/html/500Text.html
E   on this server.
E   
{code}
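
The response body is an HTML error page rather than a WebHDFS JSON payload,
which suggests a transient proxy or server-side failure rather than a genuine
permission problem. A hypothetical sketch of how the test utility could retry
such transient failures (the helper name and retry policy are illustrative,
not existing Impala code):
{code}
import time

from pywebhdfs import errors

# Hypothetical sketch: retry a WebHDFS operation that intermittently fails
# with an HTML 403 served by a proxy in front of the namenode. The helper name
# and retry policy are illustrative only, not existing Impala test code.
def retry_webhdfs(op, attempts=3, delay_secs=5):
    for attempt in range(attempts):
        try:
            return op()
        except errors.PyWebHdfsException:
            if attempt == attempts - 1:
                raise
            time.sleep(delay_secs)

# Wrapping the call from the traceback above would look like:
#   retry_webhdfs(lambda: self.hdfs_client.delete_file_dir(
#       'test-warehouse/%s' % TEST_TBL, recursive=True))
{code}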





[jira] [Created] (IMPALA-6197) Instrument our locks to track contention

2017-11-15 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-6197:
--------------------------------

 Summary: Instrument our locks to track contention
 Key: IMPALA-6197
 URL: https://issues.apache.org/jira/browse/IMPALA-6197
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Reporter: Lars Volker


We should add instrumentation to our locks (or switch to an alternative like
[absl::mutex|https://abseil.io/about/design/mutex]) to enable profiling of lock
contention (see
[absl::RegisterMutexProfiler()|https://github.com/abseil/abseil-cpp/blob/master/absl/synchronization/mutex.h#L933]).
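
As a conceptual illustration of what such instrumentation would capture, here
is a minimal Python sketch of a lock wrapper that accumulates time spent
waiting on each labeled lock; this is an analogy only, since the actual change
would land in the C++ backend (or hook absl's profiler callback):
{code}
import threading
import time
from collections import defaultdict

# Conceptual analogy only: the real change would instrument the backend's C++
# locks or register an absl mutex profiler callback. This wrapper records the
# cumulative time callers spend waiting to acquire each labeled lock.
_wait_seconds = defaultdict(float)
_stats_lock = threading.Lock()

class InstrumentedLock(object):
    def __init__(self, label):
        self._label = label
        self._lock = threading.Lock()

    def __enter__(self):
        start = time.time()
        self._lock.acquire()
        waited = time.time() - start
        with _stats_lock:
            _wait_seconds[self._label] += waited
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._lock.release()
        return False
{code}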





[jira] [Created] (IMPALA-6196) Add per-remote-host RPC metrics to debug pages

2017-11-15 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-6196:
--------------------------------

 Summary: Add per-remote-host RPC metrics to debug pages
 Key: IMPALA-6196
 URL: https://issues.apache.org/jira/browse/IMPALA-6196
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend, Distributed Exec
Reporter: Lars Volker


We should track RPC statistics (latency, throughput) per remote host on each
backend of a cluster. This would help identify network contention and
disruptions, and the information could also be used during planning.
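
As a rough illustration of the bookkeeping involved (class and metric names
are assumptions, not Impala's actual metrics code), per-host latency and
throughput could be aggregated like this and rendered on a debug page:
{code}
from collections import defaultdict

# Hypothetical sketch of per-remote-host RPC bookkeeping as it might back a
# debug page; class and metric names are illustrative, not Impala's actual
# metrics.
class PerHostRpcStats(object):
    def __init__(self):
        self._stats = defaultdict(
            lambda: {'calls': 0, 'bytes': 0, 'total_ms': 0.0})

    def record(self, remote_host, latency_ms, payload_bytes):
        entry = self._stats[remote_host]
        entry['calls'] += 1
        entry['bytes'] += payload_bytes
        entry['total_ms'] += latency_ms

    def snapshot(self):
        # Per-host averages suitable for rendering on a debug page.
        result = {}
        for host, entry in self._stats.items():
            result[host] = {
                'calls': entry['calls'],
                'avg_latency_ms': entry['total_ms'] / entry['calls'],
                'bytes': entry['bytes'],
            }
        return result
{code}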


