[jira] [Updated] (HIVE-16311) Improve the performance for FastHiveDecimalImpl.fastDivide
[ https://issues.apache.org/jira/browse/HIVE-16311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Ma updated HIVE-16311:
----------------------------
    Attachment: HIVE-16311.006.patch

> Improve the performance for FastHiveDecimalImpl.fastDivide
> ----------------------------------------------------------
>
>                 Key: HIVE-16311
>                 URL: https://issues.apache.org/jira/browse/HIVE-16311
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 2.2.0
>            Reporter: Colin Ma
>            Assignee: Colin Ma
>             Fix For: 3.0.0
>
>         Attachments: HIVE-16311.001.patch, HIVE-16311.002.patch, HIVE-16311.003.patch, HIVE-16311.004.patch, HIVE-16311.005.patch, HIVE-16311.006.patch, HIVE-16311.withTrailingZero.patch
>
>
> FastHiveDecimalImpl.fastDivide has poor performance when evaluating an expression such as 12345.67/123.45.
> There are two points that can be improved:
> 1. Don't always use HiveDecimal.MAX_SCALE as the scale when doing the BigDecimal.divide.
> 2. Get the precision of the BigInteger in a fast way if possible.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
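The first improvement point can be sketched in plain Java (an illustrative example, not the actual FastHiveDecimalImpl code): dividing at a scale derived from the operands keeps the quotient compact, whereas always dividing at HiveDecimal.MAX_SCALE (38) produces a wide result whose trailing zeros must later be stripped.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivideScaleSketch {
    // HiveDecimal.MAX_SCALE in Hive is 38.
    static final int MAX_SCALE = 38;

    // Divide at an explicit scale with HALF_UP rounding.
    static BigDecimal divideWithScale(BigDecimal a, BigDecimal b, int scale) {
        return a.divide(b, scale, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("12345.67");
        BigDecimal b = new BigDecimal("123.45");

        // Dividing at MAX_SCALE yields a result with 38 fractional digits
        // that later needs an expensive stripTrailingZeros() pass.
        BigDecimal wide = divideWithScale(a, b, MAX_SCALE);
        // A scale derived from the operands keeps the quotient compact.
        BigDecimal tight = divideWithScale(a, b, 6);

        System.out.println(wide.scale());  // 38
        System.out.println(tight.scale()); // 6
    }
}
```

BigDecimal.divide with an explicit scale always returns a result at exactly that scale, so the tight-scale quotient needs no cleanup pass afterwards.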
[jira] [Commented] (HIVE-16311) Improve the performance for FastHiveDecimalImpl.fastDivide
[ https://issues.apache.org/jira/browse/HIVE-16311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965421#comment-15965421 ]

Colin Ma commented on HIVE-16311:
---------------------------------

[~mmccline], [~xuefuz], I just found the unnecessary code in FastHiveDecimalImpl.fastDivide() which causes the poor performance. BigDecimal.stripTrailingZeros() is slow and unnecessary, because fastTrailingDecimalZeroCount() and doFastScaleDown() do the same thing, and faster than BigDecimal.stripTrailingZeros(). You can refer to the patch for the details; here is the [Review board link|https://reviews.apache.org/r/58377] for easy review.

The following is the micro benchmark for the patch; each expression is calculated 50 times:

||expression||without patch (s)||with patch (s)||improvement||
|15 / 3|1.78|0.43|75.84%|
|0.001 / 810|0.56|0.33|41.07%|
|1 / 3|0.74|0.36|51.35%|
|1000 / 10|2.4|0.6|75%|
|123456789000123456789 / 1234567891|1.21|0.66|45.45%|
|123450001234501234.567 / 123.45|1.73|0.94|45.66%|
|3.140 / 1.00|1.84|0.45|75.54%|
|31401234567 / 112.3|0.9|0.53|41.11%|
|12345612345678901234561234567890123456 / 987654321|1.7|0.96|43.53%|
|12345612345678901234561234567890123456 / 9876543210123456|1.63|1|38.65%|
|0.00123456 / 0.098765|0.68|0.39|42.65%|
|0.0088 / 1000|1.32|0.25|81.06%|
|0.0088 / 9876543210123456|0.28|0.18|35.71%|

Expressions like *3.140 / 1.00* and *15 / 3* have many trailing zeros, so they get the largest improvement from the patch. The other expressions get the precision from *precision = bigDecimal.precision()* instead of *precision = bigInteger.toString().length()*, and still improve by about 40%.

The following is the benchmark with q06 of TPCx-BB. The cluster includes 6 nodes with 128G memory per node, Intel(R) Xeon(R) E5-2680 CPUs, and a 1G network, at 1T data scale with Spark as the execution engine.
|| ||without patch||with patch||improvement||
|disable vectorization|214s|178s|16.82%|
|enable vectorization (Parquet file format)|252s|140s|44.44%|

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
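The two fast paths described in the comment above can be sketched as follows (a hypothetical illustration; the real fastTrailingDecimalZeroCount() and doFastScaleDown() operate on FastHiveDecimalImpl's internal long fields): BigDecimal.precision() gives the decimal digit count without a toString() round-trip, and trailing decimal zeros can be counted with plain long arithmetic instead of going through BigDecimal.stripTrailingZeros().

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class FastDecimalSketch {

    // Decimal digit count of a BigInteger without the toString()
    // round-trip: BigDecimal.precision() on the unscaled value
    // returns the digit count directly.
    static int fastPrecision(BigInteger unscaled) {
        return new BigDecimal(unscaled.abs()).precision();
    }

    // Count trailing decimal zeros of a long the cheap way, instead of
    // building a BigDecimal and calling stripTrailingZeros().
    static int trailingDecimalZeroCount(long v) {
        if (v == 0) {
            return 0; // by convention; real code would special-case zero
        }
        int count = 0;
        while (v % 10 == 0) {
            v /= 10;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        BigInteger bi = new BigInteger("123456789000123456789");
        System.out.println(fastPrecision(bi));              // 21
        System.out.println(trailingDecimalZeroCount(3140)); // 1
    }
}
```

This mirrors the patch's observation that *bigDecimal.precision()* replaces *bigInteger.toString().length()*: both give the digit count for a non-negative unscaled value, but precision() avoids materializing the full decimal string.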
[jira] [Commented] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965412#comment-15965412 ]

Hive QA commented on HIVE-16387:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12862964/HIVE-16387.04.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10570 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=143)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4651/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4651/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4651/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12862964 - PreCommit-HIVE-Build

> Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-16387
>                 URL: https://issues.apache.org/jira/browse/HIVE-16387
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Pengcheng Xiong
>            Assignee: Pengcheng Xiong
>         Attachments: HIVE-16387.01.patch, HIVE-16387.02.patch, HIVE-16387.03.patch, HIVE-16387.04.patch
>
--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16344) Test and support replication of exchange partition
[ https://issues.apache.org/jira/browse/HIVE-16344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sushanth Sowmyan updated HIVE-16344:
------------------------------------
    Resolution: Fixed
    Fix Version/s: 3.0.0
    Status: Resolved (was: Patch Available)

> Test and support replication of exchange partition
> --------------------------------------------------
>
>                 Key: HIVE-16344
>                 URL: https://issues.apache.org/jira/browse/HIVE-16344
>             Project: Hive
>          Issue Type: Sub-task
>          Components: repl
>    Affects Versions: 2.1.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>              Labels: DR
>             Fix For: 3.0.0
>
>         Attachments: HIVE-16344.01.patch
>
>
> The Exchange (Move) partition operation should be replicated.
> -- Move partition from src_table to dest_table
> ALTER TABLE dest_table EXCHANGE PARTITION (partition_spec) WITH TABLE src_table;
> -- multiple partitions
> ALTER TABLE dest_table EXCHANGE PARTITION (partial_partition_spec) WITH TABLE src_table;
> The Exchange Partition operation is already logged as an ADD_PARTITION event on the destination table and a DROP_PARTITION event on the source table.
> We need to check the behaviour and then also add a test to verify it.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-11133) Support hive.explain.user for Spark [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-11133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sahil Takiar updated HIVE-11133:
--------------------------------
    Attachment: HIVE-11133.3.patch

> Support hive.explain.user for Spark [Spark Branch]
> --------------------------------------------------
>
>                 Key: HIVE-11133
>                 URL: https://issues.apache.org/jira/browse/HIVE-11133
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Mohit Sabharwal
>            Assignee: Sahil Takiar
>         Attachments: HIVE-11133.1.patch, HIVE-11133.2.patch, HIVE-11133.3.patch
>
>
> User-friendly explain output ({{set hive.explain.user=true}}) should support Spark as well.
> Once supported, we should also enable the related q-tests, such as {{explainuser_1.q}}.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-12614) RESET command does not close spark session
[ https://issues.apache.org/jira/browse/HIVE-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965365#comment-15965365 ]

Sahil Takiar commented on HIVE-12614:
-------------------------------------
[~nemon] could you take a look? RB: https://reviews.apache.org/r/58376/

> RESET command does not close spark session
> ------------------------------------------
>
>                 Key: HIVE-12614
>                 URL: https://issues.apache.org/jira/browse/HIVE-12614
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.3.0, 2.1.0
>            Reporter: Nemon Lou
>            Assignee: Sahil Takiar
>            Priority: Minor
>         Attachments: HIVE-12614.1.patch, HIVE-12614.2.patch
>
--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16422) Should kill running Spark Jobs when a query is cancelled.
[ https://issues.apache.org/jira/browse/HIVE-16422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965364#comment-15965364 ]

Hive QA commented on HIVE-16422:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12862962/HIVE-16422.000.txt

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10570 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.hbase.TestHBaseMetastoreSql.partitionedTable (batchId=201)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4650/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4650/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4650/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12862962 - PreCommit-HIVE-Build

> Should kill running Spark Jobs when a query is cancelled.
> ---------------------------------------------------------
>
>                 Key: HIVE-16422
>                 URL: https://issues.apache.org/jira/browse/HIVE-16422
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 2.1.0
>            Reporter: zhihai xu
>            Assignee: zhihai xu
>         Attachments: HIVE-16422.000.txt
>
>
> Should kill running Spark Jobs when a query is cancelled. When a query is cancelled, Driver.releaseDriverContext will be called by Driver.close. releaseDriverContext will call DriverContext.shutdown, which will call all the running tasks' shutdown.
> {code}
>   public synchronized void shutdown() {
>     LOG.debug("Shutting down query " + ctx.getCmd());
>     shutdown = true;
>     for (TaskRunner runner : running) {
>       if (runner.isRunning()) {
>         Task task = runner.getTask();
>         LOG.warn("Shutting down task : " + task);
>         try {
>           task.shutdown();
>         } catch (Exception e) {
>           console.printError("Exception on shutting down task " + task.getId() + ": " + e);
>         }
>         Thread thread = runner.getRunner();
>         if (thread != null) {
>           thread.interrupt();
>         }
>       }
>     }
>     running.clear();
>   }
> {code}
> Since SparkTask doesn't implement a shutdown method to kill the running Spark job, the Spark job may still be running after the query is cancelled. So it would be good to kill the Spark job in SparkTask.shutdown to save cluster resources.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
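The shutdown loop above can be modeled with a small self-contained sketch (class and method names here are illustrative stand-ins, not the actual Hive API): a cooperative shutdown flag on the task plus a thread interrupt, which is exactly the hook a SparkTask.shutdown override would use to cancel its submitted job.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class ShutdownSketch {

    // Illustrative stand-ins for Hive's Task/TaskRunner; not the real API.
    static class Task {
        volatile boolean shutdownRequested = false;

        // A SparkTask-like subclass would override this hook to cancel
        // the remote job instead of just flipping a flag.
        void shutdown() {
            shutdownRequested = true;
        }
    }

    static class TaskRunner {
        final Task task = new Task();
        final Thread thread;

        TaskRunner(Runnable work) {
            this.thread = new Thread(work);
        }
    }

    final List<TaskRunner> running = new ArrayList<>();

    // Same shape as the quoted DriverContext.shutdown(): signal every
    // task, then interrupt its runner thread.
    synchronized void shutdown() {
        for (TaskRunner runner : running) {
            runner.task.shutdown();
            if (runner.thread.isAlive()) {
                runner.thread.interrupt();
            }
        }
        running.clear();
    }

    // Run one long-sleeping "task", cancel it, and report whether the
    // runner thread exited and the task saw the shutdown signal.
    static boolean demoShutdown() {
        try {
            ShutdownSketch ctx = new ShutdownSketch();
            CountDownLatch startedLatch = new CountDownLatch(1);
            TaskRunner runner = new TaskRunner(() -> {
                startedLatch.countDown();
                try {
                    Thread.sleep(60_000); // stands in for a long-running stage
                } catch (InterruptedException e) {
                    // interrupted by shutdown(): exit promptly
                }
            });
            ctx.running.add(runner);
            runner.thread.start();
            startedLatch.await();

            ctx.shutdown();
            runner.thread.join(5_000);
            return !runner.thread.isAlive() && runner.task.shutdownRequested;
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demoShutdown()); // true
    }
}
```

The interrupt alone only wakes a blocked runner thread; actually stopping remote work needs the task-specific shutdown hook, which is the gap this issue points out for SparkTask.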
[jira] [Updated] (HIVE-16268) enable incremental repl dump to handle functions metadata
[ https://issues.apache.org/jira/browse/HIVE-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anishek updated HIVE-16268:
---------------------------
    Remaining Estimate: 72h (was: 48h)
    Original Estimate: 72h (was: 48h)

> enable incremental repl dump to handle functions metadata
> ---------------------------------------------------------
>
>                 Key: HIVE-16268
>                 URL: https://issues.apache.org/jira/browse/HIVE-16268
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>    Affects Versions: 2.2.0
>            Reporter: anishek
>            Assignee: anishek
>             Fix For: 3.0.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> This is created separately to ensure that any other replication-related metadata that comes from the replication spec is included in the function dump output when doing an incremental update.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16268) enable incremental repl dump to handle functions metadata
[ https://issues.apache.org/jira/browse/HIVE-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anishek updated HIVE-16268:
---------------------------
    Remaining Estimate: 48h
    Original Estimate: 48h
    Fix Version/s: 3.0.0

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-12614) RESET command does not close spark session
[ https://issues.apache.org/jira/browse/HIVE-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965331#comment-15965331 ]

Hive QA commented on HIVE-12614:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12862958/HIVE-12614.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10572 tests executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4649/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4649/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4649/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12862958 - PreCommit-HIVE-Build

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16423) De-duplicate semijoin branches and add hint to enforce semi join optimization
[ https://issues.apache.org/jira/browse/HIVE-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Jaiswal updated HIVE-16423:
----------------------------------
    Attachment: HIVE-16423.1.patch

Initial patch.

> De-duplicate semijoin branches and add hint to enforce semi join optimization
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-16423
>                 URL: https://issues.apache.org/jira/browse/HIVE-16423
>             Project: Hive
>          Issue Type: Task
>            Reporter: Deepak Jaiswal
>            Assignee: Deepak Jaiswal
>         Attachments: HIVE-16423.1.patch
>
>
> Currently in an n-way join, a semi join branch is created n times. Instead, it should reuse the same branch.
> Add hints in semijoin to enforce a particular semi join optimization.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16423) De-duplicate semijoin branches and add hint to enforce semi join optimization
[ https://issues.apache.org/jira/browse/HIVE-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Jaiswal updated HIVE-16423:
----------------------------------
    Status: Patch Available (was: In Progress)

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16289) add hints for semijoin reduction
[ https://issues.apache.org/jira/browse/HIVE-16289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965307#comment-15965307 ]

Deepak Jaiswal commented on HIVE-16289:
---------------------------------------
Won't be fixed here.

> add hints for semijoin reduction
> --------------------------------
>
>                 Key: HIVE-16289
>                 URL: https://issues.apache.org/jira/browse/HIVE-16289
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Assignee: Deepak Jaiswal
>         Attachments: HIVE-16289.01.patch, HIVE-16289.patch
>
>
> For now, hints will only impact the bloom filter size if semijoin is enabled.
> In a follow-up, after some cost-based semi-join decision logic is added, they may also influence it.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Work started] (HIVE-16423) De-duplicate semijoin branches and add hint to enforce semi join optimization
[ https://issues.apache.org/jira/browse/HIVE-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HIVE-16423 started by Deepak Jaiswal.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16289) add hints for semijoin reduction
[ https://issues.apache.org/jira/browse/HIVE-16289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Jaiswal reassigned HIVE-16289:
-------------------------------------
    Assignee: Deepak Jaiswal (was: Sergey Shelukhin)

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16423) De-duplicate semijoin branches and add hint to enforce semi join optimization
[ https://issues.apache.org/jira/browse/HIVE-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Jaiswal reassigned HIVE-16423:
-------------------------------------

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pengcheng Xiong updated HIVE-16387:
-----------------------------------
    Status: Patch Available (was: Open)

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pengcheng Xiong updated HIVE-16387:
-----------------------------------
    Status: Open (was: Patch Available)

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pengcheng Xiong updated HIVE-16387:
-----------------------------------
    Attachment: HIVE-16387.04.patch

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-11133) Support hive.explain.user for Spark [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-11133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965295#comment-15965295 ]

Hive QA commented on HIVE-11133:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12862945/HIVE-11133.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 327 failed/errored test(s), 10571 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=144)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucket6] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[constprog_partitioner] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[constprog_semijoin] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[gen_udf_example_add10] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[index_bitmap3] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[index_bitmap_auto] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[infer_bucket_sort_map_operators] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[infer_bucket_sort_num_buckets] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[parallel_orderby] (batchId=167)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[add_part_multiple] (batchId=127)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[annotate_stats_join] (batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join10] (batchId=113)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join11] (batchId=102)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join12] (batchId=108)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join13] (batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join14] (batchId=104)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join15] (batchId=105)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join16] (batchId=114)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join17] (batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join18] (batchId=103)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join18_multi_distinct] (batchId=109)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join19] (batchId=125)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join20] (batchId=136)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join22] (batchId=121)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join23] (batchId=106)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join24] (batchId=130)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join26] (batchId=104)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join27] (batchId=136)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join28] (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join2] (batchId=125)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join31] (batchId=117)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join3] (batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join4] (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join5] (batchId=129)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join6] (batchId=134)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join7] (batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join8] (batchId=135)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join9] (batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_stats2] (batchId=135)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_stats] (batchId=118)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_without_localtask] (batchId=98)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_smb_mapjoin_14] (batchId=123)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=100)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_4] (batchId=109)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_6] (batchId=101)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=120)
[jira] [Assigned] (HIVE-11297) Combine op trees for partition info generating tasks [Spark branch]
[ https://issues.apache.org/jira/browse/HIVE-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

liyunzhang_intel reassigned HIVE-11297:
---------------------------------------
    Assignee: liyunzhang_intel

> Combine op trees for partition info generating tasks [Spark branch]
> -------------------------------------------------------------------
>
>                 Key: HIVE-11297
>                 URL: https://issues.apache.org/jira/browse/HIVE-11297
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: spark-branch
>            Reporter: Chao Sun
>            Assignee: liyunzhang_intel
>
>
> Currently, for dynamic partition pruning in Spark, if a small table generates partition info for more than one partition column, multiple operator trees are created, which all start from the same table scan op but have different spark partition pruning sinks.
> As an optimization, we can combine these op trees so we don't have to scan the table multiple times.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16422) Should kill running Spark Jobs when a query is cancelled.
[ https://issues.apache.org/jira/browse/HIVE-16422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhihai xu updated HIVE-16422:
-----------------------------
    Attachment: HIVE-16422.000.txt

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16422) Should kill running Spark Jobs when a query is cancelled.
[ https://issues.apache.org/jira/browse/HIVE-16422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhihai xu updated HIVE-16422:
-----------------------------
    Status: Patch Available (was: Open)

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16422) Should kill running Spark Jobs when a query is cancelled.
[ https://issues.apache.org/jira/browse/HIVE-16422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhihai xu reassigned HIVE-16422:
--------------------------------

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-12614) RESET command does not close spark session
[ https://issues.apache.org/jira/browse/HIVE-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-12614: Attachment: HIVE-12614.2.patch Adding unit tests > RESET command does not close spark session > -- > > Key: HIVE-12614 > URL: https://issues.apache.org/jira/browse/HIVE-12614 > Project: Hive > Issue Type: Bug > Components: Spark >Affects Versions: 1.3.0, 2.1.0 >Reporter: Nemon Lou >Assignee: Sahil Takiar >Priority: Minor > Attachments: HIVE-12614.1.patch, HIVE-12614.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16419) Exclude hadoop related classes for JDBC standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965262#comment-15965262 ] Gopal V commented on HIVE-16419: [~taoli-hwx]: does this patch make the -standalone jar need additional jars to work as a JDBC driver? > Exclude hadoop related classes for JDBC standalone jar > -- > > Key: HIVE-16419 > URL: https://issues.apache.org/jira/browse/HIVE-16419 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Tao Li >Assignee: Tao Li >Priority: Blocker > Attachments: HIVE-16419.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16193) Hive show compactions not reflecting the status of the application
[ https://issues.apache.org/jira/browse/HIVE-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965261#comment-15965261 ] Hive QA commented on HIVE-16193: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12862943/HIVE-16193.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10570 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_7] (batchId=234) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4647/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4647/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4647/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12862943 - PreCommit-HIVE-Build > Hive show compactions not reflecting the status of the application > -- > > Key: HIVE-16193 > URL: https://issues.apache.org/jira/browse/HIVE-16193 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Kavan Suresh >Assignee: Wei Zheng > Attachments: HIVE-16193.1.patch, HIVE-16193.2.patch > > > In a test for [HIVE-13354|https://issues.apache.org/jira/browse/HIVE-13354], > we set properties to make the compaction fail. Recently show compactions > indicates that compactions have been succeeding on the tables though the > corresponding application gets killed as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16285) Servlet for dynamically configuring log levels
[ https://issues.apache.org/jira/browse/HIVE-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965225#comment-15965225 ] Hive QA commented on HIVE-16285: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12862927/HIVE-16285.5.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10570 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[router_join_ppr] (batchId=75) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[update_all_non_partitioned] (batchId=7) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=143) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4646/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4646/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4646/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12862927 - PreCommit-HIVE-Build > Servlet for dynamically configuring log levels > -- > > Key: HIVE-16285 > URL: https://issues.apache.org/jira/browse/HIVE-16285 > Project: Hive > Issue Type: Improvement > Components: Logging >Affects Versions: 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-16285.1.patch, HIVE-16285.2.patch, > HIVE-16285.3.patch, HIVE-16285.4.patch, HIVE-16285.5.patch > > > Many long running services like HS2, LLAP etc. 
will benefit from having an > endpoint to dynamically change log levels for various loggers. This will help > greatly with debuggability without requiring a restart of the service. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16239) remove useless hiveserver
[ https://issues.apache.org/jira/browse/HIVE-16239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-16239: Resolution: Fixed Status: Resolved (was: Patch Available) Committed to branch-2.0 and branch-2.1. Thanks [~ferhui] for the contribution. > remove useless hiveserver > - > > Key: HIVE-16239 > URL: https://issues.apache.org/jira/browse/HIVE-16239 > Project: Hive > Issue Type: Bug > Components: CLI >Affects Versions: 2.0.1, 2.1.1 >Reporter: Fei Hui >Assignee: Fei Hui > Attachments: HIVE-16239.1-branch-2.0.patch, > HIVE-16239.1-branch-2.1.patch, HIVE-16239.2-branch-2.0.patch, > HIVE-16239.2-branch-2.1.patch > > > {quote} > [hadoop@header hive]$ hive --service hiveserver > Starting Hive Thrift Server > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/apps/apache-hive-2.0.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/apps/spark-1.6.2-bin-hadoop2.7/lib/spark-assembly-1.6.2-hadoop2.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/apps/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. 
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Exception in thread "main" java.lang.ClassNotFoundException: > org.apache.hadoop.hive.service.HiveServer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:348) > at org.apache.hadoop.util.RunJar.run(RunJar.java:214) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {quote} > hiveserver does not exist, we should remove hiveserver from cli on branch-2.0 > After removing it, we get useful message > {quote} > Service hiveserver not found > Available Services: beeline cli hbaseimport hbaseschematool help > hiveburninclient hiveserver2 hplsql hwi jar lineage llap metastore metatool > orcfiledump rcfilecat schemaTool version > {quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-15441) Provide a config to timeout long compiling queries
[ https://issues.apache.org/jira/browse/HIVE-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965180#comment-15965180 ] Hive QA commented on HIVE-15441: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12843512/HIVE-15441.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4645/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4645/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4645/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-04-12 00:11:24.976 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-4645/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-04-12 00:11:24.979 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 3bf477a HIVE-16403 : LLAP UI shows the wrong number of executors (Sergey Shelukhin, reviewed by Gopal Vijayaraghavan) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 3bf477a HIVE-16403 : LLAP UI shows the wrong number of executors (Sergey Shelukhin, reviewed by Gopal Vijayaraghavan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-04-12 00:11:25.618 + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/Driver.java:429 error: ql/src/java/org/apache/hadoop/hive/ql/Driver.java: patch does not apply The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12843512 - PreCommit-HIVE-Build > Provide a config to timeout long compiling queries > -- > > Key: HIVE-15441 > URL: https://issues.apache.org/jira/browse/HIVE-15441 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Reporter: Chao Sun >Assignee: Chao Sun > Attachments: HIVE-15441.1.patch > > > Sometimes Hive users have long compiling queries which may need to scan > thousands or even more partitions (perhaps by accident). The compilation > process may take a very long time, especially in {{getInputSummary}} where it > need to make NN calls to get info about each input path. > This is bad because it may block many other queries. 
Parallel compilation may > be useful but still {{getInputSummary}} has a global lock. In this case, it > makes sense to provide Hive admin with a config to put a timeout limit for > compilation, so that these "bad" queries can be blocked. > Note https://issues.apache.org/jira/browse/HIVE-12431 also tries to address > similar issue. However it cancels those queries that are waiting for the > compile lock, which I think is not so useful for our case since the *query > under compile is the one to be blamed.* -- This message was sent by Atlassian JIRA (v6.3.15#6346)
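The timeout HIVE-15441 asks for can be sketched with a bounded Future.get: run compilation on a worker thread and give up (interrupting it) after a configured limit, so one slow query cannot hold the compile lock indefinitely. This is an illustration of the idea only, not the actual Hive patch; the class name and error handling are invented for the example.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of a compile-timeout guard (illustrative; not Hive's Driver code).
class CompileWithTimeout {
    // Runs compileStep on a worker thread; cancels it if it exceeds timeoutMs.
    static String compile(ExecutorService pool, Callable<String> compileStep,
                          long timeoutMs) throws Exception {
        Future<String> f = pool.submit(compileStep);
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException te) {
            f.cancel(true);  // interrupt the compiling thread so it stops holding locks
            throw new RuntimeException("compilation timed out after " + timeoutMs + " ms");
        }
    }
}
```

Note the crucial detail the issue points at: the query *under* compilation is the one cancelled, unlike HIVE-12431, which cancels the queries waiting on the compile lock.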
[jira] [Commented] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965179#comment-15965179 ] Hive QA commented on HIVE-16387: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12862921/HIVE-16387.03.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 10552 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_select] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fold_case] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[literal_decimal] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udtf_stack] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_null_projection] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_nvl] (batchId=68) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=143) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_null_projection] (batchId=143) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_nvl] (batchId=155) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true] (batchId=88) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_count_distinct] (batchId=109) org.apache.hive.hcatalog.api.TestHCatClient.org.apache.hive.hcatalog.api.TestHCatClient (batchId=175) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4644/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4644/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4644/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 12 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12862921 - PreCommit-HIVE-Build > Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData > --- > > Key: HIVE-16387 > URL: https://issues.apache.org/jira/browse/HIVE-16387 > Project: Hive > Issue Type: Sub-task >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-16387.01.patch, HIVE-16387.02.patch, > HIVE-16387.03.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-11133) Support hive.explain.user for Spark [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-11133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-11133: Attachment: HIVE-11133.2.patch Actually enables {{hive.explain.user}} for Spark. Had to do some minor refactoring to get it to work for Spark. > Support hive.explain.user for Spark [Spark Branch] > -- > > Key: HIVE-11133 > URL: https://issues.apache.org/jira/browse/HIVE-11133 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Mohit Sabharwal >Assignee: Sahil Takiar > Attachments: HIVE-11133.1.patch, HIVE-11133.2.patch > > > User friendly explain output ({{set hive.explain.user=true}}) should support > Spark as well. > Once supported, we should also enable related q-tests like {{explainuser_1.q}} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16193) Hive show compactions not reflecting the status of the application
[ https://issues.apache.org/jira/browse/HIVE-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-16193: - Status: Patch Available (was: Open) > Hive show compactions not reflecting the status of the application > -- > > Key: HIVE-16193 > URL: https://issues.apache.org/jira/browse/HIVE-16193 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Kavan Suresh >Assignee: Wei Zheng > Attachments: HIVE-16193.1.patch, HIVE-16193.2.patch > > > In a test for [HIVE-13354|https://issues.apache.org/jira/browse/HIVE-13354], > we set properties to make the compaction fail. Recently show compactions > indicates that compactions have been succeeding on the tables though the > corresponding application gets killed as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16193) Hive show compactions not reflecting the status of the application
[ https://issues.apache.org/jira/browse/HIVE-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965159#comment-15965159 ] Wei Zheng commented on HIVE-16193: -- patch 2 removed the blocking call. I will see if I can add a test for it. > Hive show compactions not reflecting the status of the application > -- > > Key: HIVE-16193 > URL: https://issues.apache.org/jira/browse/HIVE-16193 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Kavan Suresh >Assignee: Wei Zheng > Attachments: HIVE-16193.1.patch, HIVE-16193.2.patch > > > In a test for [HIVE-13354|https://issues.apache.org/jira/browse/HIVE-13354], > we set properties to make the compaction fail. Recently show compactions > indicates that compactions have been succeeding on the tables though the > corresponding application gets killed as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16193) Hive show compactions not reflecting the status of the application
[ https://issues.apache.org/jira/browse/HIVE-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-16193: - Attachment: HIVE-16193.2.patch > Hive show compactions not reflecting the status of the application > -- > > Key: HIVE-16193 > URL: https://issues.apache.org/jira/browse/HIVE-16193 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Kavan Suresh >Assignee: Wei Zheng > Attachments: HIVE-16193.1.patch, HIVE-16193.2.patch > > > In a test for [HIVE-13354|https://issues.apache.org/jira/browse/HIVE-13354], > we set properties to make the compaction fail. Recently show compactions > indicates that compactions have been succeeding on the tables though the > corresponding application gets killed as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16421) Runtime filtering breaks user-level explain
[ https://issues.apache.org/jira/browse/HIVE-16421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-16421: Description: Query: SELECT LAG(COALESCE(t2.int_col_14, t1.int_col_80),22) OVER (ORDER BY t1.tinyint_col_52 DESC) AS int_col FROM table_6 t1 INNER JOIN table_14 t2 ON ((t2.decimal0101_col_55) = (t1.decimal0101_col_9)); Without runtime filtering +-+--+ | Explain | +-+--+ | Plan not optimized by CBO. | | | | Vertex dependency in root stage | | Map 1 <- Map 3 (BROADCAST_EDGE) | | Reducer 2 <- Map 1 (SIMPLE_EDGE) | | | | Stage-0 | |Fetch Operator | | limit:-1 | | Stage-1 | | Reducer 2 | | File Output Operator [FS_364] | | compressed:false | | Statistics:Num rows: 74781721 Data size: 299126884 Basic stats: COMPLETE Column stats: COMPLETE | | table:{"input format:":"org.apache.hadoop.mapred.TextInputFormat","output format:":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat","serde:":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"} | | Select Operator [SEL_362] | |outputColumnNames:["_col0"]
[jira] [Commented] (HIVE-15986) Support "is [not] distinct from"
[ https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965104#comment-15965104 ] Hive QA commented on HIVE-15986: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12862920/HIVE-15986.5.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10571 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[columnstats_part_coltype] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=143) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4643/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4643/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4643/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12862920 - PreCommit-HIVE-Build > Support "is [not] distinct from" > > > Key: HIVE-15986 > URL: https://issues.apache.org/jira/browse/HIVE-15986 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Vineet Garg > Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch, > HIVE-15986.3.patch, HIVE-15986.4.patch, HIVE-15986.5.patch > > > Support standard "is [not] distinct from" syntax. For example this gives a > standard way to do a comparison to null safe join: select * from t1 join t2 > on t1.x is not distinct from t2.y. 
SQL standard reference Section 8.15 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16421) Runtime filtering breaks user-level explain
[ https://issues.apache.org/jira/browse/HIVE-16421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong reassigned HIVE-16421: -- > Runtime filtering breaks user-level explain > --- > > Key: HIVE-16421 > URL: https://issues.apache.org/jira/browse/HIVE-16421 > Project: Hive > Issue Type: Bug >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16285) Servlet for dynamically configuring log levels
[ https://issues.apache.org/jira/browse/HIVE-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965038#comment-15965038 ] Hive QA commented on HIVE-16285: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12862918/HIVE-16285.4.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10570 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4642/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4642/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4642/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12862918 - PreCommit-HIVE-Build > Servlet for dynamically configuring log levels > -- > > Key: HIVE-16285 > URL: https://issues.apache.org/jira/browse/HIVE-16285 > Project: Hive > Issue Type: Improvement > Components: Logging >Affects Versions: 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-16285.1.patch, HIVE-16285.2.patch, > HIVE-16285.3.patch, HIVE-16285.4.patch, HIVE-16285.5.patch > > > Many long running services like HS2, LLAP etc. will benefit from having an > endpoint to dynamically change log levels for various loggers. This will help > greatly with debuggability without requiring a restart of the service. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16285) Servlet for dynamically configuring log levels
[ https://issues.apache.org/jira/browse/HIVE-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-16285: - Attachment: HIVE-16285.5.patch Minor changes to http headers. > Servlet for dynamically configuring log levels > -- > > Key: HIVE-16285 > URL: https://issues.apache.org/jira/browse/HIVE-16285 > Project: Hive > Issue Type: Improvement > Components: Logging >Affects Versions: 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-16285.1.patch, HIVE-16285.2.patch, > HIVE-16285.3.patch, HIVE-16285.4.patch, HIVE-16285.5.patch > > > Many long running services like HS2, LLAP etc. will benefit from having an > endpoint to dynamically change log levels for various loggers. This will help > greatly with debuggability without requiring a restart of the service. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
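The servlet's core operation, changing a logger's level at runtime, takes only a few lines. The sketch below uses java.util.logging so it stays self-contained; Hive itself uses Log4j2, where org.apache.logging.log4j.core.config.Configurator.setLevel plays this role, and the class name here is invented for illustration.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Minimal sketch of the idea behind HIVE-16285: an HTTP endpoint would
// parse "logger" and "level" parameters from the request and apply them
// at runtime, with no service restart. Illustrative names, not Hive's.
class LogLevelEndpoint {
    // Returns the previous level so a caller can restore it later.
    static Level setLevel(String loggerName, String levelName) {
        Logger logger = Logger.getLogger(loggerName);
        Level previous = logger.getLevel();
        logger.setLevel(Level.parse(levelName));  // e.g. "FINE", "WARNING"
        return previous;
    }
}
```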
[jira] [Updated] (HIVE-16328) HoS: more aggressive mapjoin optimization when hive.spark.use.file.size.for.mapjoin is true
[ https://issues.apache.org/jira/browse/HIVE-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HIVE-16328: Description: In HIVE-15489, when {{hive.spark.use.ts.stats.for.mapjoin}} is set to true, and if the JOIN op has any upstream RS operator, then we will stop converting the JOIN op to MAPJOIN op. However, this is overly conservative. A better solution is to treat the branch that has upstream RS as the big table and check if all other branches are map-only AND can fit in hash table size. was: In HIVE-15489, when {{hive.spark.use.file.size.for.mapjoin}} is set to true, and if the JOIN op has any upstream RS operator, then we will stop converting the JOIN op to MAPJOIN op. However, this is overly conservative. A better solution is to treat the branch that has upstream RS as the big table and check if all other branches are map-only AND can fit in hash table size. > HoS: more aggressive mapjoin optimization when > hive.spark.use.file.size.for.mapjoin is true > --- > > Key: HIVE-16328 > URL: https://issues.apache.org/jira/browse/HIVE-16328 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Reporter: Chao Sun >Assignee: Chao Sun > Attachments: HIVE-16328.1.patch, HIVE-16328.2.patch > > > In HIVE-15489, when {{hive.spark.use.ts.stats.for.mapjoin}} is set to true, > and if the JOIN op has any upstream RS operator, then we will stop converting > the JOIN op to MAPJOIN op. > However, this is overly conservative. A better solution is to treat the > branch that has upstream RS as the big table and check if all other branches > are map-only AND can fit in hash table size. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16385) StatsNoJobTask could exit early before all partitions have been processed
[ https://issues.apache.org/jira/browse/HIVE-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965008#comment-15965008 ] Chao Sun commented on HIVE-16385: - > Chao Sun, do you intend to commit this to branch-2.3, branch-2.2, branch-2.1, > and branch-2.0? If not, please change the fix version to 3.0.0 (master). > Thanks. Oops. No that's not what I intended. Fixed. Thanks! > StatsNoJobTask could exit early before all partitions have been processed > - > > Key: HIVE-16385 > URL: https://issues.apache.org/jira/browse/HIVE-16385 > Project: Hive > Issue Type: Bug > Components: Statistics >Reporter: Chao Sun >Assignee: Chao Sun > Fix For: 3.0.0 > > Attachments: HIVE-16385.1.patch > > > For a partitioned table, the class {{StatsNoJobTask}} is supposed to launch > threads for all partitions and compute their stats. However, it could exit > early after at most 100 seconds: > {code} > private void shutdownAndAwaitTermination(ExecutorService threadPool) { > // Disable new tasks from being submitted > threadPool.shutdown(); > try { > // Wait a while for existing tasks to terminate > if (!threadPool.awaitTermination(100, TimeUnit.SECONDS)) { > // Cancel currently executing tasks > threadPool.shutdownNow(); > // Wait a while for tasks to respond to being cancelled > if (!threadPool.awaitTermination(100, TimeUnit.SECONDS)) { > LOG.debug("Stats collection thread pool did not terminate"); > } > } > } catch (InterruptedException ie) { > // Cancel again if current thread also interrupted > threadPool.shutdownNow(); > // Preserve interrupt status > Thread.currentThread().interrupt(); > } > } > {code} > The {{shutdown}} call does not wait for all submitted tasks to complete, and > the {{awaitTermination}} call waits at most 100 seconds. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HIVE-16385) StatsNoJobTask could exit early before all partitions have been processed
[ https://issues.apache.org/jira/browse/HIVE-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965008#comment-15965008 ] Chao Sun edited comment on HIVE-16385 at 4/11/17 9:45 PM: -- {quote} Chao Sun, do you intend to commit this to branch-2.3, branch-2.2, branch-2.1, and branch-2.0? If not, please change the fix version to 3.0.0 (master). Thanks. {quote} Oops. No that's not what I intended. Fixed. Thanks! was (Author: csun): > Chao Sun, do you intend to commit this to branch-2.3, branch-2.2, branch-2.1, > and branch-2.0? If not, please change the fix version to 3.0.0 (master). > Thanks. Oops. No that's not what I intended. Fixed. Thanks! > StatsNoJobTask could exit early before all partitions have been processed > - > > Key: HIVE-16385 > URL: https://issues.apache.org/jira/browse/HIVE-16385 > Project: Hive > Issue Type: Bug > Components: Statistics >Reporter: Chao Sun >Assignee: Chao Sun > Fix For: 3.0.0 > > Attachments: HIVE-16385.1.patch > > > For a partitioned table, the class {{StatsNoJobTask}} is supposed to launch > threads for all partitions and compute their stats. 
However, it could exit > early after at most 100 seconds: > {code} > private void shutdownAndAwaitTermination(ExecutorService threadPool) { > // Disable new tasks from being submitted > threadPool.shutdown(); > try { > // Wait a while for existing tasks to terminate > if (!threadPool.awaitTermination(100, TimeUnit.SECONDS)) { > // Cancel currently executing tasks > threadPool.shutdownNow(); > // Wait a while for tasks to respond to being cancelled > if (!threadPool.awaitTermination(100, TimeUnit.SECONDS)) { > LOG.debug("Stats collection thread pool did not terminate"); > } > } > } catch (InterruptedException ie) { > // Cancel again if current thread also interrupted > threadPool.shutdownNow(); > // Preserve interrupt status > Thread.currentThread().interrupt(); > } > } > {code} > The {{shutdown}} call does not wait for all submitted tasks to complete, and > the {{awaitTermination}} call waits at most 100 seconds. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16193) Hive show compactions not reflecting the status of the application
[ https://issues.apache.org/jira/browse/HIVE-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965004#comment-15965004 ] Eugene Koifman commented on HIVE-16193: --- seems to me like this will revert HIVE-15851 > Hive show compactions not reflecting the status of the application > -- > > Key: HIVE-16193 > URL: https://issues.apache.org/jira/browse/HIVE-16193 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Kavan Suresh >Assignee: Wei Zheng > Attachments: HIVE-16193.1.patch > > > In a test for [HIVE-13354|https://issues.apache.org/jira/browse/HIVE-13354], > we set properties to make the compaction fail. Recently show compactions > indicates that compactions have been succeeding on the tables though the > corresponding application gets killed as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16385) StatsNoJobTask could exit early before all partitions have been processed
[ https://issues.apache.org/jira/browse/HIVE-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HIVE-16385: Fix Version/s: (was: 2.3.0) (was: 2.1.2) (was: 2.0.2) (was: 2.2.0) 3.0.0 > StatsNoJobTask could exit early before all partitions have been processed > - > > Key: HIVE-16385 > URL: https://issues.apache.org/jira/browse/HIVE-16385 > Project: Hive > Issue Type: Bug > Components: Statistics >Reporter: Chao Sun >Assignee: Chao Sun > Fix For: 3.0.0 > > Attachments: HIVE-16385.1.patch > > > For a partitioned table, the class {{StatsNoJobTask}} is supposed to launch > threads for all partitions and compute their stats. However, it could exit > early after at most 100 seconds: > {code} > private void shutdownAndAwaitTermination(ExecutorService threadPool) { > // Disable new tasks from being submitted > threadPool.shutdown(); > try { > // Wait a while for existing tasks to terminate > if (!threadPool.awaitTermination(100, TimeUnit.SECONDS)) { > // Cancel currently executing tasks > threadPool.shutdownNow(); > // Wait a while for tasks to respond to being cancelled > if (!threadPool.awaitTermination(100, TimeUnit.SECONDS)) { > LOG.debug("Stats collection thread pool did not terminate"); > } > } > } catch (InterruptedException ie) { > // Cancel again if current thread also interrupted > threadPool.shutdownNow(); > // Preserve interrupt status > Thread.currentThread().interrupt(); > } > } > {code} > The {{shutdown}} call does not wait for all submitted tasks to complete, and > the {{awaitTermination}} call waits at most 100 seconds. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16415) Add blobstore tests for insertion of zero rows
[ https://issues.apache.org/jira/browse/HIVE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964989#comment-15964989 ] Thomas Poepping commented on HIVE-16415: Seems to me that this case, while similar, is different enough to warrant its own jira. My suggestion: a follow-up to HIVE-14519 that fixes what is still broken, and creates tests for that. Agree? I can create that jira if you like, but I don't foresee myself being able to work on it anytime soon. > Add blobstore tests for insertion of zero rows > -- > > Key: HIVE-16415 > URL: https://issues.apache.org/jira/browse/HIVE-16415 > Project: Hive > Issue Type: Test > Components: Tests >Affects Versions: 2.1.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Attachments: HIVE-16415.patch > > > This patch introduces two regression tests into the hive-blobstore qtest > module: zero_rows_hdfs.q and zero_rows_blobstore.q. These test doing INSERT > commands with a WHERE clause where the condition of the WHERE clause causes > zero rows to be considered. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16328) HoS: more aggressive mapjoin optimization when hive.spark.use.file.size.for.mapjoin is true
[ https://issues.apache.org/jira/browse/HIVE-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964986#comment-15964986 ] Xuefu Zhang commented on HIVE-16328: +1 > HoS: more aggressive mapjoin optimization when > hive.spark.use.file.size.for.mapjoin is true > --- > > Key: HIVE-16328 > URL: https://issues.apache.org/jira/browse/HIVE-16328 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Reporter: Chao Sun >Assignee: Chao Sun > Attachments: HIVE-16328.1.patch, HIVE-16328.2.patch > > > In HIVE-15489, when {{hive.spark.use.file.size.for.mapjoin}} is set to true, > and if the JOIN op has any upstream RS operator, then we will stop converting > the JOIN op to MAPJOIN op. > However, this is overly conservative. A better solution is to treat the > branch that has upstream RS as the big table and check if all other branches > are map-only AND can fit in hash table size. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
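The proposed rule — treat the branch with an upstream RS as the big table and convert only when every other branch is map-only and fits the hash-table budget — can be sketched roughly as follows; the {{Branch}} type and all names here are hypothetical stand-ins, not Hive's actual operator classes:

```java
import java.util.List;

// Illustrative decision sketch for the rule described above; not Hive's
// actual map-join optimizer internals.
public class MapJoinRule {
    static final class Branch {
        final boolean hasUpstreamReduceSink; // fed by a shuffle (RS) stage
        final boolean mapOnly;               // whole branch runs map-side
        final long sizeBytes;                // estimated data size
        Branch(boolean rs, boolean mapOnly, long sizeBytes) {
            this.hasUpstreamReduceSink = rs;
            this.mapOnly = mapOnly;
            this.sizeBytes = sizeBytes;
        }
    }

    /** Index of the big-table branch, or -1 when conversion must be skipped. */
    static int chooseBigTable(List<Branch> branches, long hashTableBudget) {
        int big = -1;
        for (int i = 0; i < branches.size(); i++) {
            if (branches.get(i).hasUpstreamReduceSink) {
                if (big >= 0) return -1; // two shuffled branches: cannot convert
                big = i;                 // the RS branch becomes the big table
            }
        }
        if (big < 0) return -1; // no RS branch: handled by the existing rule
        for (int i = 0; i < branches.size(); i++) {
            if (i == big) continue;
            Branch small = branches.get(i);
            // every small-table branch must be map-only AND fit in memory
            if (!small.mapOnly || small.sizeBytes > hashTableBudget) return -1;
        }
        return big;
    }

    public static void main(String[] args) {
        List<Branch> join = List.of(new Branch(true, false, 0),
                                    new Branch(false, true, 100));
        System.out.println(chooseBigTable(join, 1000)); // branch 0 is big table
    }
}
```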
[jira] [Commented] (HIVE-16415) Add blobstore tests for insertion of zero rows
[ https://issues.apache.org/jira/browse/HIVE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964983#comment-15964983 ] Ashutosh Chauhan commented on HIVE-16415: - I am suggesting that the scenarios you are adding tests for are still buggy and need a fix. If your goal is to add tests so that you catch bugs and fix those, then this bug needs to be dealt with (either in this jira or a new one). If your goal is only to add tests for known working scenarios, then what you have in the current patch is good enough. > Add blobstore tests for insertion of zero rows > -- > > Key: HIVE-16415 > URL: https://issues.apache.org/jira/browse/HIVE-16415 > Project: Hive > Issue Type: Test > Components: Tests >Affects Versions: 2.1.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Attachments: HIVE-16415.patch > > > This patch introduces two regression tests into the hive-blobstore qtest > module: zero_rows_hdfs.q and zero_rows_blobstore.q. These test doing INSERT > commands with a WHERE clause where the condition of the WHERE clause causes > zero rows to be considered. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-16387: --- Status: Patch Available (was: Open) > Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData > --- > > Key: HIVE-16387 > URL: https://issues.apache.org/jira/browse/HIVE-16387 > Project: Hive > Issue Type: Sub-task >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-16387.01.patch, HIVE-16387.02.patch, > HIVE-16387.03.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-16387: --- Attachment: HIVE-16387.03.patch > Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData > --- > > Key: HIVE-16387 > URL: https://issues.apache.org/jira/browse/HIVE-16387 > Project: Hive > Issue Type: Sub-task >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-16387.01.patch, HIVE-16387.02.patch, > HIVE-16387.03.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16411) Revert HIVE-15199
[ https://issues.apache.org/jira/browse/HIVE-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964974#comment-15964974 ] Ashutosh Chauhan commented on HIVE-16411: - Thanks for taking this up. > Revert HIVE-15199 > - > > Key: HIVE-16411 > URL: https://issues.apache.org/jira/browse/HIVE-16411 > Project: Hive > Issue Type: Task >Reporter: Ashutosh Chauhan >Assignee: Sahil Takiar >Priority: Blocker > > No longer required after HIVE-16402 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HIVE-15986) Support "is [not] distinct from"
[ https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964972#comment-15964972 ] Vineet Garg edited comment on HIVE-15986 at 4/11/17 9:17 PM: - {{HIVE-15986.5.patch}} rewrites {{is distinct from}} into {{not <=>}} instead of introducing new tokens and UDFs. Thanks to [~pxiong] for providing grammar patch. was (Author: vgarg): {{HIVE-15986.5.patch}} rewrites {{is distinct from}} into {{ not <=> }} instead of introducing new tokens and UDFs. Thanks to [~pxiong] for providing grammar patch. > Support "is [not] distinct from" > > > Key: HIVE-15986 > URL: https://issues.apache.org/jira/browse/HIVE-15986 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Vineet Garg > Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch, > HIVE-15986.3.patch, HIVE-15986.4.patch, HIVE-15986.5.patch > > > Support standard "is [not] distinct from" syntax. For example this gives a > standard way to do a comparison to null safe join: select * from t1 join t2 > on t1.x is not distinct from t2.y. SQL standard reference Section 8.15 -- This message was sent by Atlassian JIRA (v6.3.15#6346)

[jira] [Updated] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-16387: --- Status: Open (was: Patch Available) > Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData > --- > > Key: HIVE-16387 > URL: https://issues.apache.org/jira/browse/HIVE-16387 > Project: Hive > Issue Type: Sub-task >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-16387.01.patch, HIVE-16387.02.patch, > HIVE-16387.03.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15986) Support "is [not] distinct from"
[ https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-15986: --- Status: Patch Available (was: Open) > Support "is [not] distinct from" > > > Key: HIVE-15986 > URL: https://issues.apache.org/jira/browse/HIVE-15986 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Vineet Garg > Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch, > HIVE-15986.3.patch, HIVE-15986.4.patch, HIVE-15986.5.patch > > > Support standard "is [not] distinct from" syntax. For example this gives a > standard way to do a comparison to null safe join: select * from t1 join t2 > on t1.x is not distinct from t2.y. SQL standard reference Section 8.15 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15986) Support "is [not] distinct from"
[ https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-15986: --- Attachment: HIVE-15986.5.patch {{HIVE-15986.5.patch}} rewrites {{is distinct from}} into {{ not <=> }} instead of introducing new tokens and UDFs. Thanks to [~pxiong] for providing grammar patch. > Support "is [not] distinct from" > > > Key: HIVE-15986 > URL: https://issues.apache.org/jira/browse/HIVE-15986 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Vineet Garg > Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch, > HIVE-15986.3.patch, HIVE-15986.4.patch, HIVE-15986.5.patch > > > Support standard "is [not] distinct from" syntax. For example this gives a > standard way to do a comparison to null safe join: select * from t1 join t2 > on t1.x is not distinct from t2.y. SQL standard reference Section 8.15 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
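The rewrite above works because {{a is distinct from b}} is equivalent to {{not (a <=> b)}}, where {{<=>}} is null-safe equality. A small sketch of those semantics (method names are illustrative, not the actual UDF classes):

```java
import java.util.Objects;

// Sketch of the semantics behind the rewrite: `a IS DISTINCT FROM b`
// is equivalent to `NOT (a <=> b)`.
public class DistinctFrom {
    // `<=>`: true when both operands are NULL, false when exactly one is
    // NULL, ordinary equality otherwise — it never yields NULL itself.
    static boolean nullSafeEquals(Object a, Object b) {
        return Objects.equals(a, b);
    }

    // `IS DISTINCT FROM` is simply the negation, which is why the patch can
    // reuse `<=>` instead of introducing new tokens and UDFs.
    static boolean isDistinctFrom(Object a, Object b) {
        return !nullSafeEquals(a, b);
    }

    public static void main(String[] args) {
        System.out.println(isDistinctFrom(null, 1)); // NULL is distinct from 1
    }
}
```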
[jira] [Commented] (HIVE-16415) Add blobstore tests for insertion of zero rows
[ https://issues.apache.org/jira/browse/HIVE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964971#comment-15964971 ] Thomas Poepping commented on HIVE-16415: [~ashutoshc] what is your suggestion? Should we edit this test to also cover this case? > Add blobstore tests for insertion of zero rows > -- > > Key: HIVE-16415 > URL: https://issues.apache.org/jira/browse/HIVE-16415 > Project: Hive > Issue Type: Test > Components: Tests >Affects Versions: 2.1.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Attachments: HIVE-16415.patch > > > This patch introduces two regression tests into the hive-blobstore qtest > module: zero_rows_hdfs.q and zero_rows_blobstore.q. These test doing INSERT > commands with a WHERE clause where the condition of the WHERE clause causes > zero rows to be considered. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15986) Support "is [not] distinct from"
[ https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-15986: --- Status: Open (was: Patch Available) > Support "is [not] distinct from" > > > Key: HIVE-15986 > URL: https://issues.apache.org/jira/browse/HIVE-15986 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Vineet Garg > Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch, > HIVE-15986.3.patch, HIVE-15986.4.patch > > > Support standard "is [not] distinct from" syntax. For example this gives a > standard way to do a comparison to null safe join: select * from t1 join t2 > on t1.x is not distinct from t2.y. SQL standard reference Section 8.15 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16411) Revert HIVE-15199
[ https://issues.apache.org/jira/browse/HIVE-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964967#comment-15964967 ] Sahil Takiar commented on HIVE-16411: - I can take this up. I don't think it should be a straight {{git revert}} of HIVE-15199, though; there were some useful updates to some qtests in that JIRA. I will revert some of the additional logic added to {{Hive#mvFile}} since it's no longer necessary. > Revert HIVE-15199 > - > > Key: HIVE-16411 > URL: https://issues.apache.org/jira/browse/HIVE-16411 > Project: Hive > Issue Type: Task >Reporter: Ashutosh Chauhan >Priority: Blocker > > No longer required after HIVE-16402 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16411) Revert HIVE-15199
[ https://issues.apache.org/jira/browse/HIVE-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar reassigned HIVE-16411: --- Assignee: Sahil Takiar > Revert HIVE-15199 > - > > Key: HIVE-16411 > URL: https://issues.apache.org/jira/browse/HIVE-16411 > Project: Hive > Issue Type: Task >Reporter: Ashutosh Chauhan >Assignee: Sahil Takiar >Priority: Blocker > > No longer required after HIVE-16402 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16285) Servlet for dynamically configuring log levels
[ https://issues.apache.org/jira/browse/HIVE-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-16285: - Attachment: HIVE-16285.4.patch Addressed [~gopalv]'s review comments. Http POST is now used for configuring log4j2. I have tested this only with curl (examples in javadoc) right now. I will create a follow up for configuring from the UI with form submission. > Servlet for dynamically configuring log levels > -- > > Key: HIVE-16285 > URL: https://issues.apache.org/jira/browse/HIVE-16285 > Project: Hive > Issue Type: Improvement > Components: Logging >Affects Versions: 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-16285.1.patch, HIVE-16285.2.patch, > HIVE-16285.3.patch, HIVE-16285.4.patch > > > Many long running services like HS2, LLAP etc. will benefit from having an > endpoint to dynamically change log levels for various loggers. This will help > greatly with debuggability without requiring a restart of the service. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16193) Hive show compactions not reflecting the status of the application
[ https://issues.apache.org/jira/browse/HIVE-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-16193: - Attachment: HIVE-16193.1.patch > Hive show compactions not reflecting the status of the application > -- > > Key: HIVE-16193 > URL: https://issues.apache.org/jira/browse/HIVE-16193 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Kavan Suresh >Assignee: Wei Zheng > Attachments: HIVE-16193.1.patch > > > In a test for [HIVE-13354|https://issues.apache.org/jira/browse/HIVE-13354], > we set properties to make the compaction fail. Recently show compactions > indicates that compactions have been succeeding on the tables though the > corresponding application gets killed as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16193) Hive show compactions not reflecting the status of the application
[ https://issues.apache.org/jira/browse/HIVE-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964949#comment-15964949 ] Wei Zheng commented on HIVE-16193: -- [~ekoifman] Can you review please? > Hive show compactions not reflecting the status of the application > -- > > Key: HIVE-16193 > URL: https://issues.apache.org/jira/browse/HIVE-16193 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Kavan Suresh >Assignee: Wei Zheng > Attachments: HIVE-16193.1.patch > > > In a test for [HIVE-13354|https://issues.apache.org/jira/browse/HIVE-13354], > we set properties to make the compaction fail. Recently show compactions > indicates that compactions have been succeeding on the tables though the > corresponding application gets killed as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16193) Hive show compactions not reflecting the status of the application
[ https://issues.apache.org/jira/browse/HIVE-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng reassigned HIVE-16193: Assignee: Wei Zheng (was: Eugene Koifman) > Hive show compactions not reflecting the status of the application > -- > > Key: HIVE-16193 > URL: https://issues.apache.org/jira/browse/HIVE-16193 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Kavan Suresh >Assignee: Wei Zheng > > In a test for [HIVE-13354|https://issues.apache.org/jira/browse/HIVE-13354], > we set properties to make the compaction fail. Recently show compactions > indicates that compactions have been succeeding on the tables though the > corresponding application gets killed as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16415) Add blobstore tests for insertion of zero rows
[ https://issues.apache.org/jira/browse/HIVE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964940#comment-15964940 ] Ashutosh Chauhan commented on HIVE-16415: - I did a little bit more testing on this, taking an example query from HIVE-14519 and modifying it as follows: {code} From (select * from src) a insert overwrite directory '/tmp/emp/dir1/' select key, value insert overwrite directory '/tmp/emp/dir2/' select 'header' limit 0 insert overwrite directory '/tmp/emp/dir3/' select key, value where key = 100; {code} This gives incorrect results in master. All dirs end up with 0 rows instead of just dir2. I think the fix in HIVE-14519 is incomplete. > Add blobstore tests for insertion of zero rows > -- > > Key: HIVE-16415 > URL: https://issues.apache.org/jira/browse/HIVE-16415 > Project: Hive > Issue Type: Test > Components: Tests >Affects Versions: 2.1.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Attachments: HIVE-16415.patch > > > This patch introduces two regression tests into the hive-blobstore qtest > module: zero_rows_hdfs.q and zero_rows_blobstore.q. These test doing INSERT > commands with a WHERE clause where the condition of the WHERE clause causes > zero rows to be considered. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16415) Add blobstore tests for insertion of zero rows
[ https://issues.apache.org/jira/browse/HIVE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964925#comment-15964925 ] Sergio Peña commented on HIVE-16415: Got it. It was fixed then, but it's good to have this test. +1 > Add blobstore tests for insertion of zero rows > -- > > Key: HIVE-16415 > URL: https://issues.apache.org/jira/browse/HIVE-16415 > Project: Hive > Issue Type: Test > Components: Tests >Affects Versions: 2.1.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Attachments: HIVE-16415.patch > > > This patch introduces two regression tests into the hive-blobstore qtest > module: zero_rows_hdfs.q and zero_rows_blobstore.q. These test doing INSERT > commands with a WHERE clause where the condition of the WHERE clause causes > zero rows to be considered. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-15982) Support the width_bucket function
[ https://issues.apache.org/jira/browse/HIVE-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964872#comment-15964872 ] Sahil Takiar commented on HIVE-15982: - Thanks [~cartershanklin]. I'm basing the implementation largely on https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions214.htm and https://my.vertica.com/docs/7.1.x/HTML/Content/Authoring/SQLReferenceManual/Functions/Mathematical/WIDTH_BUCKET.htm - they both mention support for datetime, interval, timestamp, etc. - is that something we want to support too? > Support the width_bucket function > - > > Key: HIVE-15982 > URL: https://issues.apache.org/jira/browse/HIVE-15982 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Sahil Takiar > > Support width_bucket(wbo, wbb1, wbb2, wbc), which returns an integer > between 0 and wbc+1 by mapping wbo into the ith of the wbc equally sized buckets made by > dividing the range from wbb1 to wbb2 into equally sized regions. If wbo < wbb1, return 0; > if wbo > wbb2, return wbc+1. Reference: SQL standard section 4.4. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
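A hedged sketch of the ascending-range case, following the Oracle/Vertica semantics linked above (operands below the range map to 0, operands at or above the upper bound map to buckets + 1); this is an illustration, not the eventual Hive UDF, and descending ranges are omitted:

```java
// Sketch of standard WIDTH_BUCKET for an ascending range [low, high),
// split into `buckets` equally sized regions.
public class WidthBucket {
    static int widthBucket(double operand, double low, double high, int buckets) {
        if (buckets <= 0 || !(low < high)) {
            // descending ranges (low > high) are omitted from this sketch
            throw new IllegalArgumentException("need buckets > 0 and low < high");
        }
        if (operand < low) return 0;              // below the range
        if (operand >= high) return buckets + 1;  // at or above the range end
        // 1-based index of the equally sized bucket containing the operand
        return (int) (buckets * (operand - low) / (high - low)) + 1;
    }

    public static void main(String[] args) {
        System.out.println(widthBucket(5.35, 0.024, 100.0, 10)); // lands in bucket 1
    }
}
```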
[jira] [Assigned] (HIVE-15982) Support the width_bucket function
[ https://issues.apache.org/jira/browse/HIVE-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar reassigned HIVE-15982: --- Assignee: Sahil Takiar > Support the width_bucket function > - > > Key: HIVE-15982 > URL: https://issues.apache.org/jira/browse/HIVE-15982 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Sahil Takiar > > Support width_bucket(wbo, wbb1, wbb2, wbc), which returns an integer > between 0 and wbc+1 by mapping wbo into the ith of the wbc equally sized buckets made by > dividing the range from wbb1 to wbb2 into equally sized regions. If wbo < wbb1, return 0; > if wbo > wbb2, return wbc+1. Reference: SQL standard section 4.4. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16403) LLAP UI shows the wrong number of executors
[ https://issues.apache.org/jira/browse/HIVE-16403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-16403: Resolution: Fixed Fix Version/s: 3.0.0 2.3.0 2.2.0 Status: Resolved (was: Patch Available) Committed to 4 different branches... sigh > LLAP UI shows the wrong number of executors > --- > > Key: HIVE-16403 > URL: https://issues.apache.org/jira/browse/HIVE-16403 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.2.0, 2.3.0, 3.0.0 > > Attachments: HIVE-16403.patch > > > Queued tasks are added twice. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16390) LLAP IO should take job config into account; also LLAP config should load defaults
[ https://issues.apache.org/jira/browse/HIVE-16390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964834#comment-15964834 ] Sergey Shelukhin commented on HIVE-16390: - I don't think it should make a difference because we are only using it for ORC stuff pretty much (or other reader stuff). It's all local to one IO thread and specific to one reader/readerimpl. I looked through OrcConf.java and didn't see anything else that would need an override. Let me know if you see something. > LLAP IO should take job config into account; also LLAP config should load > defaults > -- > > Key: HIVE-16390 > URL: https://issues.apache.org/jira/browse/HIVE-16390 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Sergey Shelukhin > Attachments: HIVE-16390.patch > > > Ensure the config is used consistently with task-based execution by default; > the exceptions should be specific (settings we don't want overridden, like > zero-copy). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16307) add IO memory usage report to LLAP UI
[ https://issues.apache.org/jira/browse/HIVE-16307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-16307: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to master. Thanks for the review! I'll file a bug to do JSON conversion some day > add IO memory usage report to LLAP UI > - > > Key: HIVE-16307 > URL: https://issues.apache.org/jira/browse/HIVE-16307 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 3.0.0 > > Attachments: HIVE-16307.01.patch, HIVE-16307.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16419) Exclude hadoop related classes for JDBC standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964807#comment-15964807 ] Hive QA commented on HIVE-16419: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12862898/HIVE-16419.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10570 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=143) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4641/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4641/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4641/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12862898 - PreCommit-HIVE-Build > Exclude hadoop related classes for JDBC standalone jar > -- > > Key: HIVE-16419 > URL: https://issues.apache.org/jira/browse/HIVE-16419 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Tao Li >Assignee: Tao Li >Priority: Blocker > Attachments: HIVE-16419.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16387) Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
[ https://issues.apache.org/jira/browse/HIVE-16387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-16387: --- Issue Type: Sub-task (was: Bug) Parent: HIVE-11160 > Fix failing test org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData > --- > > Key: HIVE-16387 > URL: https://issues.apache.org/jira/browse/HIVE-16387 > Project: Hive > Issue Type: Sub-task >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-16387.01.patch, HIVE-16387.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16388) LLAP: Log rotation for daemon, history and gc files
[ https://issues.apache.org/jira/browse/HIVE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964740#comment-15964740 ] Prasanth Jayachandran commented on HIVE-16388: -- [~sseth] does the new changes look good to you? > LLAP: Log rotation for daemon, history and gc files > --- > > Key: HIVE-16388 > URL: https://issues.apache.org/jira/browse/HIVE-16388 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 3.0.0 >Reporter: Siddharth Seth >Assignee: Prasanth Jayachandran > Attachments: HIVE-16388.1.patch, HIVE-16388.2.patch > > > GC logs need to be rotated by date. > LLAP daemon history logs as well > Ideally, the daemon.out file needs the same > Need to be able to download relevant logfiles for a time window. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16419) Exclude hadoop related classes for JDBC standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964737#comment-15964737 ] Tao Li commented on HIVE-16419: --- [~vgumashta] Can you also take a look at the patch? Thanks! > Exclude hadoop related classes for JDBC standalone jar > -- > > Key: HIVE-16419 > URL: https://issues.apache.org/jira/browse/HIVE-16419 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Tao Li >Assignee: Tao Li >Priority: Blocker > Attachments: HIVE-16419.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16386) Add debug logging to describe why runtime filtering semijoins are removed
[ https://issues.apache.org/jira/browse/HIVE-16386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-16386: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to master > Add debug logging to describe why runtime filtering semijoins are removed > - > > Key: HIVE-16386 > URL: https://issues.apache.org/jira/browse/HIVE-16386 > Project: Hive > Issue Type: Improvement > Components: Logging >Reporter: Jason Dere >Assignee: Jason Dere > Fix For: 3.0.0 > > Attachments: HIVE-16386.1.patch, HIVE-16386.2.patch > > > Add a few logging statements to detail the reason why semijoin optimizations > are being removed, which can help during debugging. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16316) Prepare master branch for 3.0.0 development.
[ https://issues.apache.org/jira/browse/HIVE-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964708#comment-15964708 ] Pengcheng Xiong commented on HIVE-16316: If we are going to have 2.4 or 2.5, we can always replace 3.0 with 2.4 or 2.5, but we need to do the upgrades one by one. > Prepare master branch for 3.0.0 development. > > > Key: HIVE-16316 > URL: https://issues.apache.org/jira/browse/HIVE-16316 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Fix For: 3.0.0 > > Attachments: HIVE-16316.patch > > > master branch is now being used for 3.0.0 development. The build files will > need to reflect this change. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HIVE-16316) Prepare master branch for 3.0.0 development.
[ https://issues.apache.org/jira/browse/HIVE-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964706#comment-15964706 ] Pengcheng Xiong edited comment on HIVE-16316 at 4/11/17 5:51 PM: - [~ngangam], 1. will not happen. 2. There will be a 2.2 and a 2.3 for sure. If you take a look at the upgrade.order.derby, you will see that they are upgraded one step by one step. Thus we should have {code} 2.2.0-to-2.3.0 2.3.0-to-3.0.0 {code} rather than {code} 2.2.0-to-3.0.0 {code} Could you correct it? Right now it is blocking some development from [~wzheng]. Thanks. was (Author: pxiong): [~ngangam], 1. will not happen. 2. There will be a 2.2 and a 2.3 for sure. If you take a look at the upgrade.order.derby, you will see that they are upgraded one by one. Thus we should have {code} 2.2.0-to-2.3.0 2.3.0-to-3.0.0 {code} rather than {code} 2.2.0-to-3.0.0 {code} Could you correct it? Right now it is blocking some development from [~wzheng]. > Prepare master branch for 3.0.0 development. > > > Key: HIVE-16316 > URL: https://issues.apache.org/jira/browse/HIVE-16316 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Fix For: 3.0.0 > > Attachments: HIVE-16316.patch > > > master branch is now being used for 3.0.0 development. The build files will > need to reflect this change. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16316) Prepare master branch for 3.0.0 development.
[ https://issues.apache.org/jira/browse/HIVE-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964706#comment-15964706 ] Pengcheng Xiong commented on HIVE-16316: [~ngangam], 1. will not happen. 2. There will be a 2.2 and a 2.3 for sure. If you take a look at the upgrade.order.derby, you will see that they are upgraded one by one. Thus we should have {code} 2.2.0-to-2.3.0 2.3.0-to-3.0.0 {code} rather than {code} 2.2.0-to-3.0.0 {code} Could you correct it? Right now it is blocking some development from [~wzheng]. > Prepare master branch for 3.0.0 development. > > > Key: HIVE-16316 > URL: https://issues.apache.org/jira/browse/HIVE-16316 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Fix For: 3.0.0 > > Attachments: HIVE-16316.patch > > > master branch is now being used for 3.0.0 development. The build files will > need to reflect this change. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
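The one-step-at-a-time constraint above can be sketched as a small helper. This is an illustrative function, not Hive code; the version list and the `upgrade_steps` name are assumptions, and only the `X-to-Y` script-naming convention comes from the comment:

```python
def upgrade_steps(order, current, target):
    """Return the one-step upgrade script names needed to move a
    metastore schema from `current` to `target`, given the ordered
    version list (as in upgrade.order.derby)."""
    i, j = order.index(current), order.index(target)
    # pair each version with its immediate successor along the path
    return ["%s-to-%s" % (a, b) for a, b in zip(order[i:j], order[i + 1:j + 1])]

versions = ["2.1.0", "2.2.0", "2.3.0", "3.0.0"]  # illustrative ordering
print(upgrade_steps(versions, "2.2.0", "3.0.0"))
# → ['2.2.0-to-2.3.0', '2.3.0-to-3.0.0']
```

A single `2.2.0-to-3.0.0` script would skip the intermediate 2.3.0 step, which is why the comment asks for two scripts rather than one.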
[jira] [Commented] (HIVE-15708) Upgrade calcite version to 1.12
[ https://issues.apache.org/jira/browse/HIVE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964700#comment-15964700 ] Remus Rusanu commented on HIVE-15708: - Both failures on patch 22 are pre-existing. > Upgrade calcite version to 1.12 > --- > > Key: HIVE-15708 > URL: https://issues.apache.org/jira/browse/HIVE-15708 > Project: Hive > Issue Type: Task > Components: CBO, Logical Optimizer >Affects Versions: 2.2.0 >Reporter: Ashutosh Chauhan >Assignee: Remus Rusanu > Attachments: HIVE-15708.01.patch, HIVE-15708.02.patch, > HIVE-15708.03.patch, HIVE-15708.04.patch, HIVE-15708.05.patch, > HIVE-15708.06.patch, HIVE-15708.07.patch, HIVE-15708.08.patch, > HIVE-15708.09.patch, HIVE-15708.10.patch, HIVE-15708.11.patch, > HIVE-15708.12.patch, HIVE-15708.13.patch, HIVE-15708.14.patch, > HIVE-15708.15.patch, HIVE-15708.15.patch, HIVE-15708.16.patch, > HIVE-15708.17.patch, HIVE-15708.18.patch, HIVE-15708.19.patch, > HIVe-15708.20.patch, HIVE-15708.21.patch, HIVE-15708.22.patch > > > Currently we are on 1.10. Need to upgrade calcite version to 1.11. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16419) Exclude hadoop related classes for JDBC standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-16419: -- Status: Patch Available (was: Open) > Exclude hadoop related classes for JDBC standalone jar > -- > > Key: HIVE-16419 > URL: https://issues.apache.org/jira/browse/HIVE-16419 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Tao Li >Assignee: Tao Li >Priority: Blocker > Attachments: HIVE-16419.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16419) Exclude hadoop related classes for JDBC standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li reassigned HIVE-16419: - Assignee: Tao Li > Exclude hadoop related classes for JDBC standalone jar > -- > > Key: HIVE-16419 > URL: https://issues.apache.org/jira/browse/HIVE-16419 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Tao Li >Assignee: Tao Li >Priority: Blocker > Attachments: HIVE-16419.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16419) Exclude hadoop related classes for JDBC standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-16419: -- Attachment: HIVE-16419.1.patch > Exclude hadoop related classes for JDBC standalone jar > -- > > Key: HIVE-16419 > URL: https://issues.apache.org/jira/browse/HIVE-16419 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Tao Li >Priority: Blocker > Attachments: HIVE-16419.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16328) HoS: more aggressive mapjoin optimization when hive.spark.use.file.size.for.mapjoin is true
[ https://issues.apache.org/jira/browse/HIVE-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964678#comment-15964678 ] Chao Sun commented on HIVE-16328: - [~xuefuz], can you help review this? Thanks. > HoS: more aggressive mapjoin optimization when > hive.spark.use.file.size.for.mapjoin is true > --- > > Key: HIVE-16328 > URL: https://issues.apache.org/jira/browse/HIVE-16328 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Reporter: Chao Sun >Assignee: Chao Sun > Attachments: HIVE-16328.1.patch, HIVE-16328.2.patch > > > In HIVE-15489, when {{hive.spark.use.file.size.for.mapjoin}} is set to true > and the JOIN op has any upstream RS operator, we stop converting > the JOIN op to a MAPJOIN op. > However, this is overly conservative. A better solution is to treat the > branch that has an upstream RS as the big table and check whether all other > branches are map-only AND can fit in the hash table size. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
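The proposed rule can be sketched as a small predicate. Everything below is hypothetical: the dictionary field names, the `hash_table_max_bytes` threshold, and the function itself are illustrations of the described logic, not Hive's actual operator API:

```python
def can_convert_to_mapjoin(branches, hash_table_max_bytes):
    """Sketch of the proposed rule: the single branch with an upstream
    ReduceSink is treated as the big table; conversion is allowed only
    if every other branch is map-only and fits in the hash table."""
    big = [b for b in branches if b["has_upstream_rs"]]
    if len(big) != 1:
        return False  # no unambiguous big-table candidate; stay conservative
    small = [b for b in branches if not b["has_upstream_rs"]]
    return all(b["map_only"] and b["size_bytes"] <= hash_table_max_bytes
               for b in small)

branches = [
    {"has_upstream_rs": True,  "map_only": False, "size_bytes": 10**9},
    {"has_upstream_rs": False, "map_only": True,  "size_bytes": 10**6},
]
print(can_convert_to_mapjoin(branches, 10**7))  # small side fits → True
```

Under HIVE-15489's original behavior, any upstream RS vetoed the conversion; the sketch instead vetoes only when the small-table side fails the map-only or size check.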
[jira] [Commented] (HIVE-16419) Exclude hadoop related classes for JDBC standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964677#comment-15964677 ] Tao Li commented on HIVE-16419: --- In HIVE-14837, we were trying to shade the hadoop core classes into the JDBC standalone jar so that the JDBC program does not need to specify the hadoop dependencies. However, later on we found some issues with it that were hard to tackle, e.g. the JDBC program using a core-site.xml that contains hard-coded class names. So the best solution would be to not include the hadoop classes and to ask the user to specify them explicitly. cc [~thejas], [~pxiong] > Exclude hadoop related classes for JDBC standalone jar > -- > > Key: HIVE-16419 > URL: https://issues.apache.org/jira/browse/HIVE-16419 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Tao Li >Priority: Blocker > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
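Why shading breaks the hard-coded-class-name case can be shown with a toy model. The class name below is a real Hadoop class, but the relocation prefix and the helper function are assumptions for illustration; returning `None` stands in for the `ClassNotFoundException` that `Class.forName()` would throw:

```python
# After shading with relocation, only the prefixed class names exist
# inside the jar -- the bytecode references were rewritten.
shaded_classes = {"shaded.org.apache.hadoop.fs.LocalFileSystem"}

def load_configured_class(configured_name, available):
    """Model of reflective loading of a class named in core-site.xml;
    None models a ClassNotFoundException."""
    return configured_name if configured_name in available else None

# core-site.xml still hard-codes the original (unshaded) name,
# so the reflective lookup fails at runtime.
print(load_configured_class("org.apache.hadoop.fs.LocalFileSystem",
                            shaded_classes))  # → None
```

Relocation can rewrite class references inside the jar, but it cannot rewrite names stored in external configuration files, which is the issue the comment describes.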
[jira] [Commented] (HIVE-15708) Upgrade calcite version to 1.12
[ https://issues.apache.org/jira/browse/HIVE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964655#comment-15964655 ] Hive QA commented on HIVE-15708: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12862880/HIVE-15708.22.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10570 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=143) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4640/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4640/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4640/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12862880 - PreCommit-HIVE-Build > Upgrade calcite version to 1.12 > --- > > Key: HIVE-15708 > URL: https://issues.apache.org/jira/browse/HIVE-15708 > Project: Hive > Issue Type: Task > Components: CBO, Logical Optimizer >Affects Versions: 2.2.0 >Reporter: Ashutosh Chauhan >Assignee: Remus Rusanu > Attachments: HIVE-15708.01.patch, HIVE-15708.02.patch, > HIVE-15708.03.patch, HIVE-15708.04.patch, HIVE-15708.05.patch, > HIVE-15708.06.patch, HIVE-15708.07.patch, HIVE-15708.08.patch, > HIVE-15708.09.patch, HIVE-15708.10.patch, HIVE-15708.11.patch, > HIVE-15708.12.patch, HIVE-15708.13.patch, HIVE-15708.14.patch, > HIVE-15708.15.patch, HIVE-15708.15.patch, HIVE-15708.16.patch, > HIVE-15708.17.patch, HIVE-15708.18.patch, HIVE-15708.19.patch, > HIVe-15708.20.patch, HIVE-15708.21.patch, HIVE-15708.22.patch > > > Currently we are on 1.10. Need to upgrade calcite version to 1.11. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16415) Add blobstore tests for insertion of zero rows
[ https://issues.apache.org/jira/browse/HIVE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964616#comment-15964616 ] Thomas Poepping commented on HIVE-16415: This test specifically addresses a NullPointerException, triggered when inserting zero rows, that our team found before I got here. It seemed prudent to include it as a regression test in the current Hive distribution. > Add blobstore tests for insertion of zero rows > -- > > Key: HIVE-16415 > URL: https://issues.apache.org/jira/browse/HIVE-16415 > Project: Hive > Issue Type: Test > Components: Tests >Affects Versions: 2.1.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Attachments: HIVE-16415.patch > > > This patch introduces two regression tests into the hive-blobstore qtest > module: zero_rows_hdfs.q and zero_rows_blobstore.q. These tests run INSERT > commands with a WHERE clause whose condition causes > zero rows to be selected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-11418) Dropping a database in an encryption zone with CASCADE and trash enabled fails
[ https://issues.apache.org/jira/browse/HIVE-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964605#comment-15964605 ] Ashutosh Chauhan commented on HIVE-11418: - I think removing PURGE is the right choice, since it's non-standard SQL and, now that the underlying bug is fixed, it has no reason to exist. > Dropping a database in an encryption zone with CASCADE and trash enabled fails > -- > > Key: HIVE-11418 > URL: https://issues.apache.org/jira/browse/HIVE-11418 > Project: Hive > Issue Type: Sub-task >Affects Versions: 1.2.0 >Reporter: Sergio Peña >Assignee: Sahil Takiar > Attachments: HIVE-11418.1.patch, HIVE-11418.2.patch > > > Here's the query that fails: > {noformat} > hive> CREATE DATABASE db; > hive> USE db; > hive> CREATE TABLE a(id int); > hive> SET fs.trash.interval=1; > hive> DROP DATABASE db CASCADE; > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to drop > db.a because it is in an encryption zone and trash > is enabled. Use PURGE option to skip trash.) > {noformat} > DROP DATABASE does not support PURGE, so we have to remove the tables one by > one, and then drop the database. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16415) Add blobstore tests for insertion of zero rows
[ https://issues.apache.org/jira/browse/HIVE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964531#comment-15964531 ] Sergio Peña commented on HIVE-16415: Thanks [~poeppt]. The patch looks good. Just one question: is there an error when writing zero rows to a blobstore? This patch only adds the test cases and they pass, but is there a patch that fixes an issue, or did you find this issue in an older Hive version? > Add blobstore tests for insertion of zero rows > -- > > Key: HIVE-16415 > URL: https://issues.apache.org/jira/browse/HIVE-16415 > Project: Hive > Issue Type: Test > Components: Tests >Affects Versions: 2.1.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Attachments: HIVE-16415.patch > > > This patch introduces two regression tests into the hive-blobstore qtest > module: zero_rows_hdfs.q and zero_rows_blobstore.q. These tests run INSERT > commands with a WHERE clause whose condition causes > zero rows to be selected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15708) Upgrade calcite version to 1.12
[ https://issues.apache.org/jira/browse/HIVE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Remus Rusanu updated HIVE-15708: Attachment: HIVE-15708.22.patch .22.patch extends the point lookup optimization to JOIN conditions. > Upgrade calcite version to 1.12 > --- > > Key: HIVE-15708 > URL: https://issues.apache.org/jira/browse/HIVE-15708 > Project: Hive > Issue Type: Task > Components: CBO, Logical Optimizer >Affects Versions: 2.2.0 >Reporter: Ashutosh Chauhan >Assignee: Remus Rusanu > Attachments: HIVE-15708.01.patch, HIVE-15708.02.patch, > HIVE-15708.03.patch, HIVE-15708.04.patch, HIVE-15708.05.patch, > HIVE-15708.06.patch, HIVE-15708.07.patch, HIVE-15708.08.patch, > HIVE-15708.09.patch, HIVE-15708.10.patch, HIVE-15708.11.patch, > HIVE-15708.12.patch, HIVE-15708.13.patch, HIVE-15708.14.patch, > HIVE-15708.15.patch, HIVE-15708.15.patch, HIVE-15708.16.patch, > HIVE-15708.17.patch, HIVE-15708.18.patch, HIVE-15708.19.patch, > HIVe-15708.20.patch, HIVE-15708.21.patch, HIVE-15708.22.patch > > > Currently we are on 1.10. Need to upgrade calcite version to 1.11. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-11418) Dropping a database in an encryption zone with CASCADE and trash enabled fails
[ https://issues.apache.org/jira/browse/HIVE-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964491#comment-15964491 ] Sergio Peña commented on HIVE-11418: +1 The patch looks good [~stakiar]. Regarding branch-2, yes we should fix it, but I don't know if adding a PURGE keyword to the DROP DATABASE would be ideal now that we know Hadoop 2.8 fixes this. [~ashutoshc] What do you think on adding the PURGE keyword on DROP DATABASE? Would this be useful on Hive to avoid sending the whole DB to the trash? > Dropping a database in an encryption zone with CASCADE and trash enabled fails > -- > > Key: HIVE-11418 > URL: https://issues.apache.org/jira/browse/HIVE-11418 > Project: Hive > Issue Type: Sub-task >Affects Versions: 1.2.0 >Reporter: Sergio Peña >Assignee: Sahil Takiar > Attachments: HIVE-11418.1.patch, HIVE-11418.2.patch > > > Here's the query that fails: > {noformat} > hive> CREATE DATABASE db; > hive> USE db; > hive> CREATE TABLE a(id int); > hive> SET fs.trash.interval=1; > hive> DROP DATABASE db CASCADE; > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to drop > db.a because it is in an encryption zone and trash > is enabled. Use PURGE option to skip trash.) > {noformat} > DROP DATABASE does not support PURGE, so we have to remove the tables one by > one, and then drop the database. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16044) LLAP: Shuffle Handler keep-alive connections are closed from the server side
[ https://issues.apache.org/jira/browse/HIVE-16044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964489#comment-15964489 ] Gopal V commented on HIVE-16044: [~rajesh.balamohan]: +1 > LLAP: Shuffle Handler keep-alive connections are closed from the server side > > > Key: HIVE-16044 > URL: https://issues.apache.org/jira/browse/HIVE-16044 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-16044.1.patch, HIVE-16044.2.patch, > wihtoutPatch_Llap_shuffleHandler.png, withPatch_llap_shuffleHanlder.png > > > LLAP's shuffle handler could be closing the keep-alive connections after > output is served. This could break the connection from the server side. JDK HTTP > logs may not reveal this. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-15982) Support the width_bucket function
[ https://issues.apache.org/jira/browse/HIVE-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964466#comment-15964466 ] Carter Shanklin commented on HIVE-15982: [~stakiar] no problem at all, thanks for looking into this > Support the width_bucket function > - > > Key: HIVE-15982 > URL: https://issues.apache.org/jira/browse/HIVE-15982 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin > > Support width_bucket(wbo, wbb1, wbb2, wbc), which returns an integer > between 0 and wbc+1 by mapping wbo into the ith of the wbc equally sized buckets made by > dividing the range from wbb1 to wbb2 into equally sized regions. If wbo < wbb1, return 0; > if wbo >= wbb2, return wbc+1. Reference: SQL standard section 4.4. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
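A minimal sketch of the SQL-standard WIDTH_BUCKET semantics (per the cited section 4.4: values below wbb1 map to bucket 0, values at or above wbb2 to bucket wbc+1; ascending bounds wbb1 < wbb2 assumed). This is an illustration, not Hive's eventual UDF:

```python
def width_bucket(wbo, wbb1, wbb2, wbc):
    """SQL-standard WIDTH_BUCKET for ascending bounds: map wbo into
    one of wbc equal-width buckets spanning [wbb1, wbb2)."""
    if wbo < wbb1:
        return 0            # underflow bucket
    if wbo >= wbb2:
        return wbc + 1      # overflow bucket
    width = (wbb2 - wbb1) / wbc
    return int((wbo - wbb1) / width) + 1

print(width_bucket(5.35, 0.024, 10.06, 5))  # → 3
```

The example values match the common documentation example for this function (e.g. PostgreSQL's), which also yields bucket 3.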
[jira] [Assigned] (HIVE-16418) Allow HiveKey to skip some bytes for comparison
[ https://issues.apache.org/jira/browse/HIVE-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li reassigned HIVE-16418: - > Allow HiveKey to skip some bytes for comparison > --- > > Key: HIVE-16418 > URL: https://issues.apache.org/jira/browse/HIVE-16418 > Project: Hive > Issue Type: New Feature >Reporter: Rui Li >Assignee: Rui Li > > The feature is required when we have to serialize some fields and prevent > them from being used in comparison, e.g. HIVE-14412. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
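The idea can be sketched at the byte level: fields that must travel with the serialized key occupy a prefix that the comparator skips. The standalone function below is a hypothetical illustration of the described feature, not the actual HiveKey API:

```python
def compare_keys(a: bytes, b: bytes, skip: int) -> int:
    """Compare two serialized keys while ignoring the first `skip`
    bytes (fields carried with the key but excluded from ordering).
    Returns <0, 0, or >0 like a conventional comparator."""
    x, y = a[skip:], b[skip:]
    return (x > y) - (x < y)

# The keys differ only in the 4-byte prefix, so they compare equal.
print(compare_keys(b"\x00\x01\x02\x03KEY", b"\xff\xff\xff\xffKEY", 4))  # → 0
```

A plain lexicographic comparison of the full byte arrays would order these two keys differently, which is why skipping the prefix matters for use cases like HIVE-14412.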
[jira] [Commented] (HIVE-16394) HoS does not support queue name change in middle of session
[ https://issues.apache.org/jira/browse/HIVE-16394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964399#comment-15964399 ] Chaoyu Tang commented on HIVE-16394: Thanks [~leftylev]. This property is not HoS specific and already works in HoMR, so I think it does not need to be documented separately. > HoS does not support queue name change in middle of session > --- > > Key: HIVE-16394 > URL: https://issues.apache.org/jira/browse/HIVE-16394 > Project: Hive > Issue Type: Bug >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Fix For: 3.0.0 > > Attachments: HIVE-16394.patch > > > The mapreduce.job.queuename only takes effect when HoS executes its first > query. After that, changing mapreduce.job.queuename won't change the query's > YARN scheduler queue name. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-15442) Driver.java has a redundancy code
[ https://issues.apache.org/jira/browse/HIVE-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964377#comment-15964377 ] Aihua Xu commented on HIVE-15442: - Yeah. It does look redundant. +1. I will need to commit your change after 1 day. > Driver.java has a redundancy code > -- > > Key: HIVE-15442 > URL: https://issues.apache.org/jira/browse/HIVE-15442 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang >Priority: Minor > Attachments: HIVE-15442.1.patch > > > Driver.java has redundant code around "explain output"; I think the if > statement "if (conf.getBoolVar(ConfVars.HIVE_LOG_EXPLAIN_OUTPUT))" repeats the > check made by the statement above it. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16340) Allow Kerberos + SSL connections to HMS
[ https://issues.apache.org/jira/browse/HIVE-16340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964357#comment-15964357 ] Aihua Xu commented on HIVE-16340: - Yes. This should be documented. [~stakiar] can you update the wiki? > Allow Kerberos + SSL connections to HMS > --- > > Key: HIVE-16340 > URL: https://issues.apache.org/jira/browse/HIVE-16340 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Fix For: 3.0.0 > > Attachments: HIVE-16340.1.patch, HIVE-16340.2.patch, > HIVE-16340.3.patch > > > It should be possible to connect to HMS with Kerberos authentication and SSL > enabled, at the same time. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-15708) Upgrade calcite version to 1.12
[ https://issues.apache.org/jira/browse/HIVE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964289#comment-15964289 ] Remus Rusanu commented on HIVE-15708: - The RB should be updated now. > Upgrade calcite version to 1.12 > --- > > Key: HIVE-15708 > URL: https://issues.apache.org/jira/browse/HIVE-15708 > Project: Hive > Issue Type: Task > Components: CBO, Logical Optimizer >Affects Versions: 2.2.0 >Reporter: Ashutosh Chauhan >Assignee: Remus Rusanu > Attachments: HIVE-15708.01.patch, HIVE-15708.02.patch, > HIVE-15708.03.patch, HIVE-15708.04.patch, HIVE-15708.05.patch, > HIVE-15708.06.patch, HIVE-15708.07.patch, HIVE-15708.08.patch, > HIVE-15708.09.patch, HIVE-15708.10.patch, HIVE-15708.11.patch, > HIVE-15708.12.patch, HIVE-15708.13.patch, HIVE-15708.14.patch, > HIVE-15708.15.patch, HIVE-15708.15.patch, HIVE-15708.16.patch, > HIVE-15708.17.patch, HIVE-15708.18.patch, HIVE-15708.19.patch, > HIVe-15708.20.patch, HIVE-15708.21.patch > > > Currently we are on 1.10. Need to upgrade calcite version to 1.11. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-15442) Driver.java has a redundancy code
[ https://issues.apache.org/jira/browse/HIVE-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964227#comment-15964227 ] Hive QA commented on HIVE-15442: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12862800/HIVE-15442.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10570 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=143) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4639/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4639/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4639/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12862800 - PreCommit-HIVE-Build > Driver.java has a redundancy code > -- > > Key: HIVE-15442 > URL: https://issues.apache.org/jira/browse/HIVE-15442 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang >Priority: Minor > Attachments: HIVE-15442.1.patch > > > Driver.java has redundant code around "explain output"; I think the if > statement "if (conf.getBoolVar(ConfVars.HIVE_LOG_EXPLAIN_OUTPUT))" repeats the > check made by the statement above it. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HIVE-15442) Driver.java has a redundancy code
[ https://issues.apache.org/jira/browse/HIVE-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964171#comment-15964171 ] Saijin Huang edited comment on HIVE-15442 at 4/11/17 11:28 AM: --- Hello [~aihuaxu], I have attached a patch and opened a pull request. Can you take a quick review? was (Author: txhsj): Hello [~aihuaxu], I have opened a pull request. Can you take a quick review? > Driver.java has a redundancy code > -- > > Key: HIVE-15442 > URL: https://issues.apache.org/jira/browse/HIVE-15442 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang >Priority: Minor > Attachments: HIVE-15442.1.patch > > > Driver.java has redundant code around "explain output"; I think the if > statement "if (conf.getBoolVar(ConfVars.HIVE_LOG_EXPLAIN_OUTPUT))" repeats the > check made by the statement above it. -- This message was sent by Atlassian JIRA (v6.3.15#6346)