[jira] [Updated] (HIVE-15013) Config dir generated for tests should not be under the test tmp directory

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15013:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

> Config dir generated for tests should not be under the test tmp directory
> -
>
> Key: HIVE-15013
> URL: https://issues.apache.org/jira/browse/HIVE-15013
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.2.0
>
> Attachments: HIVE-15013.01.patch, HIVE-15013.02.patch
>
>
> mvn is used to clean up tmp directories created for tests, and to set up the 
> config directory. The current structure is 
> target/tmp
> target/tmp/config
> All of this is set up when mvn test is executed.
> Tests generate data under tmp - warehouse, metastore, etc. Having the conf 
> dir there (generated by mvn) makes it complicated to add per-test cleanup, 
> since the entire tmp directory cannot be removed.
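The layout problem described above can be sketched in shell. The paths are illustrative (in particular, `target/testconf` is a hypothetical sibling location, not necessarily the one the patch uses):

```shell
# Current layout: the config dir lives under the same tmp tree the tests dirty.
mkdir -p target/tmp/config target/tmp/warehouse
# Per-test cleanup would have to wipe tmp, but that destroys the config too:
rm -rf target/tmp

# Proposed layout: generate the config dir as a sibling of tmp instead.
mkdir -p target/testconf target/tmp/warehouse
rm -rf target/tmp
# The config now survives per-test cleanup:
test -d target/testconf && echo "config survives cleanup"
```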



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15013) Config dir generated for tests should not be under the test tmp directory

2016-10-19 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15591007#comment-15591007
 ] 

Siddharth Seth commented on HIVE-15013:
---

Test failures are unrelated (HIVE-14964). Committing. Thanks for the review.

> Config dir generated for tests should not be under the test tmp directory
> -
>
> Key: HIVE-15013
> URL: https://issues.apache.org/jira/browse/HIVE-15013
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15013.01.patch, HIVE-15013.02.patch
>
>
> mvn is used to clean up tmp directories created for tests, and to set up the 
> config directory. The current structure is 
> target/tmp
> target/tmp/config
> All of this is set up when mvn test is executed.
> Tests generate data under tmp - warehouse, metastore, etc. Having the conf 
> dir there (generated by mvn) makes it complicated to add per-test cleanup, 
> since the entire tmp directory cannot be removed.





[jira] [Commented] (HIVE-14887) Reduce the memory requirements for tests

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590990#comment-15590990
 ] 

Hive QA commented on HIVE-14887:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834303/HIVE-14887.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=91)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=90)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=265)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1684/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1684/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1684/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834303 - PreCommit-HIVE-Build

> Reduce the memory requirements for tests
> 
>
> Key: HIVE-14887
> URL: https://issues.apache.org/jira/browse/HIVE-14887
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14887.01.patch, HIVE-14887.02.patch, 
> HIVE-14887.03.patch, HIVE-14887.04.patch
>
>
> The clusters that we spin up end up requiring 16GB at times. Also the maven 
> arguments seem a little heavyweight.
> Reducing this will allow for additional ptest drones per box, which should 
> bring down the runtime.
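One way to trim heavyweight maven arguments is to cap the test-JVM heap explicitly. The flags and values below are illustrative only, not the ones from the actual patch:

```shell
# Run tests with a smaller forked-JVM heap via surefire's argLine.
# A 2g cap (instead of a much larger default) lets more ptest drones
# share one box without exhausting its memory.
mvn test -DargLine="-Xmx2g"
```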





[jira] [Commented] (HIVE-15013) Config dir generated for tests should not be under the test tmp directory

2016-10-19 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590986#comment-15590986
 ] 

Prasanth Jayachandran commented on HIVE-15013:
--

lgtm, +1

> Config dir generated for tests should not be under the test tmp directory
> -
>
> Key: HIVE-15013
> URL: https://issues.apache.org/jira/browse/HIVE-15013
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15013.01.patch, HIVE-15013.02.patch
>
>
> mvn is used to clean up tmp directories created for tests, and to set up the 
> config directory. The current structure is 
> target/tmp
> target/tmp/config
> All of this is set up when mvn test is executed.
> Tests generate data under tmp - warehouse, metastore, etc. Having the conf 
> dir there (generated by mvn) makes it complicated to add per-test cleanup, 
> since the entire tmp directory cannot be removed.





[jira] [Resolved] (HIVE-14835) Improve ptest2 build time

2016-10-19 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran resolved HIVE-14835.
--
Resolution: Fixed

> Improve ptest2 build time
> -
>
> Key: HIVE-14835
> URL: https://issues.apache.org/jira/browse/HIVE-14835
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 2.2.0
>
> Attachments: HIVE-14835.1.patch, HIVE-14835.2.patch
>
>
> NO PRECOMMIT TESTS
> 2 things can be improved
> 1) ptest2 always downloads jars when compiling its own directory, which takes 
> about 1m30s but should take only ~5s with cached jars. The reason is that 
> maven.repo.local points to a path under WORKSPACE, which is cleaned by 
> jenkins for every run.
> 2) For the hive build we can make use of a parallel build and quiet the 
> output of the build, which should shave off another 15-30s. 
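The two improvements above translate roughly into the following maven invocations. These commands are a sketch, not the exact ones from the patch; the repo path is a placeholder:

```shell
# 1) Point the local repo at a cache outside WORKSPACE, so jenkins's
#    per-run cleanup does not force a full jar re-download:
mvn -Dmaven.repo.local="$HOME/.m2/repository" clean package

# 2) Build hive in parallel (-T 1C = one thread per core) and quiet
#    the log output (-q) to shave time off the build:
mvn -T 1C -q clean install -DskipTests
```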





[jira] [Updated] (HIVE-14835) Improve ptest2 build time

2016-10-19 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-14835:
-
Attachment: HIVE-14835.2.patch

Thanks [~leftylev] for catching that!

I committed the .2 version of the patch a while back but forgot to resolve the 
jira. Resolving it now. 

> Improve ptest2 build time
> -
>
> Key: HIVE-14835
> URL: https://issues.apache.org/jira/browse/HIVE-14835
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 2.2.0
>
> Attachments: HIVE-14835.1.patch, HIVE-14835.2.patch
>
>
> NO PRECOMMIT TESTS
> 2 things can be improved
> 1) ptest2 always downloads jars when compiling its own directory, which takes 
> about 1m30s but should take only ~5s with cached jars. The reason is that 
> maven.repo.local points to a path under WORKSPACE, which is cleaned by 
> jenkins for every run.
> 2) For the hive build we can make use of a parallel build and quiet the 
> output of the build, which should shave off another 15-30s. 





[jira] [Commented] (HIVE-14835) Improve ptest2 build time

2016-10-19 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590922#comment-15590922
 ] 

Lefty Leverenz commented on HIVE-14835:
---

Changes to source-prep.vm went in again with commit 
535316187f9451f11ac1cfbe7d6d66f61f2ee6d8.  Should the status be updated now, or 
is there more work pending?

> Improve ptest2 build time
> -
>
> Key: HIVE-14835
> URL: https://issues.apache.org/jira/browse/HIVE-14835
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 2.2.0
>
> Attachments: HIVE-14835.1.patch
>
>
> NO PRECOMMIT TESTS
> 2 things can be improved
> 1) ptest2 always downloads jars when compiling its own directory, which takes 
> about 1m30s but should take only ~5s with cached jars. The reason is that 
> maven.repo.local points to a path under WORKSPACE, which is cleaned by 
> jenkins for every run.
> 2) For the hive build we can make use of a parallel build and quiet the 
> output of the build, which should shave off another 15-30s. 





[jira] [Commented] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590903#comment-15590903
 ] 

Hive QA commented on HIVE-14993:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834311/HIVE-14993.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_00_nonpart_empty] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_01_nonpart] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_02_part] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_03_nonpart_over_compat]
 (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_04_all_part] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_04_evolved_parts] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_05_some_part] 
(batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_06_one_part] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_07_all_part_over_nonoverlap]
 (batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_08_nonpart_rename] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_09_part_spec_nonoverlap]
 (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_10_external_managed]
 (batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_12_external_location]
 (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_13_managed_location]
 (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_14_managed_location_over_existing]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_16_part_external] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_17_part_managed] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_19_00_part_external_location]
 (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_19_part_external_location]
 (batchId=25)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_20_part_managed_location]
 (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_22_import_exist_authsuccess]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_23_import_part_authsuccess]
 (batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_24_import_nonexist_authsuccess]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_hidden_files] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_2_exim_basic] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_3_exim_metadata] 
(batchId=51)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[import_exported_table]
 (batchId=134)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[import_exported_table]
 (batchId=220)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1683/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1683/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1683/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834311 - PreCommit-HIVE-Build

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.5.patch, HIVE-14993.patch
>
>






[jira] [Updated] (HIVE-15006) Flaky test: TestBeelineWithHS2ConnectionFile

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15006:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

> Flaky test: TestBeelineWithHS2ConnectionFile
> 
>
> Key: HIVE-15006
> URL: https://issues.apache.org/jira/browse/HIVE-15006
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.2.0
>
> Attachments: HIVE-15006.01.patch, HIVE-15006.02.patch
>
>
> Seems to time out fairly often.
> https://issues.apache.org/jira/browse/HIVE-14391, 
> https://builds.apache.org/job/PreCommit-HIVE-Build/1621/testReport
> https://issues.apache.org/jira/browse/HIVE-14887, 
> https://builds.apache.org/job/PreCommit-HIVE-Build/1606/testReport





[jira] [Commented] (HIVE-15006) Flaky test: TestBeelineWithHS2ConnectionFile

2016-10-19 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590845#comment-15590845
 ] 

Siddharth Seth commented on HIVE-15006:
---

HIVE-14964 accounts for the test failures. Committing.

> Flaky test: TestBeelineWithHS2ConnectionFile
> 
>
> Key: HIVE-15006
> URL: https://issues.apache.org/jira/browse/HIVE-15006
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15006.01.patch, HIVE-15006.02.patch
>
>
> Seems to time out fairly often.
> https://issues.apache.org/jira/browse/HIVE-14391, 
> https://builds.apache.org/job/PreCommit-HIVE-Build/1621/testReport
> https://issues.apache.org/jira/browse/HIVE-14887, 
> https://builds.apache.org/job/PreCommit-HIVE-Build/1606/testReport





[jira] [Commented] (HIVE-15006) Flaky test: TestBeelineWithHS2ConnectionFile

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590818#comment-15590818
 ] 

Hive QA commented on HIVE-15006:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834302/HIVE-15006.02.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1682/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1682/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1682/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834302 - PreCommit-HIVE-Build

> Flaky test: TestBeelineWithHS2ConnectionFile
> 
>
> Key: HIVE-15006
> URL: https://issues.apache.org/jira/browse/HIVE-15006
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15006.01.patch, HIVE-15006.02.patch
>
>
> Seems to time out fairly often.
> https://issues.apache.org/jira/browse/HIVE-14391, 
> https://builds.apache.org/job/PreCommit-HIVE-Build/1621/testReport
> https://issues.apache.org/jira/browse/HIVE-14887, 
> https://builds.apache.org/job/PreCommit-HIVE-Build/1606/testReport





[jira] [Commented] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590748#comment-15590748
 ] 

Hive QA commented on HIVE-14993:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834311/HIVE-14993.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_00_nonpart_empty] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_01_nonpart] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_02_part] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_03_nonpart_over_compat]
 (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_04_all_part] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_04_evolved_parts] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_05_some_part] 
(batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_06_one_part] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_07_all_part_over_nonoverlap]
 (batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_08_nonpart_rename] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_09_part_spec_nonoverlap]
 (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_10_external_managed]
 (batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_12_external_location]
 (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_13_managed_location]
 (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_14_managed_location_over_existing]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_16_part_external] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_17_part_managed] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_19_00_part_external_location]
 (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_19_part_external_location]
 (batchId=25)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_20_part_managed_location]
 (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_22_import_exist_authsuccess]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_23_import_part_authsuccess]
 (batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_24_import_nonexist_authsuccess]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_hidden_files] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_2_exim_basic] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_3_exim_metadata] 
(batchId=51)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[import_exported_table]
 (batchId=134)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[import_exported_table]
 (batchId=220)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1681/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1681/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1681/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834311 - PreCommit-HIVE-Build

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.5.patch, HIVE-14993.patch
>
>






[jira] [Updated] (HIVE-14872) Remove the configuration HIVE_SUPPORT_SQL11_RESERVED_KEYWORDS

2016-10-19 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-14872:
--
Hadoop Flags: Incompatible change

> Remove the configuration HIVE_SUPPORT_SQL11_RESERVED_KEYWORDS
> -
>
> Key: HIVE-14872
> URL: https://issues.apache.org/jira/browse/HIVE-14872
> Project: Hive
>  Issue Type: Sub-task
>  Components: Parser
>Affects Versions: 2.1.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-14872.01.patch, HIVE-14872.02.patch
>
>
> The main purpose of the HIVE_SUPPORT_SQL11_RESERVED_KEYWORDS configuration is 
> backward compatibility, because a lot of reserved keywords have been used as 
> identifiers in previous releases. We have already had several releases with 
> this configuration. Now, when I tried to add new set operators to the parser, 
> ANTLR always complains "code too large". I think it is time to remove this 
> configuration. (1) It will simplify the parser logic and largely reduce the 
> size of the generated parser code; (2) it leaves room for new features, 
> especially those which require parser changes.





[jira] [Commented] (HIVE-14887) Reduce the memory requirements for tests

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590660#comment-15590660
 ] 

Hive QA commented on HIVE-14887:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834303/HIVE-14887.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=132)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=91)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1680/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1680/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1680/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834303 - PreCommit-HIVE-Build

> Reduce the memory requirements for tests
> 
>
> Key: HIVE-14887
> URL: https://issues.apache.org/jira/browse/HIVE-14887
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14887.01.patch, HIVE-14887.02.patch, 
> HIVE-14887.03.patch, HIVE-14887.04.patch
>
>
> The clusters that we spin up end up requiring 16GB at times. Also the maven 
> arguments seem a little heavyweight.
> Reducing this will allow for additional ptest drones per box, which should 
> bring down the runtime.





[jira] [Updated] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14993:
--
Attachment: (was: HIVE-14855.5.patch)

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.5.patch, HIVE-14993.patch
>
>






[jira] [Updated] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14993:
--
Attachment: HIVE-14993.5.patch

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.5.patch, HIVE-14993.patch
>
>






[jira] [Commented] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590582#comment-15590582
 ] 

Hive QA commented on HIVE-14993:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834296/HIVE-14855.5.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10600 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_bulk] 
(batchId=89)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=265)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1679/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1679/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1679/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834296 - PreCommit-HIVE-Build

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14855.5.patch, HIVE-14993.2.patch, 
> HIVE-14993.3.patch, HIVE-14993.4.patch, HIVE-14993.patch
>
>






[jira] [Updated] (HIVE-13589) beeline support prompt for password with '-p' option

2016-10-19 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-13589:
---
Summary: beeline support prompt for password with '-p' option  (was: 
beeline - support prompt for password with '-u' option)

> beeline support prompt for password with '-p' option
> 
>
> Key: HIVE-13589
> URL: https://issues.apache.org/jira/browse/HIVE-13589
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Thejas M Nair
>Assignee: Vihang Karajgaonkar
> Fix For: 2.2.0
>
> Attachments: HIVE-13589.1.patch, HIVE-13589.2.patch, 
> HIVE-13589.3.patch, HIVE-13589.4.patch, HIVE-13589.5.patch, 
> HIVE-13589.6.patch, HIVE-13589.7.patch, HIVE-13589.8.patch, HIVE-13589.9.patch
>
>
> Specifying the connection string using command-line options in beeline is 
> convenient, as it gets saved in shell command history, and it is easy to 
> retrieve it from there.
> However, specifying the password on the command line is not secure, as it 
> gets displayed on screen and saved in the history.
> It should be possible to specify '-p' without an argument to make beeline 
> prompt for the password.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14887) Reduce the memory requirements for tests

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-14887:
--
Attachment: HIVE-14887.04.patch

Updated patch.

> Reduce the memory requirements for tests
> 
>
> Key: HIVE-14887
> URL: https://issues.apache.org/jira/browse/HIVE-14887
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14887.01.patch, HIVE-14887.02.patch, 
> HIVE-14887.03.patch, HIVE-14887.04.patch
>
>
> The clusters that we spin up end up requiring 16GB at times. Also the maven 
> arguments seem a little heavy weight.
> Reducing this will allow for additional ptest drones per box, which should 
> bring down the runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15006) Flaky test: TestBeelineWithHS2ConnectionFile

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15006:
--
Attachment: HIVE-15006.02.patch

Updated based on HIVE-15008. That should fix the new failures, which were caused 
by the ordering of batches.

> Flaky test: TestBeelineWithHS2ConnectionFile
> 
>
> Key: HIVE-15006
> URL: https://issues.apache.org/jira/browse/HIVE-15006
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15006.01.patch, HIVE-15006.02.patch
>
>
> Seems to time out fairly often.
> https://issues.apache.org/jira/browse/HIVE-14391, 
> https://builds.apache.org/job/PreCommit-HIVE-Build/1621/testReport
> https://issues.apache.org/jira/browse/HIVE-14887, 
> https://builds.apache.org/job/PreCommit-HIVE-Build/1606/testReport



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15008:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.2.0
>
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch, HIVE-15008.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590506#comment-15590506
 ] 

Siddharth Seth commented on HIVE-15008:
---

HIVE-14964, HIVE-15006 cover the test failures. Committing. Thanks for the 
review.

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch, HIVE-15008.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590502#comment-15590502
 ] 

Hive QA commented on HIVE-15008:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834274/HIVE-15008.05.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1678/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1678/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1678/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834274 - PreCommit-HIVE-Build

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch, HIVE-15008.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14993:
--
Attachment: HIVE-14855.5.patch

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14855.5.patch, HIVE-14993.2.patch, 
> HIVE-14993.3.patch, HIVE-14993.4.patch, HIVE-14993.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14679) csv2/tsv2 output format disables quoting by default and it's difficult to enable

2016-10-19 Thread Jianguo Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianguo Tian updated HIVE-14679:

Attachment: HIVE-14769.1.patch

> csv2/tsv2 output format disables quoting by default and it's difficult to 
> enable
> 
>
> Key: HIVE-14679
> URL: https://issues.apache.org/jira/browse/HIVE-14679
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Jianguo Tian
> Attachments: HIVE-14769.1.patch
>
>
> Over in HIVE-9788 we made quoting optional for csv2/tsv2.
> However I see the following issues:
> * The JIRA doc doesn't mention it's disabled by default; this should be noted 
> there and in the output of beeline help.
> * The JIRA says the property is {{--disableQuotingForSV}}, but it's actually a 
> system property. We should not use a system property, as it's non-standard and 
> therefore extremely hard for users to set. For example, I must do: {{env 
> HADOOP_CLIENT_OPTS="-Ddisable.quoting.for.sv=false" beeline ...}}
> * The arg {{--disableQuotingForSV}} should be documented in beeline help.
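As a rough illustration of the usability gap described above (hypothetical class name; the property and flag names are the ones from the comment), a system property can only be set with {{-D}} at JVM launch, while a command-line flag can be parsed from {{args}} per invocation:

```java
// Hypothetical sketch, not Hive code: contrasts the two ways the
// quoting switch could be exposed. Property/flag names are taken from
// the comment above; the "true" default mirrors quoting being
// disabled by default.
public class QuotingFlag {
    public static void main(String[] args) {
        // System-property route: must be set with -D before JVM start,
        // e.g. via HADOOP_CLIENT_OPTS -- invisible to --help output.
        boolean quotingDisabledViaProperty = Boolean.parseBoolean(
                System.getProperty("disable.quoting.for.sv", "true"));
        // CLI-flag route: settable per invocation and documentable in help.
        boolean quotingDisabledViaFlag =
                java.util.Arrays.asList(args).contains("--disableQuotingForSV");
        System.out.println("property=" + quotingDisabledViaProperty
                + " flag=" + quotingDisabledViaFlag);
    }
}
```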



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14993:
--
Attachment: HIVE-14993.4.patch

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15019) handle import for MM tables

2016-10-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-15019:

Status: Patch Available  (was: Open)

> handle import for MM tables
> ---
>
> Key: HIVE-15019
> URL: https://issues.apache.org/jira/browse/HIVE-15019
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-15019.WIP.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15019) handle import for MM tables

2016-10-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-15019:

Attachment: HIVE-15019.WIP.patch

The only test case that seems to be broken is importing an MM table dump into an 
empty non-MM table without partitions (it works with partitions...). Probably one 
of the myriad paths is incorrectly set in the pipeline; will fix tomorrow.

Another big consideration is that export of an MM table doesn't consider MM 
state. Not sure if this JIRA will address that, or if that will be left for 
ACID to implement after migration. I'll see if an easy solution is possible.

> handle import for MM tables
> ---
>
> Key: HIVE-15019
> URL: https://issues.apache.org/jira/browse/HIVE-15019
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-15019.WIP.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14828) Cloud/S3: Stats publishing should be on HDFS instead of S3

2016-10-19 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14828:

Attachment: HIVE-14828.1.patch

It is better to use HDFS for the stats file. Rebasing for the master branch.

> Cloud/S3: Stats publishing should be on HDFS instead of S3
> --
>
> Key: HIVE-14828
> URL: https://issues.apache.org/jira/browse/HIVE-14828
> Project: Hive
>  Issue Type: Improvement
>  Components: Statistics
>Affects Versions: 1.2.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 1.3.0
>
> Attachments: HIVE-14828.1.patch, HIVE-14828.branch-1.2.001.patch, 
> HIVE-14828.branch-2.0.001.patch
>
>
> Currently, stats files are created in S3. Later as a part of 
> FSStatsAggregator, it reads this file and populates MS again.
> {noformat}
> 2016-09-23 05:57:46,772 INFO  [main]: fs.FSStatsPublisher 
> (FSStatsPublisher.java:init(49)) - created : 
> s3a://BUCKET/test/.hive-staging_hive_2016-09-23_05-57-34_309_2648485988937054815-1/-ext-10001
> 2016-09-23 05:57:46,773 DEBUG [main]: fs.FSStatsAggregator 
> (FSStatsAggregator.java:connect(53)) - About to read stats from : 
> s3a://BUCKET/test/.hive-staging_hive_2016-09-23_05-57-34_309_2648485988937054815-1/-ext-10001
> {noformat}
> Instead of this, stats can be written directly onto HDFS and read locally 
> instead of from S3, which would help by reducing a couple of calls to S3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14837) JDBC: standalone jar is missing hadoop core dependencies

2016-10-19 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590288#comment-15590288
 ] 

Gopal V commented on HIVE-14837:


Thanks [~rajesh.balamohan] for confirming.

[~taoli-hwx]: LGTM - +1 tests pending.

> JDBC: standalone jar is missing hadoop core dependencies
> 
>
> Key: HIVE-14837
> URL: https://issues.apache.org/jira/browse/HIVE-14837
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Tao Li
> Attachments: HIVE-14837.1.patch
>
>
> {code}
> 2016/09/24 00:31:57 ERROR - jmeter.threads.JMeterThread: Test failed! 
> java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
> at 
> org.apache.hive.jdbc.HiveConnection.createUnderlyingTransport(HiveConnection.java:418)
> at 
> org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:438)
> at 
> org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:225)
> at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:182)
> at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14837) JDBC: standalone jar is missing hadoop core dependencies

2016-10-19 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590282#comment-15590282
 ] 

Rajesh Balamohan commented on HIVE-14837:
-

Tried this patch on master; it works, and all that is needed is to copy 
{{hive-jdbc-2.2.0-SNAPSHOT-standalone.jar}} into the jmeter-2.13 lib folder.
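A minimal probe (hypothetical class name, not Hive code) for whether the class from this issue's stack trace is actually reachable — which is exactly what the un-fixed standalone jar fails at inside {{HiveConnection}}:

```java
// Hypothetical sketch: checks whether hadoop-common's Configuration
// class is on the classpath. This is the class whose absence triggers
// the NoClassDefFoundError shown in this issue.
public class ClasspathProbe {
    public static void main(String[] args) {
        try {
            Class.forName("org.apache.hadoop.conf.Configuration");
            System.out.println("hadoop-common classes present");
        } catch (ClassNotFoundException e) {
            System.out.println("hadoop-common classes missing from classpath");
        }
    }
}
```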

> JDBC: standalone jar is missing hadoop core dependencies
> 
>
> Key: HIVE-14837
> URL: https://issues.apache.org/jira/browse/HIVE-14837
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Tao Li
> Attachments: HIVE-14837.1.patch
>
>
> {code}
> 2016/09/24 00:31:57 ERROR - jmeter.threads.JMeterThread: Test failed! 
> java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
> at 
> org.apache.hive.jdbc.HiveConnection.createUnderlyingTransport(HiveConnection.java:418)
> at 
> org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:438)
> at 
> org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:225)
> at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:182)
> at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590265#comment-15590265
 ] 

Hive QA commented on HIVE-14993:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834244/HIVE-14993.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 266 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] (batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter1] (batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter4] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_char1] (batchId=27)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_char2] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_file_format] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_numbuckets_partitioned_table2_h23]
 (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_numbuckets_partitioned_table_h23]
 (batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_change_col]
 (batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_clusterby_sortby]
 (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_format_loc]
 (batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_rename_partition_authorization]
 (batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_skewed_table] 
(batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_cascade] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_location] 
(batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_not_sorted] 
(batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_partition_drop]
 (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_serde2] 
(batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_serde] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_varchar1] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_varchar2] 
(batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_filter] 
(batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_groupby] 
(batchId=43)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_limit] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_select] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_table] 
(batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_union] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_6] 
(batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_admin_almighty2]
 (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_cli_nonsql]
 (batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_owner_actions]
 (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column2] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column3] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_change_schema] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_partitioned] 
(batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_schema_evolution_native]
 (batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ba_table2] (batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[binary_table_bincolserde]
 (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[binary_table_colserde] 
(batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin10] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin11] 
(batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin12] 
(batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin13] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin8] 
(batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin9] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_annotate_stats_groupby]
 (batchI

[jira] [Commented] (HIVE-14391) TestAccumuloCliDriver is not executed during precommit tests

2016-10-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590248#comment-15590248
 ] 

Ashutosh Chauhan commented on HIVE-14391:
-

I wonder if anyone is using this feature at all. AFAIK, no one. Shall we just 
disable this test for now and think about purging this feature altogether?

> TestAccumuloCliDriver is not executed during precommit tests
> 
>
> Key: HIVE-14391
> URL: https://issues.apache.org/jira/browse/HIVE-14391
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Peter Vary
> Attachments: HIVE-14391.2.patch, HIVE-14391.patch
>
>
> According to, for example, this build result:
> https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/685/testReport/org.apache.hadoop.hive.cli/
> there is no 'TestAccumuloCliDriver' being run during precommit testing... but 
> I see no reason why or how it was excluded inside the project;
> my Maven executes it when I start it with {{-Dtest=TestAccumuloCliDriver}} - 
> so I think the properties/profiles aren't preventing it.
> Maybe I'm missing something obvious ;)
> (Note: my TestAccumuloCliDriver executions failed with errors.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15013) Config dir generated for tests should not be under the test tmp directory

2016-10-19 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590200#comment-15590200
 ] 

Siddharth Seth commented on HIVE-15013:
---

cc [~prasanth_j] for review.

> Config dir generated for tests should not be under the test tmp directory
> -
>
> Key: HIVE-15013
> URL: https://issues.apache.org/jira/browse/HIVE-15013
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15013.01.patch, HIVE-15013.02.patch
>
>
> mvn is used to clean up tmp directories created for tests, and to set up the 
> config directory. The current structure is 
> target/tmp
> target/tmp/config
> All of this is set up when mvn test is executed.
> Tests generate data under tmp - warehouse, metastore, etc. Having the conf 
> dir there (generated by mvn) makes it complicated to add per-test cleanup, 
> since the entire tmp directory cannot be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15007) Hive 1.2.2 release planning

2016-10-19 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590176#comment-15590176
 ] 

Vaibhav Gumashta commented on HIVE-15007:
-

Dummy patch to get a UT run

> Hive 1.2.2 release planning
> ---
>
> Key: HIVE-15007
> URL: https://issues.apache.org/jira/browse/HIVE-15007
> Project: Hive
>  Issue Type: Task
>Affects Versions: 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15007.branch-1.2.patch
>
>
> Discussed with [~spena] about triggering unit test runs for the 1.2.2 release; 
> creating a patch that triggers precommits looks like a good way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15007) Hive 1.2.2 release planning

2016-10-19 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-15007:

Status: Patch Available  (was: Open)

> Hive 1.2.2 release planning
> ---
>
> Key: HIVE-15007
> URL: https://issues.apache.org/jira/browse/HIVE-15007
> Project: Hive
>  Issue Type: Task
>Affects Versions: 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15007.branch-1.2.patch
>
>
> Discussed with [~spena] about triggering unit test runs for the 1.2.2 release; 
> creating a patch that triggers precommits looks like a good way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15007) Hive 1.2.2 release planning

2016-10-19 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-15007:

Attachment: HIVE-15007.branch-1.2.patch

> Hive 1.2.2 release planning
> ---
>
> Key: HIVE-15007
> URL: https://issues.apache.org/jira/browse/HIVE-15007
> Project: Hive
>  Issue Type: Task
>Affects Versions: 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15007.branch-1.2.patch
>
>
> Discussed with [~spena] about triggering unit test runs for the 1.2.2 release; 
> creating a patch that triggers precommits looks like a good way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590174#comment-15590174
 ] 

Prasanth Jayachandran commented on HIVE-15008:
--

lgtm, +1

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch, HIVE-15008.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15008:
--
Attachment: HIVE-15008.05.patch

Updated.

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch, HIVE-15008.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-19 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590171#comment-15590171
 ] 

Vineet Garg commented on HIVE-14913:


[~ashutoshc] Uploaded updated patch

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce testing 
> overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14913) Add new unit tests

2016-10-19 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-14913:
---
Status: Open  (was: Patch Available)

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce testing 
> overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14913) Add new unit tests

2016-10-19 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-14913:
---
Status: Patch Available  (was: Open)

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce testing 
> overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14913) Add new unit tests

2016-10-19 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-14913:
---
Attachment: HIVE-14913.8.patch

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce testing 
> overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590165#comment-15590165
 ] 

Siddharth Seth commented on HIVE-15008:
---

I'll just convert this to a Preconditions check.

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15013) Config dir generated for tests should not be under the test tmp directory

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590133#comment-15590133
 ] 

Hive QA commented on HIVE-15013:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834202/HIVE-15013.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1670/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1670/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1670/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834202 - PreCommit-HIVE-Build

> Config dir generated for tests should not be under the test tmp directory
> -
>
> Key: HIVE-15013
> URL: https://issues.apache.org/jira/browse/HIVE-15013
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15013.01.patch, HIVE-15013.02.patch
>
>
> mvn is used to clean up tmp directories created for tests, and to setup the 
> config directory. The current structure is 
> target/tmp
> target/tmp/config
> All of this is setup when mvn test is executed.
> Tests generate data under tmp - warehouse, metastore, etc. Having the conf 
> dir there (generated by mvn) makes it complicated to add per-test cleanup - 
> since the entire tmp directory cannot be removed.
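The layout concern above can be sketched with plain java.io (directory names here are illustrative, not the actual Hive layout): once the config dir is a sibling of tmp rather than nested under it, a per-test cleanup can wipe tmp wholesale without destroying the mvn-generated config.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class TestDirLayout {
    // Recursively delete a directory tree, as a per-test cleanup would.
    static void deleteRecursively(File dir) {
        File[] entries = dir.listFiles();
        if (entries != null) {
            for (File f : entries) {
                deleteRecursively(f);
            }
        }
        dir.delete();
    }

    public static void main(String[] args) throws IOException {
        File target = Files.createTempDirectory("target").toFile();
        // Fixed layout: conf is a sibling of tmp, not nested inside it.
        File tmp = new File(target, "tmp");
        File conf = new File(target, "conf");
        new File(tmp, "warehouse").mkdirs(); // test-generated data
        conf.mkdirs();                        // mvn-generated config

        deleteRecursively(tmp); // per-test cleanup wipes all test data...
        System.out.println(conf.exists()); // prints "true": config survives
    }
}
```

With the old nesting (target/tmp/config), the same recursive delete would have destroyed the config as well, which is why the whole tmp directory could not be removed per test.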



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590132#comment-15590132
 ] 

Prasanth Jayachandran commented on HIVE-15008:
--

{code}
fs.delete(baseFsDir, true);
{code}

Can you replace the assert with an exception? I'm worried about baseFsDir being 
empty while assertions are disabled. Alternatively, add a check that baseFsDir is 
not empty, or enforce at least 3 levels of depth and throw if not. 
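A minimal sketch of that guard, assuming the shape suggested above (the class and method names are hypothetical, not the actual MiniHS2 code): fail fast with an exception instead of relying on JVM assertions, which are disabled by default at runtime.

```java
import java.nio.file.Paths;

public class DeleteGuard {
    // Throws if the path is empty or too shallow to be a safe target for a
    // recursive fs.delete(baseFsDir, true).
    static void checkSafeToDelete(String baseFsDir) {
        if (baseFsDir == null || baseFsDir.trim().isEmpty()) {
            throw new IllegalStateException("baseFsDir is empty; refusing to delete");
        }
        // Enforce at least 3 path components, e.g. /tmp/hive/minihs2.
        int depth = Paths.get(baseFsDir).getNameCount();
        if (depth < 3) {
            throw new IllegalStateException(
                "baseFsDir '" + baseFsDir + "' is too shallow (" + depth + " levels)");
        }
        // Only after these checks would the recursive delete run.
    }

    public static void main(String[] args) {
        checkSafeToDelete("/tmp/hive/minihs2"); // passes: 3 levels deep
        try {
            checkSafeToDelete("/tmp");
            throw new AssertionError("should have thrown");
        } catch (IllegalStateException expected) {
            System.out.println("rejected shallow path as expected");
        }
    }
}
```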

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15023) SimpleFetchOptimizer needs to optimize limit=0

2016-10-19 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-15023:
---
Status: Patch Available  (was: Open)

[~ashutoshc], could you take a look? It sounds orthogonal to the jira that you 
mentioned. 

> SimpleFetchOptimizer needs to optimize limit=0
> --
>
> Key: HIVE-15023
> URL: https://issues.apache.org/jira/browse/HIVE-15023
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-15023.01.patch
>
>
> on current master
> {code}
> hive> explain select key from src limit 0;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: 0
>   Processor Tree:
> TableScan
>   alias: src
>   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: key (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
> Column stats: NONE
> Limit
>   Number of rows: 0
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   ListSink
> Time taken: 7.534 seconds, Fetched: 20 row(s)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15023) SimpleFetchOptimizer needs to optimize limit=0

2016-10-19 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-15023:
---
Attachment: HIVE-15023.01.patch

> SimpleFetchOptimizer needs to optimize limit=0
> --
>
> Key: HIVE-15023
> URL: https://issues.apache.org/jira/browse/HIVE-15023
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-15023.01.patch
>
>
> on current master
> {code}
> hive> explain select key from src limit 0;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: 0
>   Processor Tree:
> TableScan
>   alias: src
>   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: key (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
> Column stats: NONE
> Limit
>   Number of rows: 0
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   ListSink
> Time taken: 7.534 seconds, Fetched: 20 row(s)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15023) SimpleFetchOptimizer needs to optimize limit=0

2016-10-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590060#comment-15590060
 ] 

Ashutosh Chauhan commented on HIVE-15023:
-

This might have some overlap with HIVE-14866

> SimpleFetchOptimizer needs to optimize limit=0
> --
>
> Key: HIVE-15023
> URL: https://issues.apache.org/jira/browse/HIVE-15023
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> on current master
> {code}
> hive> explain select key from src limit 0;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: 0
>   Processor Tree:
> TableScan
>   alias: src
>   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: key (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
> Column stats: NONE
> Limit
>   Number of rows: 0
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   ListSink
> Time taken: 7.534 seconds, Fetched: 20 row(s)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14797) reducer number estimating may lead to data skew

2016-10-19 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590047#comment-15590047
 ] 

Xuefu Zhang commented on HIVE-14797:


I guess my point is that the chance for a user to hit this problem (however 
valid) is slim. As I understand it, this happens only if the user specifically 
picks 31 as the parallelism for whatever reason. BTW, Hive recommends 2^n for 
the number of buckets, so 31 buckets is even more of an anti-pattern.

My concern is whether it's worth the effort or risk to fix. However, please do 
feel free to tackle it completely.

> reducer number estimating may lead to data skew
> ---
>
> Key: HIVE-14797
> URL: https://issues.apache.org/jira/browse/HIVE-14797
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: roncenzhao
>Assignee: roncenzhao
> Attachments: HIVE-14797.2.patch, HIVE-14797.3.patch, 
> HIVE-14797.4.patch, HIVE-14797.patch
>
>
> HiveKey's hash code is generated by multiplying by 31, key by key, as 
> implemented in method `ObjectInspectorUtils.getBucketHashCode()`:
> for (int i = 0; i < bucketFields.length; i++) {
>   int fieldHash = ObjectInspectorUtils.hashCode(bucketFields[i], 
> bucketFieldInspectors[i]);
>   hashCode = 31 * hashCode + fieldHash;
> }
> The following example will lead to data skew:
> I have two tables, tbl1 and tbl2, and they have the same columns: a int, b 
> string. The values of column 'a' in both tables are not skewed, but the 
> values of column 'b' in both tables are skewed.
> When my sql is "select * from tbl1 join tbl2 on tbl1.a=tbl2.a and 
> tbl1.b=tbl2.b" and the estimated reducer number is 31, it will lead to data 
> skew.
> As we know, the HiveKey's hash code is generated by `hash(a)*31 + hash(b)`. 
> When the reducer number is 31, the reducer for each row is `hash(b)%31`. As 
> a result, the job will be skewed.
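The arithmetic in the report can be checked with a small standalone sketch: `combinedHash` mirrors the quoted combining loop for a two-column key; everything else (class name, sample values) is illustrative.

```java
public class SkewDemo {
    // Mirrors the combining step quoted from getBucketHashCode() for two fields:
    // hashCode = 31 * hashCode + fieldHash, applied field by field.
    static int combinedHash(int hashA, int hashB) {
        int hashCode = 0;
        hashCode = 31 * hashCode + hashA; // = hashA
        hashCode = 31 * hashCode + hashB; // = 31 * hashA + hashB
        return hashCode;
    }

    public static void main(String[] args) {
        int reducers = 31;
        int hashB = 7; // any fixed hash for column b
        for (int hashA = 0; hashA < 1000; hashA++) {
            int reducer = Math.floorMod(combinedHash(hashA, hashB), reducers);
            // (31 * hashA + hashB) % 31 == hashB % 31, regardless of hashA,
            // so column a contributes nothing to the reducer assignment.
            if (reducer != Math.floorMod(hashB, reducers)) {
                throw new AssertionError("unexpected reducer " + reducer);
            }
        }
        System.out.println("all rows with hash(b)=7 land on reducer "
            + Math.floorMod(hashB, reducers)); // reducer 7
    }
}
```

This is exactly the skew described: with 31 reducers, all rows sharing a `b` value collapse onto a single reducer no matter how well-distributed `a` is.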



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590030#comment-15590030
 ] 

Hive QA commented on HIVE-15008:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834201/HIVE-15008.04.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=265)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1669/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1669/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1669/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834201 - PreCommit-HIVE-Build

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14999) SparkClientUtilities does not support viewFS

2016-10-19 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590021#comment-15590021
 ] 

Xuefu Zhang commented on HIVE-14999:


+1

> SparkClientUtilities does not support viewFS
> 
>
> Key: HIVE-14999
> URL: https://issues.apache.org/jira/browse/HIVE-14999
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Teruyoshi Zenmyo
> Attachments: HIVE-14999.patch
>
>
> In SparkClientUtilities.urlFromPathString(), viewFS is not considered and 
> jars on viewFS cannot be found.
> This causes a number of problems; for instance, custom UDFs are not 
> available on viewFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13288) Confusing exception message in DagUtils.localizeResource

2016-10-19 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589954#comment-15589954
 ] 

Wei Zheng commented on HIVE-13288:
--

This likely has been solved by HIVE-12528

> Confusing exception message in DagUtils.localizeResource
> 
>
> Key: HIVE-13288
> URL: https://issues.apache.org/jira/browse/HIVE-13288
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients
>Affects Versions: 1.2.1
>Reporter: Jeff Zhang
>
> I got the following exception when querying through HiveServer2. Checking the 
> source code, it is due to an error when copying data from local to HDFS. 
> But the IOException is ignored under the assumption that another thread is 
> also writing. I don't think it makes sense to assume that; at the least, the 
> IOException should be logged. 
> {code}
> LOG.info("Localizing resource because it does not exist: " + src + " to dest: 
> " + dest);
>   try {
> destFS.copyFromLocalFile(false, false, src, dest);
>   } catch (IOException e) {
> LOG.info("Looks like another thread is writing the same file will 
> wait.");
> int waitAttempts =
> 
> conf.getInt(HiveConf.ConfVars.HIVE_LOCALIZE_RESOURCE_NUM_WAIT_ATTEMPTS.varname,
> 
> HiveConf.ConfVars.HIVE_LOCALIZE_RESOURCE_NUM_WAIT_ATTEMPTS.defaultIntVal);
> long sleepInterval = HiveConf.getTimeVar(
> conf, HiveConf.ConfVars.HIVE_LOCALIZE_RESOURCE_WAIT_INTERVAL,
> TimeUnit.MILLISECONDS);
> LOG.info("Number of wait attempts: " + waitAttempts + ". Wait 
> interval: "
> + sleepInterval);
> boolean found = false;
> {code}
> {noformat}
> 2016-03-15 11:25:39,921 INFO  [HiveServer2-Background-Pool: Thread-249]: 
> tez.DagUtils (DagUtils.java:getHiveJarDirectory(876)) - Jar dir is 
> null/directory doesn't exist. Choosing HIVE_INSTALL_DIR - /user/jeff/.hiveJars
> 2016-03-15 11:25:40,058 INFO  [HiveServer2-Background-Pool: Thread-249]: 
> tez.DagUtils (DagUtils.java:localizeResource(952)) - Localizing resource 
> because it does not exist: 
> file:/usr/hdp/2.3.2.0-2950/hive/lib/hive-exec-1.2.1.2.3.2.0-2950.jar to dest: 
> hdfs://sandbox.hortonworks.com:8020/user/jeff/.hiveJars/hive-exec-1.2.1.2.3.2.0-2950-a97c953db414a4f792d868e2b0417578a61ccfa368048016926117b641b07f34.jar
> 2016-03-15 11:25:40,063 INFO  [HiveServer2-Background-Pool: Thread-249]: 
> tez.DagUtils (DagUtils.java:localizeResource(956)) - Looks like another 
> thread is writing the same file will wait.
> 2016-03-15 11:25:40,064 INFO  [HiveServer2-Background-Pool: Thread-249]: 
> tez.DagUtils (DagUtils.java:localizeResource(963)) - Number of wait attempts: 
> 5. Wait interval: 5000
> 2016-03-15 11:25:53,548 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(294)) - Client 
> protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
> 2016-03-15 11:25:53,548 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> metastore.HiveMetaStore (HiveMetaStore.java:logInfo(747)) - 1: Shutting down 
> the object store...
> 2016-03-15 11:25:53,549 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(372)) - 
> ugi=hive/sandbox.hortonworks@example.com   ip=unknown-ip-addr  
> cmd=Shutting down the object store...
> 2016-03-15 11:25:53,549 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> metastore.HiveMetaStore (HiveMetaStore.java:logInfo(747)) - 1: Metastore 
> shutdown complete.
> 2016-03-15 11:25:53,549 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(372)) - 
> ugi=hive/sandbox.hortonworks@example.com   ip=unknown-ip-addr  
> cmd=Metastore shutdown complete.
> 2016-03-15 11:25:53,573 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> session.SessionState (SessionState.java:createPath(641)) - Created local 
> directory: /tmp/e43fbaab-a659-4331-90cb-0ea0b2098e25_resources
> 2016-03-15 11:25:53,577 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> session.SessionState (SessionState.java:createPath(641)) - Created HDFS 
> directory: /tmp/hive/ambari-qa/e43fbaab-a659-4331-90cb-0ea0b2098e25
> 2016-03-15 11:25:53,582 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> session.SessionState (SessionState.java:createPath(641)) - Created local 
> directory: /tmp/hive/e43fbaab-a659-4331-90cb-0ea0b2098e25
> 2016-03-15 11:25:53,587 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> session.SessionState (SessionState.java:createPath(641)) - Created HDFS 
> directory: 
> /tmp/hive/ambari-qa/e43fbaab-a659-4331-90cb-0ea0b2098e25/_tmp_space.db
> 2016-03-15 11:25:53,592 INFO  [HiveServer2-Handler-Pool: Thread-48]: 
> session.HiveSessionImpl (HiveSessionImpl.java:setOperationLogSessionDir(236)) 
> - Operation log session di

[jira] [Commented] (HIVE-14496) Enable Calcite rewriting with materialized views

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589923#comment-15589923
 ] 

Hive QA commented on HIVE-14496:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834195/HIVE-14496.patch

{color:green}SUCCESS:{color} +1 due to 18 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 279 failed/errored test(s), 10571 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key2]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_joins] 
(batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=208)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[allcolref_in_udf] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_join] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_join_pkfk]
 (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_9] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join17] (batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join1] (batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join22] (batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join24] (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join2] (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join3] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_reordering_values]
 (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_stats2] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_stats] 
(batchId=43)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_without_localtask]
 (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_12] 
(batchId=30)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_SortUnionTransposeRule]
 (batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_auto_join17] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_cross_product_check_2]
 (batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_join1] 
(batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_subq_exists] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_subq_exists] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog2] (batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog_partitioner] 
(batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer10] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer11] 
(batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer15] 
(batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cross_join] (batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cross_join_merge] 
(batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cross_product_check_1] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cross_product_check_2] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cteViews] (batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_mat_3] (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_mat_4] (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_join2] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[deleteAnalyze] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[escape_comments] 
(batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_ddl] (batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_cond_pushdown] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_join_breaktask] 
(batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_join_pushdown] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[index_auto_self_join] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDr

[jira] [Commented] (HIVE-14962) Add integrations tests to test credential provider support using S3 test framework

2016-10-19 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589925#comment-15589925
 ] 

Vihang Karajgaonkar commented on HIVE-14962:


I looked into the possibility of enhancing the S3 test framework in 
hive-blobstore to use the hadoop credential provider. Unfortunately, 
credential provider support for S3 in hadoop is only available from 2.8.0 
(HADOOP-12723 and HADOOP-12548). Hive currently uses hadoop 2.7.2, which does 
not have credential provider support for S3AFileSystem. Once Hive updates to 
hadoop-2.8.0, we should be able to add the itests for S3 using the credential 
provider.

> Add integrations tests to test credential provider support using S3 test 
> framework
> --
>
> Key: HIVE-14962
> URL: https://issues.apache.org/jira/browse/HIVE-14962
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>
> The S3 test framework is newly checked in to the hive-blobstore project. 
> Currently, it supports adding the S3 keys directly in the configuration file, 
> which cannot be directly used to test the credential provider change. In order to 
> test credential provider change, we need to configure the dfs used by the 
> itests to run using credential provider which has the S3 keys. This might 
> involve more work within the S3 tests framework in hive-blobstore project. 
> Creating a sub-task for this under HIVE-14822



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15010) Make LockComponent aware if it's part of dynamic partition operation

2016-10-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-15010:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

committed to master
thanks Wei for the review

> Make LockComponent aware if it's part of dynamic partition operation
> 
>
> Key: HIVE-15010
> URL: https://issues.apache.org/jira/browse/HIVE-15010
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.2.0
>
> Attachments: HIVE-15010.patch
>
>
> This is needed by HIVE-10924, but I want to make sure it's a separate patch to 
> make any backporting easier



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-19 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589892#comment-15589892
 ] 

Vineet Garg commented on HIVE-14913:


Ok let me re-generate the patch

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch
>
>
> Moving a bunch of tests from system tests to hive unit tests to reduce testing 
> overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589884#comment-15589884
 ] 

Ashutosh Chauhan commented on HIVE-14913:
-

Alright. Thanks for the explanation. However, the patch is not applying cleanly 
anymore. Can you update it one more time?

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch
>
>
> Moving a bunch of tests from system tests to hive unit tests to reduce testing 
> overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14964) Failing Test: Fix TestBeelineArgParsing tests

2016-10-19 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589886#comment-15589886
 ] 

Siddharth Seth commented on HIVE-14964:
---

{code}
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-8u102-b14.1-1~bpo8+1-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)
{code}


[~vgumashta] - any thoughts on what is required here, and whether it is a bug? 
(I believe it is.) A potentially simple fix would be to ensure the files being 
traversed are jar files (by extension) before interpreting them as jar files. 
I would like to understand this better, though. Why are we walking through 
jars and loading each of them individually?
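That extension check could look something like the following sketch (the JarFilter class and its methods are hypothetical names, not Beeline's actual code):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class JarFilter {
    // Name-based check: only entries ending in .jar are treated as jars.
    static boolean looksLikeJar(String name) {
        return name.toLowerCase().endsWith(".jar");
    }

    // Returns only the plain files in dir whose names look like jars, so
    // anything else (text files, subdirectories) is never handed to a jar
    // loader and cannot trigger "not a jar" failures.
    static List<File> jarsOnly(File dir) {
        List<File> jars = new ArrayList<>();
        File[] entries = dir.listFiles();
        if (entries == null) {
            return jars; // dir is not a directory, or listing failed
        }
        for (File f : entries) {
            if (f.isFile() && looksLikeJar(f.getName())) {
                jars.add(f);
            }
        }
        return jars;
    }
}
```

Filtering by extension would sidestep the immediate failure, though it doesn't answer the larger question of why every jar in the directory is loaded individually in the first place.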

> Failing Test: Fix TestBeelineArgParsing tests
> -
>
> Key: HIVE-14964
> URL: https://issues.apache.org/jira/browse/HIVE-14964
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jason Dere
>
> Failing last several builds:
> {noformat}
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] 0.12 
> sec12
>  
> org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
> 29 ms   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] 42 ms   
> 12
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14391) TestAccumuloCliDriver is not executed during precommit tests

2016-10-19 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589877#comment-15589877
 ] 

Siddharth Seth commented on HIVE-14391:
---

Disabling the Accumulo test again on ptest, until the patch goes in - so that 
this doesn't show up for all runs.

> TestAccumuloCliDriver is not executed during precommit tests
> 
>
> Key: HIVE-14391
> URL: https://issues.apache.org/jira/browse/HIVE-14391
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Peter Vary
> Attachments: HIVE-14391.2.patch, HIVE-14391.patch
>
>
> according to for example this build result:
> https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/685/testReport/org.apache.hadoop.hive.cli/
> there is no 'TestAccumuloCliDriver' being run during precommit testing...but 
> I see no reason why or how it was excluded inside the project;
> my maven executes it when I start it with {{-Dtest=TestAccumuloCliDriver}} - 
> so I think the properties/profiles aren't preventing it.
> Maybe I am missing something obvious ;)
> (note: my TestAccumuloCliDriver executions fail with errors.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14391) TestAccumuloCliDriver is not executed during precommit tests

2016-10-19 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589874#comment-15589874
 ] 

Siddharth Seth commented on HIVE-14391:
---

bq. Since this jira was unassigned I thought it was ok to take it. If not so, I 
am sorry to misbehaving again and you can take it back.
[~pvary] - The jira was unassigned... anyone can take it. If someone has a 
reservation / is already working on it - I think it's up to them to comment 
back. The 'misbehaving' line is kind of funny :)

The patch looks good to me. The only question is whether we want to enable the 
tests or not. [~ashutoshc]? any idea? why was it disabled, is it still 
relevant? Will commit it later if there's no response. I checked the ptest 
node, and the relevant config changes to enable these tests are already in 
place. Runtime is over 2m30s.

> TestAccumuloCliDriver is not executed during precommit tests
> 
>
> Key: HIVE-14391
> URL: https://issues.apache.org/jira/browse/HIVE-14391
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Peter Vary
> Attachments: HIVE-14391.2.patch, HIVE-14391.patch
>
>
> according to for example this build result:
> https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/685/testReport/org.apache.hadoop.hive.cli/
> there is no 'TestAccumuloCliDriver' being run during precommit testing...but 
> I see no reason why or how it was excluded inside the project;
> my maven executes it when I start it with {{-Dtest=TestAccumuloCliDriver}} - 
> so I think the properties/profiles aren't preventing it.
> Maybe I am missing something obvious ;)
> (note: my TestAccumuloCliDriver executions fail with errors.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-19 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589848#comment-15589848
 ] 

Vineet Garg commented on HIVE-14913:


acid_globallimit and orc_ppd_basic were modified, but the first one has been 
failing before this, and I am not able to reproduce orc_ppd_basic's failure on 
my local system.

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch
>
>
> Moving a bunch of tests from system tests to hive unit tests to reduce testing 
> overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14993:
--
Attachment: HIVE-14993.3.patch

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, HIVE-14993.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589839#comment-15589839
 ] 

Ashutosh Chauhan commented on HIVE-14913:
-

At least [orc_ppd_basic] is one of the tests modified in this patch. Not sure about the others.

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce 
> testing overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-19 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589836#comment-15589836
 ] 

Vineet Garg commented on HIVE-14913:


No, none of them are related.

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce 
> testing overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589834#comment-15589834
 ] 

Ashutosh Chauhan commented on HIVE-14913:
-

Are any of these test failures related?

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce 
> testing overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15009) ptest - avoid unnecessary cleanup from previous test runs in batch-exec.vm

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15009:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

Committed. Thanks for the review.

> ptest - avoid unnecessary cleanup from previous test runs in batch-exec.vm
> --
>
> Key: HIVE-15009
> URL: https://issues.apache.org/jira/browse/HIVE-15009
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.2.0
>
> Attachments: HIVE-15009.01.patch, HIVE-15009.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15022) Missing hs2-connection-timed-out in BeeLine.properties

2016-10-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-15022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-15022:
---
   Resolution: Fixed
Fix Version/s: 2.1.1
   2.2.0
   Status: Resolved  (was: Patch Available)

> Missing hs2-connection-timed-out in BeeLine.properties
> --
>
> Key: HIVE-15022
> URL: https://issues.apache.org/jira/browse/HIVE-15022
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
> Fix For: 2.2.0, 2.1.1
>
> Attachments: HIVE-15022, HIVE-15022.patch
>
>
> If there is a timeout from the Thrift server, the message cannot be 
> resolved from BeeLine.properties.
> A leftover '\' causes it to be merged with the previous line
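
The merge Peter describes follows from how `java.util.Properties` parses a trailing backslash: it is a logical-line continuation marker, so the next line - including what was meant to be a new key - is appended to the previous value. A minimal sketch (the first property name here is invented for illustration; `hs2-connection-timed-out` is the key from the issue title):

```java
import java.io.StringReader;
import java.util.Properties;

public class TrailingBackslashDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical first key for illustration. Its value ends with a
        // stray '\', which Properties treats as a line continuation, so the
        // second line no longer defines its own key.
        String text = "some-message=Connection failed\\\n"
                    + "hs2-connection-timed-out=Connection timed out";
        Properties props = new Properties();
        props.load(new StringReader(text));

        // The second key was swallowed into the first value:
        System.out.println(props.getProperty("some-message"));
        System.out.println(props.containsKey("hs2-connection-timed-out")); // false
    }
}
```

Removing the stray backslash restores the second key as a property of its own, which is the fix the attached patch applies.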



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15022) Missing hs2-connection-timed-out in BeeLine.properties

2016-10-19 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-15022:
--
Attachment: HIVE-15022.patch

With the correct name :)

> Missing hs2-connection-timed-out in BeeLine.properties
> --
>
> Key: HIVE-15022
> URL: https://issues.apache.org/jira/browse/HIVE-15022
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-15022, HIVE-15022.patch
>
>
> If there is a timeout from the Thrift server, the message cannot be 
> resolved from BeeLine.properties.
> A leftover '\' causes it to be merged with the previous line



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15022) Missing hs2-connection-timed-out in BeeLine.properties

2016-10-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-15022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589813#comment-15589813
 ] 

Sergio Peña commented on HIVE-15022:


Simple patch.
+1

> Missing hs2-connection-timed-out in BeeLine.properties
> --
>
> Key: HIVE-15022
> URL: https://issues.apache.org/jira/browse/HIVE-15022
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-15022, HIVE-15022.patch
>
>
> If there is a timeout from the Thrift server, the message cannot be 
> resolved from BeeLine.properties.
> A leftover '\' causes it to be merged with the previous line



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14753) Track the number of open/closed/abandoned sessions in HS2

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589804#comment-15589804
 ] 

Hive QA commented on HIVE-14753:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834184/HIVE-14753.3.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 10581 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key2]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_joins] 
(batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=208)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[index_auto_partitioned] 
(batchId=10)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1667/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1667/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1667/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834184 - PreCommit-HIVE-Build

> Track the number of open/closed/abandoned sessions in HS2
> -
>
> Key: HIVE-14753
> URL: https://issues.apache.org/jira/browse/HIVE-14753
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2
>Reporter: Barna Zsombor Klara
>Assignee: Barna Zsombor Klara
> Fix For: 2.2.0
>
> Attachments: HIVE-14753.1.patch, HIVE-14753.2.patch, 
> HIVE-14753.3.patch, HIVE-14753.patch
>
>
> We should be able to track the number of sessions since the startup of the 
> HS2 instance, as well as the average lifetime of a session.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15022) Missing hs2-connection-timed-out in BeeLine.properties

2016-10-19 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-15022:
--
Attachment: HIVE-15022

The patch to remove the extra '\'

> Missing hs2-connection-timed-out in BeeLine.properties
> --
>
> Key: HIVE-15022
> URL: https://issues.apache.org/jira/browse/HIVE-15022
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-15022
>
>
> If there is a timeout from the Thrift server, the message cannot be 
> resolved from BeeLine.properties.
> A leftover '\' causes it to be merged with the previous line



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15022) Missing hs2-connection-timed-out in BeeLine.properties

2016-10-19 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-15022:
--
Status: Patch Available  (was: Open)

> Missing hs2-connection-timed-out in BeeLine.properties
> --
>
> Key: HIVE-15022
> URL: https://issues.apache.org/jira/browse/HIVE-15022
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-15022
>
>
> If there is a timeout from the Thrift server, the message cannot be 
> resolved from BeeLine.properties.
> A leftover '\' causes it to be merged with the previous line



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14580) Introduce || operator

2016-10-19 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589740#comment-15589740
 ] 

Pengcheng Xiong commented on HIVE-14580:


LGTM +1

> Introduce || operator
> -
>
> Key: HIVE-14580
> URL: https://issues.apache.org/jira/browse/HIVE-14580
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Ashutosh Chauhan
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14580.1.patch, HIVE-14580.2.patch, 
> HIVE-14580.3.patch, HIVE-14580.4.patch
>
>
> Functionally equivalent to the concat() UDF, but the standard allows usage 
> of || for string concatenation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14642) handle insert overwrite for MM tables

2016-10-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14642:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the feature branch

> handle insert overwrite for MM tables
> -
>
> Key: HIVE-14642
> URL: https://issues.apache.org/jira/browse/HIVE-14642
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14642.01.patch, HIVE-14642.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14996) handle load for MM tables

2016-10-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14996:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the feature branch

> handle load for MM tables
> -
>
> Key: HIVE-14996
> URL: https://issues.apache.org/jira/browse/HIVE-14996
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14996.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-14979) Removing stale Zookeeper locks at HiveServer2 initialization

2016-10-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589471#comment-15589471
 ] 

Sergey Shelukhin edited comment on HIVE-14979 at 10/19/16 6:32 PM:
---

Hmm... sorry, I still don't quite understand the problem.

TL;DR the patch makes sense if it is intended to work around some network 
timeouts, or ZK not deleting nodes the way we expect. Otherwise I think we need 
to make sure it's compatible with timeout logic and/or just use ZK expiration.

TL:
Do the locks in ZK already expire at some point after HS2 dies? 
If the locks don't expire, we should make them expire as per below ;)
If they do...
From my understanding, ZK cleans up ephemeral nodes immediately when the 
process goes down in the normal case (based on the connection breaking), 
regardless of the timeout set for the session (that is more of a network 
timeout and would result in nodes being cleaned up if the connection doesn't 
immediately break, or in other "abnormal" cases).
Is the timeout we add some additional logical timeout on top of normal cleanup, 
so that even when HS2 dies and the connection is broken, ZK doesn't clean up 
the nodes for some time after the disconnect?

If yes, and we set a large timeout for a reason, we should not clean them up 
before timeout. The reason for a large timeout could be that the locks are 
taken for external jobs that don't die immediately (or at all?) when HS2 dies.
If yes, and we set a large timeout for no good reason (=> we believe we can 
clean them up during startup, as we do in the patch), we should also reduce the 
timeout (or remove it and use the default).







was (Author: sershe):
Hmm... sorry, I still don't quite understand the problem.

TL;DR the patch makes sense if it is to work around some network timeouts, or 
ZK not deleting nodes the way we expect. Otherwise I think we need to make sure 
it's compatible with timeout logic and/or just use ZK expiration.

TL:
Do the locks in ZK already expire at some point after HS2 dies? 
If the locks don't expire, we should make them expire as per below ;)
If they do...
From my understanding, ZK cleans up ephemeral nodes immediately when the 
process goes down in the normal case (based on the connection breaking), 
regardless of the timeout set for the session (that is more of a network 
timeout and would result in nodes being cleaned up if the connection doesn't 
immediately break, or in other "abnormal" cases).
Is the timeout we add some additional logical timeout on top of normal cleanup, 
so that even when HS2 dies and the connection is broken, ZK doesn't clean up 
the nodes for some time after the disconnect?

If yes, and we set a large timeout for a reason, we should not clean them up 
before timeout. The reason for a large timeout could be that the locks are 
taken for external jobs that don't die immediately (or at all?) when HS2 dies.
If yes, and we set a large timeout for no good reason (=> we believe we can 
clean them up during startup, as we do in the patch), we should also reduce the 
timeout (or remove it and use the default).






> Removing stale Zookeeper locks at HiveServer2 initialization
> 
>
> Key: HIVE-14979
> URL: https://issues.apache.org/jira/browse/HIVE-14979
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14979.3.patch, HIVE-14979.4.patch, HIVE-14979.patch
>
>
> HiveServer2 can use ZooKeeper to store tokens that indicate that particular 
> tables are locked, via the creation of persistent ZooKeeper objects. 
> A problem can occur when a HiveServer2 instance creates a lock on a table and 
> the HiveServer2 instance crashes ("Out of Memory", for example) and the locks 
> are not released in ZooKeeper. Such a lock will then remain until it is 
> manually cleared by an admin.
> There should be a way to remove stale locks at HiveServer2 initialization, 
> making the admin's life easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14979) Removing stale Zookeeper locks at HiveServer2 initialization

2016-10-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589471#comment-15589471
 ] 

Sergey Shelukhin commented on HIVE-14979:
-

Hmm... sorry, I still don't quite understand the problem.

TL;DR the patch makes sense if it is to work around some network timeouts, or 
ZK not deleting nodes the way we expect. Otherwise I think we need to make sure 
it's compatible with timeout logic and/or just use ZK expiration.

TL:
Do the locks in ZK already expire at some point after HS2 dies? 
If the locks don't expire, we should make them expire as per below ;)
If they do...
From my understanding, ZK cleans up ephemeral nodes immediately when the 
process goes down in the normal case (based on the connection breaking), 
regardless of the timeout set for the session (that is more of a network 
timeout and would result in nodes being cleaned up if the connection doesn't 
immediately break, or in other "abnormal" cases).
Is the timeout we add some additional logical timeout on top of normal cleanup, 
so that even when HS2 dies and the connection is broken, ZK doesn't clean up 
the nodes for some time after the disconnect?

If yes, and we set a large timeout for a reason, we should not clean them up 
before timeout. The reason for a large timeout could be that the locks are 
taken for external jobs that don't die immediately (or at all?) when HS2 dies.
If yes, and we set a large timeout for no good reason (=> we believe we can 
clean them up during startup, as we do in the patch), we should also reduce the 
timeout (or remove it and use the default).
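
The distinction being probed here - immediate cleanup on a clean session close versus timeout-based cleanup after an abrupt disconnect - can be sketched with a toy in-memory model. This illustrates only the lifetime semantics under discussion, not ZooKeeper's actual implementation; the class, method names, and lock paths are invented, and a real server would also push the session deadline forward on every client heartbeat, which is omitted here:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of ephemeral-node lifetime (illustrative only): a node vanishes
// immediately when its owning session is closed cleanly, but survives until
// the session timeout elapses when the connection merely drops (e.g. the
// owning process was killed).
public class EphemeralStore {
    private final Map<String, Long> nodeOwner = new HashMap<>();   // path -> session id
    private final Map<Long, Long> sessionExpiry = new HashMap<>(); // session id -> deadline
    private final long sessionTimeoutMs;

    public EphemeralStore(long sessionTimeoutMs) {
        this.sessionTimeoutMs = sessionTimeoutMs;
    }

    public void create(String path, long sessionId, long nowMs) {
        nodeOwner.put(path, sessionId);
        // A real server would keep refreshing this deadline from heartbeats;
        // we set it once for simplicity.
        sessionExpiry.put(sessionId, nowMs + sessionTimeoutMs);
    }

    // Clean close: the server learns immediately and drops the session's nodes.
    public void closeSession(long sessionId) {
        nodeOwner.values().removeIf(owner -> owner == sessionId);
        sessionExpiry.remove(sessionId);
    }

    // Abrupt disconnect: nothing happens until the deadline passes.
    public void advanceTime(long nowMs) {
        sessionExpiry.entrySet().removeIf(entry -> {
            if (entry.getValue() <= nowMs) {
                long dead = entry.getKey();
                nodeOwner.values().removeIf(owner -> owner == dead);
                return true;
            }
            return false;
        });
    }

    public boolean exists(String path) {
        return nodeOwner.containsKey(path);
    }

    public static void main(String[] args) {
        EphemeralStore store = new EphemeralStore(100);
        store.create("/hive/locks/t1", 1L, 0); // session 1 then crashes
        store.create("/hive/locks/t2", 2L, 0); // session 2 closes cleanly
        store.closeSession(2L);
        System.out.println(store.exists("/hive/locks/t2")); // false: gone at once
        store.advanceTime(50);
        System.out.println(store.exists("/hive/locks/t1")); // true: within timeout
        store.advanceTime(150);
        System.out.println(store.exists("/hive/locks/t1")); // false: timeout elapsed
    }
}
```

The question in the thread maps onto the second case: if Hive's locks behave like the crashed session with a deliberately large timeout, cleaning them up at HS2 startup would race with that timeout.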






> Removing stale Zookeeper locks at HiveServer2 initialization
> 
>
> Key: HIVE-14979
> URL: https://issues.apache.org/jira/browse/HIVE-14979
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14979.3.patch, HIVE-14979.4.patch, HIVE-14979.patch
>
>
> HiveServer2 can use ZooKeeper to store tokens that indicate that particular 
> tables are locked, via the creation of persistent ZooKeeper objects. 
> A problem can occur when a HiveServer2 instance creates a lock on a table and 
> the HiveServer2 instance crashes ("Out of Memory", for example) and the locks 
> are not released in ZooKeeper. Such a lock will then remain until it is 
> manually cleared by an admin.
> There should be a way to remove stale locks at HiveServer2 initialization, 
> making the admin's life easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-11394) Enhance EXPLAIN display for vectorization

2016-10-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589439#comment-15589439
 ] 

Sergey Shelukhin edited comment on HIVE-11394 at 10/19/16 6:18 PM:
---

I keep merging stuff into a feature branch and every time I do, this patch is 
in a different state. 
It's a Schrodinger patch, you never know if it's committed or reverted until 
you try to merge.


was (Author: sershe):
I keep merging stuff into feature branch and every time this patch is in a 
different state. 
It's a Schrodinger patch, you never know if it's committed or reverted until 
you try to merge.

> Enhance EXPLAIN display for vectorization
> -
>
> Key: HIVE-11394
> URL: https://issues.apache.org/jira/browse/HIVE-11394
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: HIVE-11394.01.patch, HIVE-11394.02.patch, 
> HIVE-11394.03.patch, HIVE-11394.04.patch, HIVE-11394.05.patch, 
> HIVE-11394.06.patch, HIVE-11394.07.patch, HIVE-11394.08.patch, 
> HIVE-11394.09.patch, HIVE-11394.091.patch, HIVE-11394.092.patch, 
> HIVE-11394.093.patch
>
>
> Add detail to the EXPLAIN output showing why a Map and Reduce work is not 
> vectorized.
> New syntax is: EXPLAIN VECTORIZATION \[ONLY\] 
> \[SUMMARY|OPERATOR|EXPRESSION|DETAIL\]
> The ONLY option suppresses most non-vectorization elements.
> SUMMARY shows vectorization information for the PLAN (is vectorization 
> enabled) and a summary of Map and Reduce work.
> OPERATOR shows vectorization information for operators.  E.g. Filter 
> Vectorization.  It includes all information of SUMMARY, too.
> EXPRESSION shows vectorization information for expressions.  E.g. 
> predicateExpression.  It includes all information of SUMMARY and OPERATOR, 
> too.
> DETAIL shows very detailed vectorization information.
> It includes all information of SUMMARY, OPERATOR, and EXPRESSION too.
> The optional clause defaults are not ONLY and SUMMARY.
> ---
> Here are some examples:
> EXPLAIN VECTORIZATION example:
> (Note the PLAN VECTORIZATION, Map Vectorization, Reduce Vectorization 
> sections)
> Since SUMMARY is the default, it is the output of EXPLAIN VECTORIZATION 
> SUMMARY.
> Under Reducer 3’s "Reduce Vectorization:" you’ll see
> notVectorizedReason: Aggregation Function UDF avg parameter expression for 
> GROUPBY operator: Data type struct of 
> Column\[VALUE._col2\] not supported
> For Reducer 2’s "Reduce Vectorization:" you’ll see "groupByVectorOutput:": 
> "false" which says a node has a GROUP BY with an AVG or some other aggregator 
> that outputs a non-PRIMITIVE type (e.g. STRUCT) and all downstream operators 
> are row-mode.  I.e. not vector output.
> If "usesVectorUDFAdaptor:": "false" were true, it would say there was at 
> least one vectorized expression using VectorUDFAdaptor.
> And, "allNative:": "false" will be true when all operators are native.  
> Today, GROUP BY and FILE SINK are not native.  MAP JOIN and REDUCE SINK are 
> conditionally native.  FILTER and SELECT are native.
> {code}
> PLAN VECTORIZATION:
>   enabled: true
>   enabledConditionsMet: [hive.vectorized.execution.enabled IS true]
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-1
> Tez
> ...
>   Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
> ...
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: alltypesorc
>   Statistics: Num rows: 12288 Data size: 36696 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   Select Operator
> expressions: cint (type: int)
> outputColumnNames: cint
> Statistics: Num rows: 12288 Data size: 36696 Basic stats: 
> COMPLETE Column stats: COMPLETE
> Group By Operator
>   keys: cint (type: int)
>   mode: hash
>   outputColumnNames: _col0
>   Statistics: Num rows: 5775 Data size: 17248 Basic 
> stats: COMPLETE Column stats: COMPLETE
>   Reduce Output Operator
> key expressions: _col0 (type: int)
> sort order: +
> Map-reduce partition columns: _col0 (type: int)
> Statistics: Num rows: 5775 Data size: 17248 Basic 
> stats: COMPLETE Column stats: COMPLETE
> Execution mode: vectorized, llap
> LLAP IO: all 

[jira] [Commented] (HIVE-11394) Enhance EXPLAIN display for vectorization

2016-10-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589439#comment-15589439
 ] 

Sergey Shelukhin commented on HIVE-11394:
-

I keep merging stuff into feature branch and every time this patch is in a 
different state. 
It's a Schrodinger patch, you never know if it's committed or reverted until 
you try to merge.

> Enhance EXPLAIN display for vectorization
> -
>
> Key: HIVE-11394
> URL: https://issues.apache.org/jira/browse/HIVE-11394
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: HIVE-11394.01.patch, HIVE-11394.02.patch, 
> HIVE-11394.03.patch, HIVE-11394.04.patch, HIVE-11394.05.patch, 
> HIVE-11394.06.patch, HIVE-11394.07.patch, HIVE-11394.08.patch, 
> HIVE-11394.09.patch, HIVE-11394.091.patch, HIVE-11394.092.patch, 
> HIVE-11394.093.patch
>
>
> Add detail to the EXPLAIN output showing why a Map and Reduce work is not 
> vectorized.
> New syntax is: EXPLAIN VECTORIZATION \[ONLY\] 
> \[SUMMARY|OPERATOR|EXPRESSION|DETAIL\]
> The ONLY option suppresses most non-vectorization elements.
> SUMMARY shows vectorization information for the PLAN (is vectorization 
> enabled) and a summary of Map and Reduce work.
> OPERATOR shows vectorization information for operators.  E.g. Filter 
> Vectorization.  It includes all information of SUMMARY, too.
> EXPRESSION shows vectorization information for expressions.  E.g. 
> predicateExpression.  It includes all information of SUMMARY and OPERATOR, 
> too.
> DETAIL shows very detailed vectorization information.
> It includes all information of SUMMARY, OPERATOR, and EXPRESSION too.
> The optional clause defaults are not ONLY and SUMMARY.
> ---
> Here are some examples:
> EXPLAIN VECTORIZATION example:
> (Note the PLAN VECTORIZATION, Map Vectorization, Reduce Vectorization 
> sections)
> Since SUMMARY is the default, it is the output of EXPLAIN VECTORIZATION 
> SUMMARY.
> Under Reducer 3’s "Reduce Vectorization:" you’ll see
> notVectorizedReason: Aggregation Function UDF avg parameter expression for 
> GROUPBY operator: Data type struct of 
> Column\[VALUE._col2\] not supported
> For Reducer 2’s "Reduce Vectorization:" you’ll see "groupByVectorOutput:": 
> "false" which says a node has a GROUP BY with an AVG or some other aggregator 
> that outputs a non-PRIMITIVE type (e.g. STRUCT) and all downstream operators 
> are row-mode.  I.e. not vector output.
> If "usesVectorUDFAdaptor:": "false" were true, it would say there was at 
> least one vectorized expression using VectorUDFAdaptor.
> And, "allNative:": "false" will be true when all operators are native.  
> Today, GROUP BY and FILE SINK are not native.  MAP JOIN and REDUCE SINK are 
> conditionally native.  FILTER and SELECT are native.
> {code}
> PLAN VECTORIZATION:
>   enabled: true
>   enabledConditionsMet: [hive.vectorized.execution.enabled IS true]
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-1
> Tez
> ...
>   Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
> ...
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: alltypesorc
>   Statistics: Num rows: 12288 Data size: 36696 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   Select Operator
> expressions: cint (type: int)
> outputColumnNames: cint
> Statistics: Num rows: 12288 Data size: 36696 Basic stats: 
> COMPLETE Column stats: COMPLETE
> Group By Operator
>   keys: cint (type: int)
>   mode: hash
>   outputColumnNames: _col0
>   Statistics: Num rows: 5775 Data size: 17248 Basic 
> stats: COMPLETE Column stats: COMPLETE
>   Reduce Output Operator
> key expressions: _col0 (type: int)
> sort order: +
> Map-reduce partition columns: _col0 (type: int)
> Statistics: Num rows: 5775 Data size: 17248 Basic 
> stats: COMPLETE Column stats: COMPLETE
> Execution mode: vectorized, llap
> LLAP IO: all inputs
> Map Vectorization:
> enabled: true
> enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true
> groupByVectorOutput: true
> inputFileFormats: 
> org.apache.hadoop.hive.

[jira] [Commented] (HIVE-15010) Make LockComponent aware if it's part of dynamic partition operation

2016-10-19 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589417#comment-15589417
 ] 

Wei Zheng commented on HIVE-15010:
--

+1

> Make LockComponent aware if it's part of dynamic partition operation
> 
>
> Key: HIVE-15010
> URL: https://issues.apache.org/jira/browse/HIVE-15010
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-15010.patch
>
>
> This is needed by HIVE-10924 but I want to make sure it's a separate patch to 
> make any back porting easier



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15011) fix issues with MoveTask.releaseLocks()

2016-10-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589429#comment-15589429
 ] 

Sergey Shelukhin commented on HIVE-15011:
-

Just a note - MoveTask is a misnomer; it does much more than just move files... 
so the long-term fix might not be getting rid of it :)

> fix issues with MoveTask.releaseLocks()
> ---
>
> Key: HIVE-15011
> URL: https://issues.apache.org/jira/browse/HIVE-15011
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning, Query Processor, Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> In Merge we can have multiple move tasks, so releasing the locks should be 
> done only from the "last one" - in practice they run concurrently.
> See if there is a quick fix for the short term;
> (slightly) longer term - get rid of MoveTask for ACID writes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-15017) Random job failures with MapReduce and Tez

2016-10-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589425#comment-15589425
 ] 

Sergey Shelukhin edited comment on HIVE-15017 at 10/19/16 6:09 PM:
---

The app logs appear to be for the AMs.
Can you download the full application logs for Tez or MR apps? (via yarn logs 
... command).
If they don't have anything for the problematic container (e.g. 
containerId=container_1475850791417_0105_01_02, 
nodeId=datanode06.bigdata.fr:60737), it might be possible to go to the node and 
try to find container log directory to see its output


was (Author: sershe):
The app logs appear to be for the AMs.
Can you download the full application logs for Tez or MR apps?
If they don't have anything for the problematic container (e.g. 
containerId=container_1475850791417_0105_01_02, 
nodeId=datanode06.bigdata.fr:60737), it might be possible to go to the node and 
try to find container log directory to see its output

> Random job failures with MapReduce and Tez
> --
>
> Key: HIVE-15017
> URL: https://issues.apache.org/jira/browse/HIVE-15017
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
> Environment: Hadoop 2.7.2, Hive 2.1.0
>Reporter: Alexandre Linte
>Priority: Critical
> Attachments: hive_cli_mr.txt, hive_cli_tez.txt, 
> nodemanager_logs_mr_job.txt, yarn_syslog_mr_job.txt, yarn_syslog_tez_job.txt
>
>
> Since Hive 2.1.0, we are facing a blocking issue on our cluster. All the jobs 
> are failing randomly on mapreduce and tez as well. 
> In both cases, we don't have any ERROR or WARN messages in the logs. You can 
> find attached:
> - hive cli output errors 
> - yarn logs for a tez and mapreduce job
> - nodemanager logs (mr only, we have the same logs with tez)
> Note: This issue doesn't exist with Pig jobs (mr + tez), Spark jobs (mr), so 
> this cannot be a Hadoop / Yarn issue.





[jira] [Commented] (HIVE-15017) Random job failures with MapReduce and Tez

2016-10-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589425#comment-15589425
 ] 

Sergey Shelukhin commented on HIVE-15017:
-

The app logs appear to be for the AMs.
Can you download the full application logs for Tez or MR apps?
If they don't have anything for the problematic container (e.g. 
containerId=container_1475850791417_0105_01_02, 
nodeId=datanode06.bigdata.fr:60737), it might be possible to go to the node and 
try to find the container log directory to see its output

> Random job failures with MapReduce and Tez
> --
>
> Key: HIVE-15017
> URL: https://issues.apache.org/jira/browse/HIVE-15017
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
> Environment: Hadoop 2.7.2, Hive 2.1.0
>Reporter: Alexandre Linte
>Priority: Critical
> Attachments: hive_cli_mr.txt, hive_cli_tez.txt, 
> nodemanager_logs_mr_job.txt, yarn_syslog_mr_job.txt, yarn_syslog_tez_job.txt
>
>
> Since Hive 2.1.0, we are facing a blocking issue on our cluster. All the jobs 
> are failing randomly on mapreduce and tez as well. 
> In both cases, we don't have any ERROR or WARN messages in the logs. You can 
> find attached:
> - hive cli output errors 
> - yarn logs for a tez and mapreduce job
> - nodemanager logs (mr only, we have the same logs with tez)
> Note: This issue doesn't exist with Pig jobs (mr + tez), Spark jobs (mr), so 
> this cannot be a Hadoop / Yarn issue.





[jira] [Commented] (HIVE-14391) TestAccumuloCliDriver is not executed during precommit tests

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589403#comment-15589403
 ] 

Hive QA commented on HIVE-14391:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834121/HIVE-14391.2.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10571 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_globallimit] 
(batchId=27)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[index_bitmap_auto_partitioned]
 (batchId=27)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1666/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1666/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1666/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834121 - PreCommit-HIVE-Build

> TestAccumuloCliDriver is not executed during precommit tests
> 
>
> Key: HIVE-14391
> URL: https://issues.apache.org/jira/browse/HIVE-14391
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Peter Vary
> Attachments: HIVE-14391.2.patch, HIVE-14391.patch
>
>
> according to for example this build result:
> https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/685/testReport/org.apache.hadoop.hive.cli/
> there is no 'TestAccumuloCliDriver' being run during precommit testing... but 
> I see no reason why or how it was excluded inside the project;
> my maven executes it when I start it with {{-Dtest=TestAccumuloCliDriver}} - 
> so I think the properties/profiles aren't preventing it.
> Maybe I'm missing something obvious ;)
> (note: my TestAccumuloCliDriver executions fail with errors.)





[jira] [Updated] (HIVE-14977) Flaky test: fix order_null.q and union_fast_stats.q

2016-10-19 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-14977:
-
  Resolution: Fixed
   Fix Version/s: 2.2.0
Target Version/s: 2.2.0
  Status: Resolved  (was: Patch Available)

Committed to master.

> Flaky test: fix order_null.q and union_fast_stats.q
> ---
>
> Key: HIVE-14977
> URL: https://issues.apache.org/jira/browse/HIVE-14977
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
> Fix For: 2.2.0
>
> Attachments: HIVE-14977.1.patch, HIVE-14977.2.patch, 
> HIVE-14977.3.patch
>
>
> {code}
> Running: diff -a 
> /home/hiveptest/104.155.171.195-hiveptest-0/apache-github-source-source/itests/qtest/target/qfile-results/clientpositive/order_null.q.out
>  
> /home/hiveptest/104.155.171.195-hiveptest-0/apache-github-source-source/ql/src/test/results/clientpositive/order_null.q.out
> 78a79
> > 2   B
> 81d81
> < 2   B
> 91a92
> > 2   B
> 94d94
> < 2   B
> 134a135
> > 2   B
> 137d137
> < 2   B
> 148a149
> > 2   B
> 151d151
> < 2   B
> {code}
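The diff above is the classic tie-ordering symptom: the row "2   B" ties with its neighbors on the ORDER BY key, so several output orders are all valid and the golden-file diff flips between runs. A minimal sketch (Python, illustrative only - not the Hive fix itself) of why a full tiebreaker makes the expected output deterministic:

```python
# Rows that tie on the sort key can legally appear in any relative order
# unless the sort key is a total order on the whole row.
rows = [(2, "B"), (2, "A"), (2, "C")]

# Sorting by the key alone leaves ties in arrival order, which depends on
# execution (task scheduling, file split order, etc.) in a real engine.
by_key_only = sorted(rows, key=lambda r: r[0])

# Adding the second column as a tiebreaker yields one canonical order.
deterministic = sorted(rows, key=lambda r: (r[0], r[1]))
```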





[jira] [Updated] (HIVE-14977) Flaky test: fix order_null.q and union_fast_stats.q

2016-10-19 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-14977:
-
Affects Version/s: 2.2.0

> Flaky test: fix order_null.q and union_fast_stats.q
> ---
>
> Key: HIVE-14977
> URL: https://issues.apache.org/jira/browse/HIVE-14977
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
> Fix For: 2.2.0
>
> Attachments: HIVE-14977.1.patch, HIVE-14977.2.patch, 
> HIVE-14977.3.patch
>
>
> {code}
> Running: diff -a 
> /home/hiveptest/104.155.171.195-hiveptest-0/apache-github-source-source/itests/qtest/target/qfile-results/clientpositive/order_null.q.out
>  
> /home/hiveptest/104.155.171.195-hiveptest-0/apache-github-source-source/ql/src/test/results/clientpositive/order_null.q.out
> 78a79
> > 2   B
> 81d81
> < 2   B
> 91a92
> > 2   B
> 94d94
> < 2   B
> 134a135
> > 2   B
> 137d137
> < 2   B
> 148a149
> > 2   B
> 151d151
> < 2   B
> {code}





[jira] [Updated] (HIVE-3827) LATERAL VIEW with UNION ALL produces incorrect results

2016-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-3827:
-
Labels: Correctness CorrectnessBug  (was: CorrectnessBug)

> LATERAL VIEW with UNION ALL produces incorrect results
> --
>
> Key: HIVE-3827
> URL: https://issues.apache.org/jira/browse/HIVE-3827
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.9.0, 1.2.1
> Environment: hive0.9.0 hadoop 0.20.205
>Reporter: cyril liao
>  Labels: Correctness, CorrectnessBug
>
> LATERAL VIEW loses data when working with union all.
> query NO.1:
> SELECT
> 1 as from_pid,
> 1 as to_pid,
> cid as from_path,
> (CASE WHEN pid=0 THEN cid ELSE pid END) as to_path,
> 0 as status
> FROM
> (SELECT union_map(c_map) AS c_map
> FROM
> (SELECT collect_map(id,parent_id)AS c_map
> FROM
> wl_channels
> GROUP BY id,parent_id
> )tmp
> )tmp2
> LATERAL VIEW recursion_concat(c_map) a AS cid, pid
> this query returns about 1 rows, and their status is 0.
> query NO.2:
> select
> a.from_pid as from_pid,
> a.to_pid as to_pid, 
> a.from_path as from_path,
> a.to_path as to_path,
> a.status as status
> from wl_dc_channels a
> where a.status <> 0
> this query returns about 100 rows, and their status is 1 or 2.
> query NO.3:
> select
> from_pid,
> to_pid,
> from_path,
> to_path,
> status
> from
> (
> SELECT
> 1 as from_pid,
> 1 as to_pid,
> cid as from_path,
> (CASE WHEN pid=0 THEN cid ELSE pid END) as to_path,
> 0 as status
> FROM
> (SELECT union_map(c_map) AS c_map
> FROM
> (SELECT collect_map(id,parent_id)AS c_map
> FROM
> wl_channels
> GROUP BY id,parent_id
> )tmp
> )tmp2
> LATERAL VIEW recursion_concat(c_map) a AS cid, pid
> union all
> select
> a.from_pid as from_pid,
> a.to_pid as to_pid, 
> a.from_path as from_path,
> a.to_path as to_path,
> a.status as status
> from wl_dc_channels a
> where a.status <> 0
> ) unin_tbl
> this query has the same result as query NO.2





[jira] [Updated] (HIVE-12412) Multi insert queries fail to run properly in hive 1.1.x or later.

2016-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-12412:
--
Labels: Correctness CorrectnessBug  (was: )

> Multi insert queries fail to run properly in hive 1.1.x or later.
> -
>
> Key: HIVE-12412
> URL: https://issues.apache.org/jira/browse/HIVE-12412
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.0
>Reporter: John P. Petrakis
>  Labels: Correctness, CorrectnessBug
>
> We use multi insert queries to take data in one table and manipulate it by 
> inserting it into a results table.  Queries are of this form:
> from (select * from data_table lateral view explode(data_table.f2) f2 as 
> explode_f2) as explode_data_table  
>insert overwrite table results_table partition (q_id='C.P1',rl='1') 
>select 
>array(cast(if(explode_data_table.f1 is null or 
> explode_data_table.f1='', 'UNKNOWN',explode_data_table.f1) as 
> String),cast(explode_f2.s1 as String)) as dimensions, 
>ARRAY(CAST(sum(explode_f2.d1) as Double)) as metrics, 
>null as rownm 
>where (explode_data_table.date_id between 20151016 and 20151016)
>group by 
>if(explode_data_table.f1 is null or explode_data_table.f1='', 
> 'UNKNOWN',explode_data_table.f1),
>explode_f2.s1 
>INSERT OVERWRITE TABLE results_table PARTITION (q_id='C.P2',rl='0') 
>SELECT ARRAY(CAST('Total' as String),CAST('Total' as String)) AS 
> dimensions, 
>ARRAY(CAST(sum(explode_f2.d1) as Double)) AS metrics, 
>null AS rownm 
>WHERE (explode_data_table.date_id BETWEEN 20151016 AND 20151016) 
>INSERT OVERWRITE TABLE results_table PARTITION (q_id='C.P5',rl='0') 
>SELECT 
>ARRAY(CAST('Total' as String)) AS dimensions, 
>ARRAY(CAST(sum(explode_f2.d1) as Double)) AS metrics, 
>null AS rownm 
>WHERE (explode_data_table.date_id BETWEEN 20151016 AND 20151016)
> This query is meant to total a given field of a struct that is potentially a 
> list of structs.  For our test data set, which consists of a single row, the 
> summation yields "Null", with messages in the hive log of this nature:
> Missing fields! Expected 2 fields but only got 1! Ignoring similar problems.
> or "Extra fields detected..."
> For significantly more data, this query will eventually cause a run time 
> error while processing a column (caused by array index out of bounds 
> exception in one of the lazy binary classes such as LazyBinaryString or 
> LazyBinaryStruct).
> Using the query above from the hive command line, the following data was used:
> (note there are tabs in the data below)
> string one	one:1.0:1.00:10.0,eon:1.0:1.00:100.0
> string two	two:2.0:2.00:20.0,otw:2.0:2.00:20.0,wott:2.0:2.00:20.0
> string thr	three:3.0:3.00:30.0
> string fou	four:4.0:4.00:40.0
> There are two fields, a string, (eg. 'string one') and a list of structs.  
> The following is used to create the table:
> create table if not exists t1 (
>  f1 string, 
>   f2 
> array>
>  )
>   partitioned by (clid string, date_id string) 
>   row format delimited fields 
>  terminated by '09' 
>  collection items terminated by ',' 
>  map keys terminated by ':'
>  lines terminated by '10' 
>  location '/user/hive/warehouse/t1';
> And the following is used to load the data:
> load data local inpath '/path/to/data/file/cplx_test.data2' OVERWRITE  into 
> table t1  partition(client_id='987654321',date_id='20151016');
> The resulting table should yield the following:
> ["string fou","four"]	[4.0]	null	C.P1	1
> ["string one","eon"]	[1.0]	null	C.P1	1
> ["string one","one"]	[1.0]	null	C.P1	1
> ["string thr","three"]	[3.0]	null	C.P1	1
> ["string two","otw"]	[2.0]	null	C.P1	1
> ["string two","two"]	[2.0]	null	C.P1	1
> ["string two","wott"]	[2.0]	null	C.P1	1
> ["Total","Total"]	[15.0]	null	C.P2	0
> ["Total"]	[15.0]	null	C.P5	0
> However what we get is:
> Hive Runtime Error while processing row 
> {"_col2":2.5306499719322744E-258,"_col3":""} (ultimately due to an array 
> index out of bounds exception)
> If we reduce the above data to a SINGLE row, then we don't get an exception 
> but the total fields come out as NULL.
> The ONLY way this query would work is 
> 1) if I added a group by (date_id) or even group by ('') as the last line in 
> the query... or removed the last where clause for the final insert.  (The

[jira] [Updated] (HIVE-14996) handle load for MM tables

2016-10-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14996:

Summary: handle load for MM tables  (was: handle load, import for MM tables)

> handle load for MM tables
> -
>
> Key: HIVE-14996
> URL: https://issues.apache.org/jira/browse/HIVE-14996
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14996.patch
>
>






[jira] [Commented] (HIVE-14996) handle load for MM tables

2016-10-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589374#comment-15589374
 ] 

Sergey Shelukhin commented on HIVE-14996:
-

Will handle import separately, there's some extra logic compared to load.

> handle load for MM tables
> -
>
> Key: HIVE-14996
> URL: https://issues.apache.org/jira/browse/HIVE-14996
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14996.patch
>
>






[jira] [Updated] (HIVE-14969) add test cases for ACID

2016-10-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14969:

Description: 
I think the following tests should be added
1) CTAS into transactional table must be transactional.
2) tablesample with buckets from ACID table - judging by HIVE-14967, selecting 
buckets with nested directories may have bugs on Tez
3) insert with union - same reason, if the test doesn't already exist it would 
be nice to see that bases and deltas are processed correctly given that union 
creates 2 directories for the results of the same insert
4) import - need to make sure that import is blocked for acid tables. There are 
many scenarios, from plain exporting an acid table and importing it, to more 
subtle ones like importing new partitions into an existing acid table (or new data 
into an existing empty acid table)


  was:
I think the following tests are added
1) CTAS into transactional table must be transactional.
2) tablesample with buckets from ACID table - judging by HIVE-14967, selecting 
buckets with nested directories may have bugs on Tez
3) insert with union - same reason, if the test doesn't already exist it would 
be nice to see that bases and deltas are processed correctly given that union 
creates 2 directories for the results of the same insert




> add test cases for ACID
> ---
>
> Key: HIVE-14969
> URL: https://issues.apache.org/jira/browse/HIVE-14969
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>
> I think the following tests should be added
> 1) CTAS into transactional table must be transactional.
> 2) tablesample with buckets from ACID table - judging by HIVE-14967, 
> selecting buckets with nested directories may have bugs on Tez
> 3) insert with union - same reason, if the test doesn't already exist it 
> would be nice to see that bases and deltas are processed correctly given that 
> union creates 2 directories for the results of the same insert
> 4) import - need to make sure that import is blocked for acid tables. There 
> are many scenarios, from plain exporting an acid table and importing it, to 
> more subtle like importing new partitions into an existing acid table (or new 
> data into an existing empty acid table)





[jira] [Commented] (HIVE-14979) Removing stale Zookeeper locks at HiveServer2 initialization

2016-10-19 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589366#comment-15589366
 ] 

Vaibhav Gumashta commented on HIVE-14979:
-

[~pvary] The locks on ZK (ZooKeeperHiveLockManager) are not stored as 
persistent ephemeral nodes (curator recipe). The curator recipe is used only in 
service discovery for HS2.

> Removing stale Zookeeper locks at HiveServer2 initialization
> 
>
> Key: HIVE-14979
> URL: https://issues.apache.org/jira/browse/HIVE-14979
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14979.3.patch, HIVE-14979.4.patch, HIVE-14979.patch
>
>
> HiveServer2 can use Zookeeper to store tokens that indicate that particular 
> tables are locked, by creating persistent Zookeeper objects. 
> A problem can occur when a HiveServer2 instance creates a lock on a table and 
> then crashes ("Out of Memory" for example), so the locks 
> are not released in Zookeeper. These locks will then remain until they are 
> manually cleared by an admin.
> There should be a way to remove stale locks at HiveServer2 initialization, 
> making the admin's life easier.





[jira] [Updated] (HIVE-15013) Config dir generated for tests should not be under the test tmp directory

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15013:
--
Attachment: HIVE-15013.02.patch

> Config dir generated for tests should not be under the test tmp directory
> -
>
> Key: HIVE-15013
> URL: https://issues.apache.org/jira/browse/HIVE-15013
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15013.01.patch, HIVE-15013.02.patch
>
>
> mvn is used to clean up tmp directories created for tests, and to set up the 
> config directory. The current structure is 
> target/tmp
> target/tmp/config
> All of this is set up when mvn test is executed.
> Tests generate data under tmp - warehouse, metastore, etc. Having the conf 
> dir there (generated by mvn) makes it complicated to add per-test cleanup - 
> since the entire tmp directory cannot be removed.
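A minimal sketch of the directory problem described above (all paths are illustrative, not the actual patch): once the config dir lives outside target/tmp, per-test cleanup can simply remove target/tmp.

```shell
# Simulate the proposed layout: mvn-generated config outside the tmp tree.
root=$(mktemp -d)
mkdir -p "$root/target/tmp/warehouse" "$root/target/testconf"
touch "$root/target/testconf/hive-site.xml"   # config no longer under tmp

# Per-test cleanup: safe to wipe the whole tmp tree (warehouse, metastore, ...)
rm -rf "$root/target/tmp"

# The generated config survives the cleanup.
test -f "$root/target/testconf/hive-site.xml" && echo "config survived cleanup"
```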





[jira] [Updated] (HIVE-15008) Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode

2016-10-19 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15008:
--
Attachment: HIVE-15008.04.patch

Updated. Also changes TestBeelineConnectionUsingHiveSite to not use fixed ports.

> Cleanup local workDir when MiniHS2 starts up in FS_ONLY mode
> 
>
> Key: HIVE-15008
> URL: https://issues.apache.org/jira/browse/HIVE-15008
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15008.01.patch, HIVE-15008.02.patch, 
> HIVE-15008.03.patch, HIVE-15008.04.patch
>
>






[jira] [Commented] (HIVE-14459) TestBeeLineDriver - migration and re-enable

2016-10-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589287#comment-15589287
 ] 

Hive QA commented on HIVE-14459:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834169/HIVE-14459.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 10572 tests 
executed
*Failed tests:*
{noformat}
TestBeelineWithHS2ConnectionFile - did not produce a TEST-*.xml file (likely 
timed out) (batchId=199)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key2]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_joins] 
(batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=208)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=208)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_globallimit] 
(batchId=27)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_bulk] 
(batchId=89)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=157)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=157)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1665/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1665/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1665/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834169 - PreCommit-HIVE-Build

> TestBeeLineDriver - migration and re-enable
> ---
>
> Key: HIVE-14459
> URL: https://issues.apache.org/jira/browse/HIVE-14459
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Peter Vary
> Attachments: HIVE-14459.2.patch, HIVE-14459.3.patch, 
> HIVE-14459.4.patch, HIVE-14459.patch
>
>
> This test has been left behind in HIVE-1 because it had some compile 
> issues.





[jira] [Commented] (HIVE-13423) Handle the overflow case for decimal datatype for sum()

2016-10-19 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589285#comment-15589285
 ] 

Xuefu Zhang commented on HIVE-13423:


+1

> Handle the overflow case for decimal datatype for sum()
> ---
>
> Key: HIVE-13423
> URL: https://issues.apache.org/jira/browse/HIVE-13423
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13423.1.patch
>
>
> When a column col1 is defined as decimal and the sum of the column overflows, 
> we try to increase the decimal precision by 10. But if it has already reached 38 
> (the max precision), the overflow can still happen. Right now, if such a case 
> happens, the following exception is thrown since hive is writing incorrect 
> data.
> Follow these steps to repro. 
> {noformat}
> CREATE TABLE DECIMAL_PRECISION(dec decimal(38,18));
> INSERT INTO DECIMAL_PRECISION VALUES(98765432109876543210.12345), 
> (98765432109876543210.12345);
> SELECT SUM(dec) FROM DECIMAL_PRECISION;
> {noformat}
> {noformat}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.readVInt(LazyBinaryUtils.java:314)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:219)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:142)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> {noformat}





[jira] [Commented] (HIVE-13423) Handle the overflow case for decimal datatype for sum()

2016-10-19 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589253#comment-15589253
 ] 

Chaoyu Tang commented on HIVE-13423:


Sound good to me as well.

> Handle the overflow case for decimal datatype for sum()
> ---
>
> Key: HIVE-13423
> URL: https://issues.apache.org/jira/browse/HIVE-13423
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13423.1.patch
>
>
> When a column col1 is defined as decimal and the sum of the column overflows, 
> we try to increase the decimal precision by 10. But if it has already reached 38 
> (the max precision), the overflow can still happen. Right now, if such a case 
> happens, the following exception is thrown since hive is writing incorrect 
> data.
> Follow these steps to repro. 
> {noformat}
> CREATE TABLE DECIMAL_PRECISION(dec decimal(38,18));
> INSERT INTO DECIMAL_PRECISION VALUES(98765432109876543210.12345), 
> (98765432109876543210.12345);
> SELECT SUM(dec) FROM DECIMAL_PRECISION;
> {noformat}
> {noformat}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.readVInt(LazyBinaryUtils.java:314)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:219)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:142)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> {noformat}





[jira] [Comment Edited] (HIVE-14971) Hive returns incorrect result when NOT used with <=> (null safe equals) operator

2016-10-19 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589255#comment-15589255
 ] 

Sahil Takiar edited comment on HIVE-14971 at 10/19/16 4:58 PM:
---

These queries don't work either:
* {{select not(2 <=> NULL);}}
* {{select not(NULL <=> NULL);}}

In Hive they both return {{NULL}}, in MySQL they return false and true, 
respectively.


was (Author: stakiar):
These queries doesn't work either:
* {{select not(2 <=> NULL);}}
* {{select not(NULL <=> NULL);}}

They both return {{NULL}}, in MySQL they return false and true, respectively.

> Hive returns incorrect result when NOT used with <=> (null safe equals) 
> operator
> 
>
> Key: HIVE-14971
> URL: https://issues.apache.org/jira/browse/HIVE-14971
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>
> Hive returns incorrect results when using a <=> inside a NOT operator:
> {code}
> create table table1 (int_col_1 int, int_col_2 int);
> insert into table1 values (1, 2), (3, 3), (null, 4), (5, null), (null, null);
> select * from table1;
> +---+---+
> | table1.int_col_1  | table1.int_col_2  |
> +---+---+
> | 1 | 2 |
> | 3 | 3 |
> | NULL  | 4 |
> | 5 | NULL  |
> | NULL  | NULL  |
> +---+---+
> {code}
> The following query returns incorrect results: {{select int_col_1 from table1 
> where not(int_col_1 <=> int_col_2)}} returns
> {code}
> ++
> | int_col_1  |
> ++
> | 1  |
> ++
> {code}
> Where it should return {{1, NULL, 5}}
> Here is another query that returns incorrect results: {{select *, 
> not(int_col_1 <=> int_col_2) from table1}}
> {code}
> +---+---++
> | table1.int_col_1  | table1.int_col_2  |  _c1   |
> +---+---++
> | 1 | 2 | true   |
> | 3 | 3 | false  |
> | NULL  | 4 | NULL   |
> | 5 | NULL  | NULL   |
> | NULL  | NULL  | NULL   |
> +---+---++
> {code}
> The column {{_c1}} should not be returning {{NULL}} for the last three rows. 
> This could be a bug in the NOT operator because the query {{select *, 
> int_col_1 <=> int_col_2 from table1}} returns:
> {code}
> +---+---++
> | table1.int_col_1  | table1.int_col_2  |  _c1   |
> +---+---++
> | 1 | 2 | false  |
> | 3 | 3 | true   |
> | NULL  | 4 | false  |
> | 5 | NULL  | false  |
> | NULL  | NULL  | true   |
> +---+---++
> {code}
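The expected semantics above can be sketched in a few lines (Python, illustrative only - not Hive's evaluator): null-safe equals never returns NULL, so applying three-valued NOT to it should never produce NULL either.

```python
def null_safe_eq(a, b):
    """SQL <=> : True when both are NULL or both are equal; never NULL."""
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a == b

def sql_not(x):
    """Three-valued NOT: NOT NULL is NULL, otherwise plain negation."""
    return None if x is None else (not x)

rows = [(1, 2), (3, 3), (None, 4), (5, None), (None, None)]
# NOT(col1 <=> col2) per row: no NULLs appear, matching the MySQL behavior,
# whereas the buggy plan in the report yields NULL for rows containing NULL.
results = [sql_not(null_safe_eq(a, b)) for a, b in rows]
```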





[jira] [Commented] (HIVE-14971) Hive returns incorrect result when NOT used with <=> (null safe equals) operator

2016-10-19 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589255#comment-15589255
 ] 

Sahil Takiar commented on HIVE-14971:
-

These queries don't work either:
* {{select not(2 <=> NULL);}}
* {{select not(NULL <=> NULL);}}

They both return {{NULL}}, in MySQL they return false and true, respectively.

> Hive returns incorrect result when NOT used with <=> (null safe equals) 
> operator
> 
>
> Key: HIVE-14971
> URL: https://issues.apache.org/jira/browse/HIVE-14971
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>
> Hive returns incorrect results when using a <=> inside a NOT operator:
> {code}
> create table table1 (int_col_1 int, int_col_2 int);
> insert into table1 values (1, 2), (3, 3), (null, 4), (5, null), (null, null);
> select * from table1;
> +---+---+
> | table1.int_col_1  | table1.int_col_2  |
> +---+---+
> | 1 | 2 |
> | 3 | 3 |
> | NULL  | 4 |
> | 5 | NULL  |
> | NULL  | NULL  |
> +---+---+
> {code}
> The following query returns incorrect results: {{select int_col_1 from table1 
> where not(int_col_1 <=> int_col_2)}} returns
> {code}
> ++
> | int_col_1  |
> ++
> | 1  |
> ++
> {code}
> Where it should return {{1, NULL, 5}}
> Here is another query that returns incorrect results: {{select *, 
> not(int_col_1 <=> int_col_2) from table1}}
> {code}
> +---+---++
> | table1.int_col_1  | table1.int_col_2  |  _c1   |
> +---+---++
> | 1 | 2 | true   |
> | 3 | 3 | false  |
> | NULL  | 4 | NULL   |
> | 5 | NULL  | NULL   |
> | NULL  | NULL  | NULL   |
> +---+---++
> {code}
> The column {{_c1}} should not be returning {{NULL}} for the last three rows. 
> This could be a bug in the NOT operator because the query {{select *, 
> int_col_1 <=> int_col_2 from table1}} returns:
> {code}
> +---+---++
> | table1.int_col_1  | table1.int_col_2  |  _c1   |
> +---+---++
> | 1 | 2 | false  |
> | 3 | 3 | true   |
> | NULL  | 4 | false  |
> | 5 | NULL  | false  |
> | NULL  | NULL  | true   |
> +---+---++
> {code}




