[jira] [Updated] (HIVE-12064) prevent transactional=false
[ https://issues.apache.org/jira/browse/HIVE-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-12064: - Attachment: HIVE-12064.3.patch patch 3 for test > prevent transactional=false > --- > > Key: HIVE-12064 > URL: https://issues.apache.org/jira/browse/HIVE-12064 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Critical > Attachments: HIVE-12064.2.patch, HIVE-12064.3.patch, HIVE-12064.patch > > > Currently the table property transactional=true must be set to make a table behave > in an ACID-compliant way. > This is misleading in that it seems like changing it to transactional=false > makes the table non-ACID, but the on-disk layout of an ACID table differs from that of > a plain table. So changing this property may cause wrong data to be returned. > We should prevent setting transactional=false. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
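The guard described above can be sketched as a simple property-transition check. This is an illustrative Python sketch, not Hive's actual Java implementation; the function name and error message are hypothetical.

```python
def validate_tblproperties_change(old_props, new_props):
    """Reject attempts to flip an ACID table back to transactional=false.

    Once a table has been written with transactional=true, its on-disk
    layout (delta/base directories) differs from a plain table's, so
    reverting the flag could silently return wrong data.
    """
    was_acid = old_props.get("transactional", "false").lower() == "true"
    wants_acid = new_props.get("transactional", "false").lower() == "true"
    if was_acid and not wants_acid:
        raise ValueError(
            "TBLPROPERTIES ('transactional'='false') is not allowed on a "
            "table that is already transactional")
    return new_props
```

The check only forbids the true-to-false transition; setting transactional=true on a plain table remains allowed.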
[jira] [Commented] (HIVE-12988) Improve dynamic partition loading IV
[ https://issues.apache.org/jira/browse/HIVE-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138437#comment-15138437 ] Hive QA commented on HIVE-12988: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786767/HIVE-12988.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6915/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6915/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6915/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]] + export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6915/source-prep.txt + [[ false == 
\t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! -d apache-github-source-source ]] + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 2a0dc3c..7cfeaef master -> origin/master + git reset --hard HEAD HEAD is now at 2a0dc3c HIVE-13025 : need a better error message for when one needs to run schematool (Sergey Shelukhin, reviewed by Prasanth Jayachandran, Sushanth Sowmyan, Alan Gates) + git clean -f -d Removing ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java.orig Removing ql/src/test/queries/clientnegative/updateBasicStats.q Removing ql/src/test/queries/clientpositive/updateBasicStats.q Removing ql/src/test/results/clientnegative/updateBasicStats.q.out Removing ql/src/test/results/clientpositive/updateBasicStats.q.out + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. + git reset --hard origin/master HEAD is now at 7cfeaef HIVE-12839: Upgrade Hive to Calcite 1.6 (Pengcheng Xiong, reviewed by Ashutosh Chauhan) + git merge --ff-only origin/master Already up-to-date. + git gc + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12786767 - PreCommit-HIVE-TRUNK-Build > Improve dynamic partition loading IV > > > Key: HIVE-12988 > URL: https://issues.apache.org/jira/browse/HIVE-12988 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Affects Versions: 1.2.0, 2.0.0 >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Attachments: HIVE-12988.2.patch, HIVE-12988.2.patch, HIVE-12988.patch > > > Parallelize copyFiles() -- This message was sent by Atlassian JIRA (v6.3.4#6332)
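The one-line description "Parallelize copyFiles()" refers to farming the per-file copies out to a worker pool instead of copying sequentially. A minimal sketch of that technique in Python (Hive's actual change is in Java, and the helper below is hypothetical):

```python
import shutil
from concurrent.futures import ThreadPoolExecutor

def copy_files_parallel(src_dst_pairs, max_workers=8):
    """Copy (src, dst) pairs concurrently instead of one at a time.

    Each copy is independent, so a thread pool lets slow filesystem
    round-trips overlap; any worker exception is re-raised when its
    future is resolved.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(shutil.copyfile, s, d) for s, d in src_dst_pairs]
        return [f.result() for f in futures]
```

For dynamic partition loads with many output files, the wall-clock win comes from overlapping I/O latency, not CPU parallelism.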
[jira] [Commented] (HIVE-12730) MetadataUpdater: provide a mechanism to edit the basic statistics of a table (or a partition)
[ https://issues.apache.org/jira/browse/HIVE-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138433#comment-15138433 ] Hive QA commented on HIVE-12730: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786763/HIVE-12730.06.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6914/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6914/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6914/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: java.io.IOException: Could not create /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6914/succeeded/TestAggregateStatsCache {noformat} This message is automatically generated. ATTACHMENT ID: 12786763 - PreCommit-HIVE-TRUNK-Build > MetadataUpdater: provide a mechanism to edit the basic statistics of a table > (or a partition) > - > > Key: HIVE-12730 > URL: https://issues.apache.org/jira/browse/HIVE-12730 > Project: Hive > Issue Type: New Feature >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-12730.01.patch, HIVE-12730.02.patch, > HIVE-12730.03.patch, HIVE-12730.04.patch, HIVE-12730.05.patch, > HIVE-12730.06.patch > > > We would like to provide a way for developers/users to modify the numRows and > dataSize for a table/partition. Right now, although they are part of the table > properties, they are set to -1 when the task does not come from a > statsTask. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
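A command along the following lines is the kind of interface this feature would expose. The exact syntax is an assumption for illustration and may differ from what the patch finally implements:

```sql
-- Hypothetical illustration of editing basic statistics directly;
-- the exact syntax in HIVE-12730 may differ.
ALTER TABLE web_logs PARTITION (ds='2016-02-08')
  UPDATE STATISTICS SET ('numRows'='120342', 'rawDataSize'='58992086');
```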
[jira] [Commented] (HIVE-11749) Deadlock of fetching InputFormat table when multiple root stage
[ https://issues.apache.org/jira/browse/HIVE-11749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138431#comment-15138431 ] Takanobu Asanuma commented on HIVE-11749: - [~lewuathe] Thank you for the contribution. Our HiveServer2 got deadlocked recently, and I think this bug is the cause. Hi [~sershe], [~gopalv], could you check this JIRA? It does not seem to be resolved in the latest master branch. > Deadlock of fetching InputFormat table when multiple root stage > --- > > Key: HIVE-11749 > URL: https://issues.apache.org/jira/browse/HIVE-11749 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.0 >Reporter: Ryu Kobayashi >Assignee: Kai Sasaki > Attachments: HIVE-11749.00.patch, HIVE-11749.stack-tarace.txt > > > The query sometimes (but not always) deadlocks when run. The environment is as follows: > * Hadoop 2.6.0 > * Hive 0.13 > * JDK 1.7.0_79 > The stack trace is attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
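The report does not describe the fix, but deadlocks between concurrently launched root stages typically come down to two threads acquiring the same pair of locks in opposite orders. A generic Python sketch of the standard remedy, imposing a single global acquisition order (purely illustrative, not Hive's actual code):

```python
import threading

def with_both_locks(lock_a, lock_b, action):
    """Acquire two locks in a FIXED global order to avoid deadlock.

    If each of two threads grabbed its 'own' lock first and then waited
    for the other's, neither could proceed. Sorting the locks by id()
    makes the acquisition order consistent across all threads.
    """
    first, second = sorted((lock_a, lock_b), key=id)
    with first:
        with second:
            return action()
```

Both threads below pass the locks in opposite orders, yet both complete because the helper normalizes the order before acquiring.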
[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6
[ https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138427#comment-15138427 ] Pengcheng Xiong commented on HIVE-12839: The failed tests are known ones. Committed to master. Thanks [~ashutoshc], [~jcamachorodriguez], [~julianhyde] for the comments. > Upgrade Hive to Calcite 1.6 > --- > > Key: HIVE-12839 > URL: https://issues.apache.org/jira/browse/HIVE-12839 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Fix For: 2.1.0 > > Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, > HIVE-12839.03.patch, HIVE-12839.04.patch, HIVE-12839.05.patch > > > CLEAR LIBRARY CACHE > Upgrade Hive to Calcite 1.6.0-incubating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12839) Upgrade Hive to Calcite 1.6
[ https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-12839: --- Fix Version/s: 2.1.0 > Upgrade Hive to Calcite 1.6 > --- > > Key: HIVE-12839 > URL: https://issues.apache.org/jira/browse/HIVE-12839 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Fix For: 2.1.0 > > Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, > HIVE-12839.03.patch, HIVE-12839.04.patch, HIVE-12839.05.patch > > > CLEAR LIBRARY CACHE > Upgrade Hive to Calcite 1.6.0-incubating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12839) Upgrade Hive to Calcite 1.6
[ https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-12839: --- Affects Version/s: (was: 1.2.1) > Upgrade Hive to Calcite 1.6 > --- > > Key: HIVE-12839 > URL: https://issues.apache.org/jira/browse/HIVE-12839 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Fix For: 2.1.0 > > Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, > HIVE-12839.03.patch, HIVE-12839.04.patch, HIVE-12839.05.patch > > > CLEAR LIBRARY CACHE > Upgrade Hive to Calcite 1.6.0-incubating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12839) Upgrade Hive to Calcite 1.6
[ https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-12839: --- Affects Version/s: 2.0.0 1.2.1 > Upgrade Hive to Calcite 1.6 > --- > > Key: HIVE-12839 > URL: https://issues.apache.org/jira/browse/HIVE-12839 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Fix For: 2.1.0 > > Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, > HIVE-12839.03.patch, HIVE-12839.04.patch, HIVE-12839.05.patch > > > CLEAR LIBRARY CACHE > Upgrade Hive to Calcite 1.6.0-incubating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6
[ https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138384#comment-15138384 ] Hive QA commented on HIVE-12839: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786760/HIVE-12839.05.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10054 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testRevokeTimedOutWorkers org.apache.hive.jdbc.TestSSL.testSSLVersion {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6913/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6913/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6913/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12786760 - PreCommit-HIVE-TRUNK-Build > Upgrade Hive to Calcite 1.6 > --- > > Key: HIVE-12839 > URL: https://issues.apache.org/jira/browse/HIVE-12839 > Project: Hive > Issue Type: Improvement >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, > HIVE-12839.03.patch, HIVE-12839.04.patch, HIVE-12839.05.patch > > > CLEAR LIBRARY CACHE > Upgrade Hive to Calcite 1.6.0-incubating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results
[ https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138325#comment-15138325 ] Takanobu Asanuma commented on HIVE-11527: - Thank you for the advice, Jing! It makes sense to me. IIUC, we should assume that the JDBC client doesn't always have the same configuration files as the cluster side. So we should create the final URI in HiveServer2, considering the cases Jing suggested, and return it to the JDBC client. > bypass HiveServer2 thrift interface for query results > - > > Key: HIVE-11527 > URL: https://issues.apache.org/jira/browse/HIVE-11527 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Sergey Shelukhin >Assignee: Takanobu Asanuma > Attachments: HIVE-11527.WIP.patch > > > Right now, HS2 reads query results and returns them to the caller via its > thrift API. > There should be an option for HS2 to return some pointer to results (an HDFS > link?) and for the user to read the results directly off HDFS inside the > cluster, or via something like WebHDFS outside the cluster > Review board link: https://reviews.apache.org/r/40867 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
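The server-side URI construction discussed above can be sketched as follows: HiveServer2 qualifies a bare result path against its own fs.defaultFS, so the client never needs the cluster's configuration files. Python sketch with a hypothetical function name and example namenode address:

```python
from urllib.parse import urlparse, urlunparse

def qualify_result_uri(result_path, default_fs="hdfs://nn.example.com:8020"):
    """Turn a bare result path into a fully qualified URI on the server.

    The JDBC client may not share the cluster's configuration, so the
    server (not the client) fills in the scheme and authority from its
    own fs.defaultFS before handing the location back.
    """
    parsed = urlparse(result_path)
    if parsed.scheme:  # already fully qualified, e.g. hdfs:// or webhdfs://
        return result_path
    fs = urlparse(default_fs)
    return urlunparse((fs.scheme, fs.netloc, result_path, "", "", ""))
```

A path that is already fully qualified (a case Jing's comment would cover) is passed through unchanged.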
[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results
[ https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138323#comment-15138323 ] Takanobu Asanuma commented on HIVE-11527: - Sergey, I see. I will reconsider handling multiple files. > bypass HiveServer2 thrift interface for query results > - > > Key: HIVE-11527 > URL: https://issues.apache.org/jira/browse/HIVE-11527 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Sergey Shelukhin >Assignee: Takanobu Asanuma > Attachments: HIVE-11527.WIP.patch > > > Right now, HS2 reads query results and returns them to the caller via its > thrift API. > There should be an option for HS2 to return some pointer to results (an HDFS > link?) and for the user to read the results directly off HDFS inside the > cluster, or via something like WebHDFS outside the cluster > Review board link: https://reviews.apache.org/r/40867 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10187) Avro backed tables don't handle cyclical or recursive records
[ https://issues.apache.org/jira/browse/HIVE-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Wagner updated HIVE-10187: --- Attachment: HIVE-10187.5.patch > Avro backed tables don't handle cyclical or recursive records > - > > Key: HIVE-10187 > URL: https://issues.apache.org/jira/browse/HIVE-10187 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers >Affects Versions: 1.2.0 >Reporter: Mark Wagner >Assignee: Mark Wagner > Attachments: HIVE-10187.1.patch, HIVE-10187.2.patch, > HIVE-10187.3.patch, HIVE-10187.4.patch, HIVE-10187.5.patch, > HIVE-10187.demo.patch > > > [HIVE-7653] changed the Avro SerDe to make it generate TypeInfos even for > recursive/cyclical schemas. However, any attempt to serialize data which > exploits that ability results in silently dropped fields. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10187) Avro backed tables don't handle cyclical or recursive records
[ https://issues.apache.org/jira/browse/HIVE-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138322#comment-15138322 ] Mark Wagner commented on HIVE-10187: New patch with Anthony's comments addressed. > Avro backed tables don't handle cyclical or recursive records > - > > Key: HIVE-10187 > URL: https://issues.apache.org/jira/browse/HIVE-10187 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers >Affects Versions: 1.2.0 >Reporter: Mark Wagner >Assignee: Mark Wagner > Attachments: HIVE-10187.1.patch, HIVE-10187.2.patch, > HIVE-10187.3.patch, HIVE-10187.4.patch, HIVE-10187.5.patch, > HIVE-10187.demo.patch > > > [HIVE-7653] changed the Avro SerDe to make it generate TypeInfos even for > recursive/cyclical schemas. However, any attempt to serialize data which > exploits that ability results in silently dropped fields. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
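For readers unfamiliar with the recursive schemas in question, an Avro record may reference its own name in a field type. An illustrative schema of that shape (not taken from the patch) is:

```json
{
  "type": "record",
  "name": "LinkedListNode",
  "fields": [
    {"name": "value", "type": "int"},
    {"name": "next", "type": ["null", "LinkedListNode"], "default": null}
  ]
}
```

HIVE-7653 let the SerDe produce TypeInfos for such schemas; this issue is that serializing data which actually uses the recursion silently drops fields.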
[jira] [Commented] (HIVE-13020) Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138309#comment-15138309 ] Hive QA commented on HIVE-13020: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786758/HIVE-13020.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10054 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestSSL.testSSLVersion {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6912/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6912/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6912/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12786758 - PreCommit-HIVE-TRUNK-Build > Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK > -- > > Key: HIVE-13020 > URL: https://issues.apache.org/jira/browse/HIVE-13020 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Metastore, Shims >Affects Versions: 1.2.0, 1.3.0, 1.2.1 > Environment: Linux X86_64 and IBM JDK 8 >Reporter: Greg Senia >Assignee: Greg Senia > Labels: hdp, ibm, ibm-jdk > Attachments: HIVE-13020.patch, hivemetastore_afterpatch.txt, > hivemetastore_beforepatch.txt, hiveserver2_afterpatch.txt, > hiveserver2_beforepatch.txt > > > The HiveServer2 and Hive Metastore ZooKeeper components are hardcoded to > support only the Oracle/OpenJDK. While testing Hadoop running on > the IBM JDK I discovered this issue and have since drawn up the attached > patch. It appears to resolve the issue in a manner similar to how Hadoop > core handles the IBM JDK. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
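The Hadoop-style vendor handling mentioned above amounts to branching on the JVM vendor string rather than assuming the Oracle/OpenJDK class names. A Python sketch of the idea (the login-module class names are the well-known Oracle and IBM ones; the function name is hypothetical):

```python
def kerberos_login_module(java_vendor):
    """Pick the JAAS Kerberos login module by JVM vendor.

    Hardcoding the com.sun class breaks on the IBM JDK, which ships its
    security providers under com.ibm instead.
    """
    if "IBM" in java_vendor:
        return "com.ibm.security.auth.module.Krb5LoginModule"
    return "com.sun.security.auth.module.Krb5LoginModule"
```

In Java this would branch on the `java.vendor` system property, which is what Hadoop core's shims do.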
[jira] [Updated] (HIVE-13027) Async loggers for LLAP
[ https://issues.apache.org/jira/browse/HIVE-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13027: - Attachment: HIVE-13027.1.patch > Async loggers for LLAP > -- > > Key: HIVE-13027 > URL: https://issues.apache.org/jira/browse/HIVE-13027 > Project: Hive > Issue Type: Improvement > Components: Logging >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13027.1.patch > > > Log4j 2's async logger claims 6-68 times better performance than the > synchronous logger. https://logging.apache.org/log4j/2.x/manual/async.html > We should use that for LLAP. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
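The performance win of async logging comes from moving the slow I/O off the caller's thread: the caller only enqueues the message, and a background thread drains the queue. A minimal Python sketch of the pattern (Log4j 2's real implementation uses the LMAX Disruptor, not a plain queue):

```python
import queue
import threading

class AsyncLogger:
    """Minimal sketch of the async-logger idea.

    log() only enqueues (cheap); a daemon thread drains the queue and
    performs the slow sink I/O, keeping logging off the critical path.
    """
    def __init__(self, sink):
        self._q = queue.Queue()
        self._sink = sink
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, msg):
        self._q.put(msg)

    def close(self):
        self._q.put(None)  # sentinel tells the worker to stop
        self._worker.join()

    def _drain(self):
        while True:
            msg = self._q.get()
            if msg is None:
                break
            self._sink(msg)
```

The FIFO queue preserves message order; the trade-off is that messages still in the queue can be lost on a crash, which is why async loggers need a clean shutdown (the close() here).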
[jira] [Commented] (HIVE-12999) Tez: Vertex creation reduce NN IPCs
[ https://issues.apache.org/jira/browse/HIVE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138245#comment-15138245 ] Gopal V commented on HIVE-12999: Committed to master and branch-1, thanks [~sershe]. > Tez: Vertex creation reduce NN IPCs > --- > > Key: HIVE-12999 > URL: https://issues.apache.org/jira/browse/HIVE-12999 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 1.2.0, 1.3.0, 2.0.0, 2.1.0 >Reporter: Gopal V >Assignee: Gopal V > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-12999.1.patch > > > Tez vertex building has a decidedly slow path in the code, which is not > related to the DAG plan at all. > The total number of RPC calls is not related to the total number of > operators, due to a bug in the DagUtils inner loops. > {code} > at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877) > at > org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3207) > at > org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3170) > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:548) > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:1151) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.build(TezTask.java:388) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:175) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
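The stack above shows createTmpDirs issuing a NameNode mkdirs RPC from an inner loop, so the RPC count tracks operators rather than distinct directories. A fix along these lines deduplicates the target paths before issuing any RPCs; illustrative Python with hypothetical names:

```python
def create_tmp_dirs(op_tmp_paths, mkdirs_rpc):
    """Issue one mkdirs call per *unique* path rather than per operator.

    `mkdirs_rpc` stands in for the NameNode round-trip. Deduplicating
    first means the RPC count is bounded by distinct directories, not
    by the number of operators in the plan.
    """
    created = []
    for path in sorted(set(op_tmp_paths)):
        mkdirs_rpc(path)
        created.append(path)
    return created
```

With many operators sharing a handful of tmp directories, this collapses the per-vertex RPC storm to a few calls.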
[jira] [Commented] (HIVE-13015) Update SLF4j version to 1.7.10
[ https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138243#comment-15138243 ] Hive QA commented on HIVE-13015: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786745/HIVE-13015.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10054 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestSSL.testSSLVersion {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6911/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6911/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6911/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12786745 - PreCommit-HIVE-TRUNK-Build > Update SLF4j version to 1.7.10 > -- > > Key: HIVE-13015 > URL: https://issues.apache.org/jira/browse/HIVE-13015 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13015.1.patch > > > In some of the recent test runs, we are seeing multiple bindings for SLF4j > that cause issues with the Log4j 2 logger. 
> {code} > SLF4J: Found binding in > [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > {code} > We have added explicit exclusions for slf4j-log4j12 but some library is > pulling it transitively and it's getting packaged with the Hive libs. Also, Hive > currently uses slf4j version 1.7.5. We should add dependency convergence > for slf4j and also remove packaging of slf4j-log4j12.*.jar. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
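The dependency-convergence change described above would look something like the POM fragment below. This is an illustrative sketch only; the actual offending transitive dependency has to be identified (e.g. with `mvn dependency:tree`) and given an explicit exclusion:

```xml
<!-- Illustrative POM fragment for pinning the slf4j version; the real
     change must also exclude slf4j-log4j12 from whichever dependency
     pulls it in transitively. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.7.10</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```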
[jira] [Commented] (HIVE-11589) Invalid value such as '-1' should be checked for 'hive.txn.timeout'.
[ https://issues.apache.org/jira/browse/HIVE-11589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138241#comment-15138241 ] David McWhorter commented on HIVE-11589: Hive 0.14.0 > Invalid value such as '-1' should be checked for 'hive.txn.timeout'. > > > Key: HIVE-11589 > URL: https://issues.apache.org/jira/browse/HIVE-11589 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.2.1 >Reporter: Takahiko Saito >Priority: Minor > > When a user accidentally sets an invalid value such as '-1' for > 'hive.txn.timeout', the query simply fails, throwing 'NoSuchLockException' > {noformat} > 2015-08-16 23:25:43,149 ERROR [HiveServer2-Background-Pool: Thread-206]: > metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(159)) - > NoSuchLockException(message:No such lock: 40) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.heartbeatLock(TxnHandler.java:1710) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.unlock(TxnHandler.java:501) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.unlock(HiveMetaStore.java:5571) > at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy7.unlock(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.unlock(HiveMetaStoreClient.java:1876) > at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) > at com.sun.proxy.$Proxy8.unlock(Unknown Source) > at > 
org.apache.hadoop.hive.ql.lockmgr.DbLockManager.unlock(DbLockManager.java:134) > at > org.apache.hadoop.hive.ql.lockmgr.DbLockManager.releaseLocks(DbLockManager.java:153) > at > org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:1038) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1208) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {noformat} > The better way to handle such an invalid value is to check the value > beforehand instead of throwing NoSuchLockException. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
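The up-front validation the description asks for is straightforward: reject a non-positive timeout at configuration time instead of letting the lock expire and surface later as NoSuchLockException. An illustrative Python sketch (hypothetical function name; Hive's check would live in its Java config handling):

```python
def validate_txn_timeout(seconds):
    """Fail fast on an invalid hive.txn.timeout.

    A non-positive timeout makes every lock immediately expirable, which
    later manifests as a confusing NoSuchLockException at unlock time.
    """
    if seconds <= 0:
        raise ValueError(
            "hive.txn.timeout must be a positive number of seconds, got %r"
            % seconds)
    return seconds
```

Failing at set time gives the user an actionable message pointing at the misconfigured property rather than a metastore stack trace.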
[jira] [Updated] (HIVE-11866) Add framework to enable testing using LDAPServer using LDAP protocol
[ https://issues.apache.org/jira/browse/HIVE-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-11866: - Attachment: HIVE-11866.4.patch Incorporating feedback from reviewboard. > Add framework to enable testing using LDAPServer using LDAP protocol > > > Key: HIVE-11866 > URL: https://issues.apache.org/jira/browse/HIVE-11866 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 1.3.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Attachments: HIVE-11866.2.patch, HIVE-11866.3.patch, > HIVE-11866.4.patch, HIVE-11866.patch > > > Currently there is no unit test coverage for HS2's LDAP Atn provider using a > LDAP Server on the backend. This prevents testing of the LDAPAtnProvider with > some realistic usecases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11589) Invalid value such as '-1' should be checked for 'hive.txn.timeout'.
[ https://issues.apache.org/jira/browse/HIVE-11589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138240#comment-15138240 ] David McWhorter commented on HIVE-11589: We are facing the exact error described by the stack trace in the description, but hive.txn.timeout is set to its default value of 300. We are using a JDBC client. Could this happen if a query takes longer than 300s to complete? > Invalid value such as '-1' should be checked for 'hive.txn.timeout'. > > > Key: HIVE-11589 > URL: https://issues.apache.org/jira/browse/HIVE-11589 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.2.1 >Reporter: Takahiko Saito >Priority: Minor > > When an user accidentally set an invalid value such as '-1' for > 'hive.txn.timeout', the query simply fails throwing 'NoSuchLockException' > {noformat} > 2015-08-16 23:25:43,149 ERROR [HiveServer2-Background-Pool: Thread-206]: > metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(159)) - > NoSuchLockException(message:No such lock: 40) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.heartbeatLock(TxnHandler.java:1710) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.unlock(TxnHandler.java:501) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.unlock(HiveMetaStore.java:5571) > at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy7.unlock(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.unlock(HiveMetaStoreClient.java:1876) > at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at 
java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) > at com.sun.proxy.$Proxy8.unlock(Unknown Source) > at > org.apache.hadoop.hive.ql.lockmgr.DbLockManager.unlock(DbLockManager.java:134) > at > org.apache.hadoop.hive.ql.lockmgr.DbLockManager.releaseLocks(DbLockManager.java:153) > at > org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:1038) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1208) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {noformat} > The better way to handle such an invalid value is to check the value > beforehand instead of throwing NoSuchLockException. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12790) Metastore connection leaks in HiveServer2
[ https://issues.apache.org/jira/browse/HIVE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-12790: Fix Version/s: 2.0.0 > Metastore connection leaks in HiveServer2 > - > > Key: HIVE-12790 > URL: https://issues.apache.org/jira/browse/HIVE-12790 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 1.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Fix For: 1.3.0, 2.0.0, 2.1.0 > > Attachments: HIVE-12790.2.patch, HIVE-12790.3.patch, > HIVE-12790.patch, snippedLog.txt > > > HiveServer2 keeps opening new connections to HMS each time it launches a > task. These connections do not appear to be closed when the task completes > thus causing a HMS connection leak. "lsof" for the HS2 process shows > connections to port 9083. > {code} > 2015-12-03 04:20:56,352 INFO [HiveServer2-Background-Pool: Thread-424756()]: > ql.Driver (SessionState.java:printInfo(558)) - Launching Job 11 out of 41 > 2015-12-03 04:20:56,354 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with > URI thrift://:9083 > 2015-12-03 04:20:56,360 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, > current connections: 14824 > 2015-12-03 04:20:56,360 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(400)) - Connected to metastore. 
> > 2015-12-03 04:21:06,355 INFO [HiveServer2-Background-Pool: Thread-424756()]: > ql.Driver (SessionState.java:printInfo(558)) - Launching Job 12 out of 41 > 2015-12-03 04:21:06,357 INFO [Thread-405756()]: hive.metastore > (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with > URI thrift://:9083 > 2015-12-03 04:21:06,362 INFO [Thread-405756()]: hive.metastore > (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, > current connections: 14825 > 2015-12-03 04:21:06,362 INFO [Thread-405756()]: hive.metastore > (HiveMetaStoreClient.java:open(400)) - Connected to metastore. > ... > 2015-12-03 04:21:08,357 INFO [HiveServer2-Background-Pool: Thread-424756()]: > ql.Driver (SessionState.java:printInfo(558)) - Launching Job 13 out of 41 > 2015-12-03 04:21:08,360 INFO [Thread-405782()]: hive.metastore > (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with > URI thrift://:9083 > 2015-12-03 04:21:08,364 INFO [Thread-405782()]: hive.metastore > (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, > current connections: 14826 > 2015-12-03 04:21:08,365 INFO [Thread-405782()]: hive.metastore > (HiveMetaStoreClient.java:open(400)) - Connected to metastore. > ... > {code} > The TaskRunner thread starts a new SessionState each time, which creates a > new connection to the HMS (via Hive.get(conf).getMSC()) that is never closed. > Even SessionState.close(), currently not being called by the TaskRunner > thread, does not close this connection. > Attaching a anonymized log snippet where the number of HMS connections > reaches north of 25000+ connections. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
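The leak pattern in the log — each task opening a metastore client that is never closed — and the shape of the fix can be illustrated with a toy AutoCloseable client; `ToyMetaStoreClient` is purely illustrative and not Hive's actual `HiveMetaStoreClient` API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the leak: each "task" opens a connection; the leaky variant
// abandons it (connection count climbs per task, as in the log), while the
// fixed variant closes it when the task completes.
public class LeakDemo {
    static final AtomicInteger OPEN = new AtomicInteger();

    static class ToyMetaStoreClient implements AutoCloseable {
        ToyMetaStoreClient() { OPEN.incrementAndGet(); }   // "Opened a connection to metastore"
        @Override public void close() { OPEN.decrementAndGet(); }
    }

    // Leaky variant: mirrors a TaskRunner opening a client and never closing it.
    static void runTaskLeaky() { new ToyMetaStoreClient(); }

    // Fixed variant: connection released when the task finishes.
    static void runTaskFixed() {
        try (ToyMetaStoreClient c = new ToyMetaStoreClient()) {
            // ... launch job ...
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 41; i++) runTaskLeaky();
        System.out.println("leaky open count: " + OPEN.get());  // grows with every task
        OPEN.set(0);
        for (int i = 0; i < 41; i++) runTaskFixed();
        System.out.println("fixed open count: " + OPEN.get());  // stays at 0
    }
}
```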
[jira] [Updated] (HIVE-13020) Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13020: Fix Version/s: (was: 2.1.0) (was: 1.2.2) (was: 1.3.0) > Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK > -- > > Key: HIVE-13020 > URL: https://issues.apache.org/jira/browse/HIVE-13020 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Metastore, Shims >Affects Versions: 1.2.0, 1.3.0, 1.2.1 > Environment: Linux X86_64 and IBM JDK 8 >Reporter: Greg Senia >Assignee: Greg Senia > Labels: hdp, ibm, ibm-jdk > Attachments: HIVE-13020.patch, hivemetastore_afterpatch.txt, > hivemetastore_beforepatch.txt, hiveserver2_afterpatch.txt, > hiveserver2_beforepatch.txt > > > HiveServer2 and Hive Metastore Zookeeper component is hardcoded to only > support the Oracle/Open JDK. I was performing testing of Hadoop running on > the IBM JDK and discovered this issue and have since drawn up the attached > patch. This looks to resolve the issue in a similar manner as how the Hadoop > core folks handle the IBM JDK. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
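The "similar manner as how the Hadoop core folks handle the IBM JDK" referred to above boils down to selecting the JAAS login-module class from the `java.vendor` system property instead of hardcoding the `com.sun` class. A hedged sketch of that idea (the exact detection logic in the patch may differ; the two Krb5LoginModule class names are the conventional Oracle/OpenJDK and IBM JDK ones):

```java
// Sketch of vendor-aware login-module selection: pick the IBM or
// Oracle/OpenJDK Kerberos login module based on java.vendor rather than
// assuming the com.sun class exists on every JDK.
public class JdkVendorShim {
    static boolean isIbmJdk(String vendor) {
        return vendor != null && vendor.contains("IBM");
    }

    static String kerberosLoginModule(String vendor) {
        return isIbmJdk(vendor)
            ? "com.ibm.security.auth.module.Krb5LoginModule"
            : "com.sun.security.auth.module.Krb5LoginModule";
    }

    public static void main(String[] args) {
        // On the running JVM, resolve the module from the actual vendor string.
        System.out.println(kerberosLoginModule(System.getProperty("java.vendor")));
    }
}
```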
[jira] [Updated] (HIVE-13020) Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13020: Fix Version/s: (was: 2.0.0) > Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK > -- > > Key: HIVE-13020 > URL: https://issues.apache.org/jira/browse/HIVE-13020 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Metastore, Shims >Affects Versions: 1.2.0, 1.3.0, 1.2.1 > Environment: Linux X86_64 and IBM JDK 8 >Reporter: Greg Senia >Assignee: Greg Senia > Labels: hdp, ibm, ibm-jdk > Attachments: HIVE-13020.patch, hivemetastore_afterpatch.txt, > hivemetastore_beforepatch.txt, hiveserver2_afterpatch.txt, > hiveserver2_beforepatch.txt > > > HiveServer2 and Hive Metastore Zookeeper component is hardcoded to only > support the Oracle/Open JDK. I was performing testing of Hadoop running on > the IBM JDK and discovered this issue and have since drawn up the attached > patch. This looks to resolve the issue in a similar manner as how the Hadoop > core folks handle the IBM JDK. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.
[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138181#comment-15138181 ] Sushanth Sowmyan commented on HIVE-11470: - Backported to branch-1.2 as well. > NPE in DynamicPartFileRecordWriterContainer on null part-keys. > -- > > Key: HIVE-11470 > URL: https://issues.apache.org/jira/browse/HIVE-11470 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 1.2.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Fix For: 2.0.0, 1.2.2, 2.1.0 > > Attachments: HIVE-11470.1.patch, HIVE-11470.2.patch > > > When partitioning data using {{HCatStorer}}, one sees the following NPE, if > the dyn-part-key is of null-value: > {noformat} > 2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.IOException: java.lang.NullPointerException > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256) > at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) > at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694) > at 
org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: java.lang.NullPointerException > at > org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54) > at > org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309) > at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98) > at > org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558) > at > org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89) > at > org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471) > ... 11 more > {noformat} > The reason is that the {{DynamicPartitionFileRecordWriterContainer}} makes an > unfortunate assumption when fetching a local file-writer instance: > {code:title=DynamicPartitionFileRecordWriterContainer.java} > @Override > protected LocalFileWriter getLocalFileWriter(HCatRecord value) > throws IOException, HCatException { > > OutputJobInfo localJobInfo = null; > // Calculate which writer to use from the remaining values - this needs to > // be done before we delete cols. 
> List<String> dynamicPartValues = new ArrayList<String>(); > for (Integer colToAppend : dynamicPartCols) { > dynamicPartValues.add(value.get(colToAppend).toString()); // <-- YIKES! > } > ... > } > {code} > Must check for null, and substitute with > {{"\_\_HIVE_DEFAULT_PARTITION\_\_"}}, or equivalent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
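The null-substitution the ticket calls for replaces the bare `toString()` with a fallback to the default partition name. A standalone sketch of that fix, with the surrounding container class elided and the record modeled as a plain `List<Object>` for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the proposed fix: substitute __HIVE_DEFAULT_PARTITION__ for null
// dynamic-partition values instead of NPE-ing on value.get(col).toString().
public class DynPartFix {
    static final String DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__";

    static List<String> dynamicPartValues(List<Object> record, List<Integer> dynamicPartCols) {
        List<String> vals = new ArrayList<>();
        for (Integer colToAppend : dynamicPartCols) {
            Object v = record.get(colToAppend);
            vals.add(v == null ? DEFAULT_PARTITION : v.toString());  // null-safe
        }
        return vals;
    }

    public static void main(String[] args) {
        List<Object> record = Arrays.asList("a", null, 3);
        // Column 1 is null, column 2 is 3:
        System.out.println(dynamicPartValues(record, Arrays.asList(1, 2)));
        // -> [__HIVE_DEFAULT_PARTITION__, 3]
    }
}
```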
[jira] [Updated] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.
[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushanth Sowmyan updated HIVE-11470: Fix Version/s: 1.2.2 > NPE in DynamicPartFileRecordWriterContainer on null part-keys. > -- > > Key: HIVE-11470 > URL: https://issues.apache.org/jira/browse/HIVE-11470 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 1.2.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Fix For: 2.0.0, 1.2.2, 2.1.0 > > Attachments: HIVE-11470.1.patch, HIVE-11470.2.patch > > > When partitioning data using {{HCatStorer}}, one sees the following NPE, if > the dyn-part-key is of null-value: > {noformat} > 2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.IOException: java.lang.NullPointerException > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256) > at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) > at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: java.lang.NullPointerException > at > 
org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54) > at > org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309) > at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98) > at > org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558) > at > org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89) > at > org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471) > ... 11 more > {noformat} > The reason is that the {{DynamicPartitionFileRecordWriterContainer}} makes an > unfortunate assumption when fetching a local file-writer instance: > {code:title=DynamicPartitionFileRecordWriterContainer.java} > @Override > protected LocalFileWriter getLocalFileWriter(HCatRecord value) > throws IOException, HCatException { > > OutputJobInfo localJobInfo = null; > // Calculate which writer to use from the remaining values - this needs to > // be done before we delete cols. > List dynamicPartValues = new ArrayList(); > for (Integer colToAppend : dynamicPartCols) { > dynamicPartValues.add(value.get(colToAppend).toString()); // <-- YIKES! > } > ... 
> } > {code} > Must check for null, and substitute with > {{"\_\_HIVE_DEFAULT_PARTITION\_\_"}}, or equivalent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.
[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushanth Sowmyan updated HIVE-11470: Fix Version/s: 2.0.0 > NPE in DynamicPartFileRecordWriterContainer on null part-keys. > -- > > Key: HIVE-11470 > URL: https://issues.apache.org/jira/browse/HIVE-11470 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 1.2.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Fix For: 2.0.0, 2.1.0 > > Attachments: HIVE-11470.1.patch, HIVE-11470.2.patch > > > When partitioning data using {{HCatStorer}}, one sees the following NPE, if > the dyn-part-key is of null-value: > {noformat} > 2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.IOException: java.lang.NullPointerException > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256) > at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) > at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: java.lang.NullPointerException > at > 
org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54) > at > org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309) > at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98) > at > org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558) > at > org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89) > at > org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471) > ... 11 more > {noformat} > The reason is that the {{DynamicPartitionFileRecordWriterContainer}} makes an > unfortunate assumption when fetching a local file-writer instance: > {code:title=DynamicPartitionFileRecordWriterContainer.java} > @Override > protected LocalFileWriter getLocalFileWriter(HCatRecord value) > throws IOException, HCatException { > > OutputJobInfo localJobInfo = null; > // Calculate which writer to use from the remaining values - this needs to > // be done before we delete cols. > List dynamicPartValues = new ArrayList(); > for (Integer colToAppend : dynamicPartCols) { > dynamicPartValues.add(value.get(colToAppend).toString()); // <-- YIKES! > } > ... 
> } > {code} > Must check for null, and substitute with > {{"\_\_HIVE_DEFAULT_PARTITION\_\_"}}, or equivalent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.
[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138161#comment-15138161 ] Sushanth Sowmyan commented on HIVE-11470: - Pushed to branch-2.0 as well. Thanks! > NPE in DynamicPartFileRecordWriterContainer on null part-keys. > -- > > Key: HIVE-11470 > URL: https://issues.apache.org/jira/browse/HIVE-11470 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 1.2.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Fix For: 2.0.0, 2.1.0 > > Attachments: HIVE-11470.1.patch, HIVE-11470.2.patch > > > When partitioning data using {{HCatStorer}}, one sees the following NPE, if > the dyn-part-key is of null-value: > {noformat} > 2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.IOException: java.lang.NullPointerException > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256) > at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) > at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694) > at 
org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: java.lang.NullPointerException > at > org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54) > at > org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309) > at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98) > at > org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558) > at > org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89) > at > org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471) > ... 11 more > {noformat} > The reason is that the {{DynamicPartitionFileRecordWriterContainer}} makes an > unfortunate assumption when fetching a local file-writer instance: > {code:title=DynamicPartitionFileRecordWriterContainer.java} > @Override > protected LocalFileWriter getLocalFileWriter(HCatRecord value) > throws IOException, HCatException { > > OutputJobInfo localJobInfo = null; > // Calculate which writer to use from the remaining values - this needs to > // be done before we delete cols. 
> List<String> dynamicPartValues = new ArrayList<String>(); > for (Integer colToAppend : dynamicPartCols) { > dynamicPartValues.add(value.get(colToAppend).toString()); // <-- YIKES! > } > ... > } > {code} > Must check for null, and substitute with > {{"\_\_HIVE_DEFAULT_PARTITION\_\_"}}, or equivalent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST
[ https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-12994: --- Attachment: HIVE-12994.02.patch > Implement support for NULLS FIRST/NULLS LAST > > > Key: HIVE-12994 > URL: https://issues.apache.org/jira/browse/HIVE-12994 > Project: Hive > Issue Type: New Feature > Components: CBO, Metastore, Parser, Serializers/Deserializers >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-12994.01.patch, HIVE-12994.02.patch, > HIVE-12994.patch > > > From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to > determine whether nulls appear before or after non-null data values when the > ORDER BY clause is used. > SQL standard does not specify the behavior by default. Currently in Hive, > null values sort as if lower than any non-null value; that is, NULLS FIRST is > the default for ASC order, and NULLS LAST for DESC order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
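The default ordering described in the issue — nulls sort as lower than any non-null value, so NULLS FIRST for ASC and NULLS LAST for DESC — can be mimicked with `java.util.Comparator`'s null-handling wrappers. This is an analogy for the semantics, not Hive's actual sort implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NullsOrderDemo {
    // ASC under Hive's default: nulls lowest, i.e. NULLS FIRST.
    static List<Integer> sortAsc(List<Integer> in) {
        List<Integer> out = new ArrayList<>(in);
        out.sort(Comparator.nullsFirst(Comparator.<Integer>naturalOrder()));
        return out;
    }

    // DESC under Hive's default: nulls lowest, i.e. NULLS LAST.
    static List<Integer> sortDesc(List<Integer> in) {
        List<Integer> out = new ArrayList<>(in);
        out.sort(Comparator.nullsLast(Comparator.<Integer>reverseOrder()));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(sortAsc(Arrays.asList(2, null, 1)));   // [null, 1, 2]
        System.out.println(sortDesc(Arrays.asList(2, null, 1)));  // [2, 1, null]
    }
}
```

The NULLS FIRST/NULLS LAST syntax in the patch lets a query override either default explicitly.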
[jira] [Updated] (HIVE-12254) Improve logging with yarn/hdfs
[ https://issues.apache.org/jira/browse/HIVE-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-12254: -- Attachment: HIVE-12254.2.patch Update. > Improve logging with yarn/hdfs > -- > > Key: HIVE-12254 > URL: https://issues.apache.org/jira/browse/HIVE-12254 > Project: Hive > Issue Type: Bug > Components: Shims >Affects Versions: 1.2.1 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-12254.1.patch, HIVE-12254.2.patch > > > In extension to HIVE-12249, adding info for Yarn/HDFS as well. Both > HIVE-12249 and HDFS-9184 are required (and upgraded in hive for the HDFS > issue) before this can be resolved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST
[ https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-12994: --- Attachment: (was: HIVE-12994.02.patch) > Implement support for NULLS FIRST/NULLS LAST > > > Key: HIVE-12994 > URL: https://issues.apache.org/jira/browse/HIVE-12994 > Project: Hive > Issue Type: New Feature > Components: CBO, Metastore, Parser, Serializers/Deserializers >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-12994.01.patch, HIVE-12994.patch > > > From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to > determine whether nulls appear before or after non-null data values when the > ORDER BY clause is used. > SQL standard does not specify the behavior by default. Currently in Hive, > null values sort as if lower than any non-null value; that is, NULLS FIRST is > the default for ASC order, and NULLS LAST for DESC order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12878) Support Vectorization for TEXTFILE and other formats
[ https://issues.apache.org/jira/browse/HIVE-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138119#comment-15138119 ] Hive QA commented on HIVE-12878: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786780/HIVE-12878.01.patch {color:green}SUCCESS:{color} +1 due to 13 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1253 failed/errored test(s), 9718 tests executed *Failed tests:* {noformat} TestSparkCliDriver-auto_join11.q-vector_groupby_3.q-smb_mapjoin_8.q-and-3-more - did not produce a TEST-*.xml file TestSparkCliDriver-auto_join9.q-bucketmapjoin10.q-skewjoinopt19.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-auto_join_reordering_values.q-auto_sortmerge_join_7.q-multigroupby_singlemr.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-avro_decimal_native.q-bucketmapjoin12.q-ppd_outer_join2.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-bucketsortoptimize_insert_7.q-enforce_order.q-join36.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-escape_distributeby1.q-union_remove_7.q-skewjoin_union_remove_2.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby2_noskew_multi_distinct.q-skewjoin_noskew.q-vector_data_types.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby4.q-timestamp_null.q-auto_join23.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_complex_types.q-vectorization_10.q-join4.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_grouping_id2.q-bucketmapjoin4.q-groupby7.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_cond_pushdown_3.q-groupby7_noskew.q-auto_join13.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more - did not produce a TEST-*.xml file 
TestSparkCliDriver-load_dyn_part12.q-nullgroup4_multi_distinct.q-union14.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-load_dyn_part5.q-skewjoinopt8.q-groupby1_noskew.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_gby_join.q-stats2.q-groupby_rollup1.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-skewjoinopt15.q-bucketmapjoin3.q-auto_join10.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-smb_mapjoin_15.q-auto_sortmerge_join_13.q-auto_join18_multi_distinct.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-smb_mapjoin_4.q-auto_join19.q-mapreduce1.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-stats13.q-groupby6_map.q-join_casesensitive.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-tez_joins_explain.q-input17.q-union29.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-vector_distinct_2.q-load_dyn_part2.q-udf_percentile.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_add_part_multiple org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alias_casted_column org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_varchar2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_analyze_tbl_part org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_deep_filters org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_filter org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_groupby org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_groupby2 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_table org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ansi_sql_arithmetic org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join0 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join14 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join15 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join17 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join18 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join18_multi_distinct org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join19 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join20 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_joi
[jira] [Updated] (HIVE-13002) metastore call timing is not threadsafe
[ https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13002: Attachment: HIVE-13002.01.patch > metastore call timing is not threadsafe > --- > > Key: HIVE-13002 > URL: https://issues.apache.org/jira/browse/HIVE-13002 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13002.01.patch, HIVE-13002.patch > > > Discovered in some q test run: > {noformat} > TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 > Unexpected exception java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) > at java.util.HashMap$EntryIterator.next(HashMap.java:966) > at java.util.HashMap$EntryIterator.next(HashMap.java:964) > at > org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412) > at > org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
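The ConcurrentModificationException in the stack trace above is the classic symptom of iterating a plain HashMap while another caller mutates it. A minimal standalone sketch (not Hive's actual patch; the class and map contents are hypothetical) showing the fail-fast behavior, and that a ConcurrentHashMap's weakly consistent iterators avoid it:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentModificationException;

public class CmeDemo {
    /** A HashMap modified mid-iteration fails fast with CME. */
    static boolean hashMapThrows() {
        Map<String, Long> timings = new HashMap<>();
        timings.put("get_table", 10L);
        timings.put("get_partitions", 25L);
        try {
            for (Map.Entry<String, Long> e : timings.entrySet()) {
                // Structural modification during iteration, as would happen
                // when a second thread records a metastore call timing.
                timings.put("alter_table", 5L);
            }
        } catch (ConcurrentModificationException ex) {
            return true;
        }
        return false;
    }

    /** ConcurrentHashMap iterators are weakly consistent: no CME. */
    static boolean concurrentMapSafe() {
        Map<String, Long> timings = new ConcurrentHashMap<>();
        timings.put("get_table", 10L);
        timings.put("get_partitions", 25L);
        for (Map.Entry<String, Long> e : timings.entrySet()) {
            timings.put("alter_table", 5L);
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("HashMap threw CME: " + hashMapThrows());
        System.out.println("ConcurrentHashMap safe: " + concurrentMapSafe());
    }
}
```

Note that ConcurrentHashMap only fixes the iteration crash; if per-thread timing is wanted, a thread-local map is the other obvious direction.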
[jira] [Updated] (HIVE-12924) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_ppr_multi_distinct.q failure
[ https://issues.apache.org/jira/browse/HIVE-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-12924: - Attachment: HIVE-12924.3.patch > CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver > groupby_ppr_multi_distinct.q failure > > > Key: HIVE-12924 > URL: https://issues.apache.org/jira/browse/HIVE-12924 > Project: Hive > Issue Type: Sub-task > Components: CBO >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-12924.1.patch, HIVE-12924.2.patch, > HIVE-12924.3.patch > > > {code} > EXPLAIN EXTENDED > FROM srcpart src > INSERT OVERWRITE TABLE dest1 > SELECT substr(src.key,1,1), count(DISTINCT substr(src.value,5)), > concat(substr(src.key,1,1),sum(substr(src.value,5))), sum(DISTINCT > substr(src.value, 5)), count(DISTINCT src.value) > WHERE src.ds = '2008-04-08' > GROUP BY substr(src.key,1,1) > {code} > Ended Job = job_local968043618_0742 with errors > FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.mr.MapRedTask -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.
[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138106#comment-15138106 ] Sergey Shelukhin commented on HIVE-11470: - Sure. Can you ping me when you commit it? I was about to cut another RC after committing HIVE-13025 > NPE in DynamicPartFileRecordWriterContainer on null part-keys. > -- > > Key: HIVE-11470 > URL: https://issues.apache.org/jira/browse/HIVE-11470 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 1.2.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Fix For: 2.1.0 > > Attachments: HIVE-11470.1.patch, HIVE-11470.2.patch > > > When partitioning data using {{HCatStorer}}, one sees the following NPE, if > the dyn-part-key is of null-value: > {noformat} > 2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.IOException: java.lang.NullPointerException > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256) > at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) > at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694) > at 
org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: java.lang.NullPointerException > at > org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54) > at > org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309) > at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98) > at > org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558) > at > org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89) > at > org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471) > ... 11 more > {noformat} > The reason is that the {{DynamicPartitionFileRecordWriterContainer}} makes an > unfortunate assumption when fetching a local file-writer instance: > {code:title=DynamicPartitionFileRecordWriterContainer.java} > @Override > protected LocalFileWriter getLocalFileWriter(HCatRecord value) > throws IOException, HCatException { > > OutputJobInfo localJobInfo = null; > // Calculate which writer to use from the remaining values - this needs to > // be done before we delete cols. 
> List<String> dynamicPartValues = new ArrayList<String>(); > for (Integer colToAppend : dynamicPartCols) { > dynamicPartValues.add(value.get(colToAppend).toString()); // <-- YIKES! > } > ... > } > {code} > Must check for null, and substitute with > {{"\_\_HIVE_DEFAULT_PARTITION\_\_"}}, or equivalent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
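The null check suggested above can be sketched as a standalone version of that loop (illustrative only — `DynPartValues` and its method signature are hypothetical, not the committed patch; the placeholder string is the value the JIRA itself names, though in Hive it is configurable):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DynPartValues {
    // Hive's conventional placeholder for a null partition key value.
    static final String DEFAULT_PARTITION_NAME = "__HIVE_DEFAULT_PARTITION__";

    /**
     * Null-safe variant of the quoted loop: substitute the default
     * partition name instead of calling toString() on a null column.
     */
    static List<String> dynamicPartValues(List<Object> record,
                                          List<Integer> dynamicPartCols) {
        List<String> values = new ArrayList<String>();
        for (Integer colToAppend : dynamicPartCols) {
            Object col = record.get(colToAppend);
            values.add(col == null ? DEFAULT_PARTITION_NAME : col.toString());
        }
        return values;
    }

    public static void main(String[] args) {
        List<Object> record = Arrays.asList("k1", 42, null);
        // Partition columns at indices 1 and 2; index 2 is null.
        System.out.println(dynamicPartValues(record, Arrays.asList(1, 2)));
    }
}
```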
[jira] [Updated] (HIVE-12999) Tez: Vertex creation reduce NN IPCs
[ https://issues.apache.org/jira/browse/HIVE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-12999: --- Summary: Tez: Vertex creation reduce NN IPCs (was: Tez: Vertex creation is slowed down when NN throttles IPCs) > Tez: Vertex creation reduce NN IPCs > --- > > Key: HIVE-12999 > URL: https://issues.apache.org/jira/browse/HIVE-12999 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 1.2.0, 1.3.0, 2.0.0, 2.1.0 >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-12999.1.patch > > > Tez vertex building has a decidedly slow path in the code, which is not > related to the DAG plan at all. > The total number of RPC calls is not related to the total number of > operators, due to a bug in the DagUtils inner loops. > {code} > at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877) > at > org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3207) > at > org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3170) > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:548) > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:1151) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.build(TezTask.java:388) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:175) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.
[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138098#comment-15138098 ] Sushanth Sowmyan commented on HIVE-11470: - [~sershe], can I get this on 2.0 as well, if you're still adding contenders? I've seen this bug appear on a couple of other reports I've had. > NPE in DynamicPartFileRecordWriterContainer on null part-keys. > -- > > Key: HIVE-11470 > URL: https://issues.apache.org/jira/browse/HIVE-11470 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 1.2.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Fix For: 2.1.0 > > Attachments: HIVE-11470.1.patch, HIVE-11470.2.patch > > > When partitioning data using {{HCatStorer}}, one sees the following NPE, if > the dyn-part-key is of null-value: > {noformat} > 2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.IOException: java.lang.NullPointerException > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256) > at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) > at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: java.lang.NullPointerException > at > org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110) > at > org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54) > at > org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309) > at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98) > at > org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558) > at > org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89) > at > org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105) > at > org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471) > ... 11 more > {noformat} > The reason is that the {{DynamicPartitionFileRecordWriterContainer}} makes an > unfortunate assumption when fetching a local file-writer instance: > {code:title=DynamicPartitionFileRecordWriterContainer.java} > @Override > protected LocalFileWriter getLocalFileWriter(HCatRecord value) > throws IOException, HCatException { > > OutputJobInfo localJobInfo = null; > // Calculate which writer to use from the remaining values - this needs to > // be done before we delete cols. 
> List<String> dynamicPartValues = new ArrayList<String>(); > for (Integer colToAppend : dynamicPartCols) { > dynamicPartValues.add(value.get(colToAppend).toString()); // <-- YIKES! > } > ... > } > {code} > Must check for null, and substitute with > {{"\_\_HIVE_DEFAULT_PARTITION\_\_"}}, or equivalent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12856) LLAP: update (add/remove) the UDFs available in LLAP when they are changed (refresh periodically)
[ https://issues.apache.org/jira/browse/HIVE-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138020#comment-15138020 ] Sergey Shelukhin commented on HIVE-12856: - The errors are due to issues in one of the blocking patches that were already fixed in that jira. I think I'll just wait until the two blocking JIRAs are committed. > LLAP: update (add/remove) the UDFs available in LLAP when they are changed > (refresh periodically) > - > > Key: HIVE-12856 > URL: https://issues.apache.org/jira/browse/HIVE-12856 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-12856.nogen.patch, HIVE-12856.patch > > > I don't think re-querying the functions is going to scale, and the sessions > obviously cannot notify all LLAP clusters of every change. We should add > global versioning to metastore functions to track changes, and then possibly > add a notification mechanism, potentially thru ZK to avoid overloading the > metastore itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13025: Issue Type: Improvement (was: Bug) > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Improvement >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13025.patch > > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138010#comment-15138010 ] Sushanth Sowmyan commented on HIVE-13025: - Makes sense. > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13025.patch > > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12558) LLAP: output QueryFragmentCounters somewhere
[ https://issues.apache.org/jira/browse/HIVE-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138000#comment-15138000 ] Sergey Shelukhin commented on HIVE-12558: - Left some comments on RB > LLAP: output QueryFragmentCounters somewhere > > > Key: HIVE-12558 > URL: https://issues.apache.org/jira/browse/HIVE-12558 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Sergey Shelukhin >Assignee: Prasanth Jayachandran > Attachments: HIVE-12558.1.patch, HIVE-12558.wip.patch, > sample-output.png > > > Right now, LLAP logs counters for every fragment; most of them are IO related > and could be very useful, they also include table names so that things like > cache hit ratio, etc., could be calculated for every table. > We need to output them to some metrics system (preserving the breakdown by > table, possibly also adding query ID or even stage) so that they'd be usable > without grep/sed/awk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13026) Pending/running operation metrics are wrong
[ https://issues.apache.org/jira/browse/HIVE-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang resolved HIVE-13026. Resolution: Invalid Looked into it and found that we should use the active call related metrics. > Pending/running operation metrics are wrong > --- > > Key: HIVE-13026 > URL: https://issues.apache.org/jira/browse/HIVE-13026 > Project: Hive > Issue Type: Bug >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > > A query is finished, however the pending/running operation count doesn't > decrease. > For example, in TestHs2Metrics::testMetrics(), we have > {noformat} > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, > "api_hs2_operation_PENDING", 1); > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, > "api_hs2_operation_RUNNING", 1); > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.COUNTER, > "hs2_completed_operation_FINISHED", 1); > {noformat} > Should it be below? > {noformat} > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, > "api_hs2_operation_PENDING", 0); > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, > "api_hs2_operation_RUNNING", 0); > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.COUNTER, > "hs2_completed_operation_FINISHED", 1); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results
[ https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137936#comment-15137936 ] Jing Zhao commented on HIVE-11527: -- So IIUC, after replacing hdfs with webhdfs, the new URI will still be resolved by FileSystem and finally a WebHdfsFileSystem instance will be created and used? In that case, I think we can: # If there is both hostname and port contained in the original hdfs URI, both "hdfs" and port need to be replaced # If there is no port in the original URI, this can be a logical URI (for NameNode HA setup). Since WebHdfsFileSystem can also correctly handle logical URI, replacing "hdfs" with "webhdfs" should be good enough # It is also possible the URI only contains host name but no port, and the default port will be loaded from configuration for either hdfs/webhdfs. In that case replacing "hdfs" with "webhdfs" should also work. > bypass HiveServer2 thrift interface for query results > - > > Key: HIVE-11527 > URL: https://issues.apache.org/jira/browse/HIVE-11527 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Sergey Shelukhin >Assignee: Takanobu Asanuma > Attachments: HIVE-11527.WIP.patch > > > Right now, HS2 reads query results and returns them to the caller via its > thrift API. > There should be an option for HS2 to return some pointer to results (an HDFS > link?) and for the user to read the results directly off HDFS inside the > cluster, or via something like WebHDFS outside the cluster > Review board link: https://reviews.apache.org/r/40867 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
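The three cases in the comment above can be sketched with `java.net.URI` (a simplified illustration, not the patch: the hard-coded 50070 — the classic NameNode HTTP default — stands in for a port that a real implementation would read from the HDFS configuration):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class WebHdfsUri {
    // Assumed default for illustration; in practice this comes from
    // dfs.namenode.http-address in the cluster configuration.
    static final int WEBHDFS_DEFAULT_PORT = 50070;

    /**
     * Rewrites an hdfs:// URL into a webhdfs:// one:
     *  1. host+port   -> swap both the scheme and the port,
     *  2. logical HA authority (no port) -> swap the scheme only,
     *  3. host, no port -> leave the port absent so defaults apply.
     */
    static String toWebHdfs(String hdfsUrl) {
        URI u = URI.create(hdfsUrl);
        int port = u.getPort() == -1 ? -1 : WEBHDFS_DEFAULT_PORT;
        try {
            return new URI("webhdfs", u.getUserInfo(), u.getHost(), port,
                    u.getPath(), u.getQuery(), u.getFragment()).toString();
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(toWebHdfs("hdfs://nn1.example.com:8020/user/hive"));
        System.out.println(toWebHdfs("hdfs://mycluster/user/hive")); // logical HA URI
    }
}
```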
[jira] [Commented] (HIVE-12993) user and password supplied from URL is overwritten by the empty user and password of the JDBC connection string when it's calling from beeline
[ https://issues.apache.org/jira/browse/HIVE-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137932#comment-15137932 ] Yongzhi Chen commented on HIVE-12993: - The change looks good. +1 > user and password supplied from URL is overwritten by the empty user and > password of the JDBC connection string when it's calling from beeline > -- > > Key: HIVE-12993 > URL: https://issues.apache.org/jira/browse/HIVE-12993 > Project: Hive > Issue Type: Bug > Components: Beeline, JDBC >Affects Versions: 2.0.0, 2.1.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-12993.1.patch > > > When we make the call {{beeline -u > "jdbc:hive2://localhost:1/;user=aaa;password=bbb"}}, the user and > password are overwritten by the blank ones since internally it constructs a > "connect '' '' " call with empty user and password. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
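The session-variable parsing at the root of the bug above can be illustrated with a small sketch (not beeline's actual code; class, method, and the example port are hypothetical). A fix along the JIRA's lines would prefer these URL-supplied values over the empty user/password beeline otherwise passes to its internal connect call:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BeelineUrl {
    /**
     * Extracts the semicolon-separated session variables (user,
     * password, ...) from a HiveServer2 JDBC URL.
     */
    static Map<String, String> sessionVars(String jdbcUrl) {
        Map<String, String> vars = new LinkedHashMap<String, String>();
        String[] parts = jdbcUrl.split(";");
        for (int i = 1; i < parts.length; i++) { // parts[0] is the host/db prefix
            int eq = parts[i].indexOf('=');
            if (eq > 0) {
                vars.put(parts[i].substring(0, eq), parts[i].substring(eq + 1));
            }
        }
        return vars;
    }

    public static void main(String[] args) {
        Map<String, String> vars =
                sessionVars("jdbc:hive2://localhost:10000/;user=aaa;password=bbb");
        System.out.println(vars); // {user=aaa, password=bbb}
    }
}
```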
[jira] [Updated] (HIVE-12857) LLAP: modify the decider to allow using LLAP with whitelisted UDFs
[ https://issues.apache.org/jira/browse/HIVE-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-12857: Attachment: HIVE-12857.01.patch > LLAP: modify the decider to allow using LLAP with whitelisted UDFs > -- > > Key: HIVE-12857 > URL: https://issues.apache.org/jira/browse/HIVE-12857 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-12857.01.patch, HIVE-12857.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12857) LLAP: modify the decider to allow using LLAP with whitelisted UDFs
[ https://issues.apache.org/jira/browse/HIVE-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-12857: Attachment: (was: HIVE-12857.01.patch) > LLAP: modify the decider to allow using LLAP with whitelisted UDFs > -- > > Key: HIVE-12857 > URL: https://issues.apache.org/jira/browse/HIVE-12857 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-12857.01.patch, HIVE-12857.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12857) LLAP: modify the decider to allow using LLAP with whitelisted UDFs
[ https://issues.apache.org/jira/browse/HIVE-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-12857: Attachment: HIVE-12857.01.patch Updated. > LLAP: modify the decider to allow using LLAP with whitelisted UDFs > -- > > Key: HIVE-12857 > URL: https://issues.apache.org/jira/browse/HIVE-12857 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-12857.01.patch, HIVE-12857.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12941) Unexpected result when using MIN() on struct with NULL in first field
[ https://issues.apache.org/jira/browse/HIVE-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137914#comment-15137914 ] Yongzhi Chen commented on HIVE-12941: - The failures are not related. > Unexpected result when using MIN() on struct with NULL in first field > - > > Key: HIVE-12941 > URL: https://issues.apache.org/jira/browse/HIVE-12941 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 1.1.0 >Reporter: Jan-Erik Hedbom >Assignee: Yongzhi Chen > Attachments: HIVE-12941.1.patch, HIVE-12941.2.patch > > > Using MIN() on struct with NULL in first field of a row yields NULL as result. > Example: > select min(a) FROM (select 1 as a union all select 2 as a union all select > cast(null as int) as a) tmp; > OK > _c0 > 1 > As expected. But if we wrap it in a struct: > select min(a) FROM (select named_struct("field",1) as a union all select > named_struct("field",2) as a union all select named_struct("field",cast(null > as int)) as a) tmp; > OK > _c0 > NULL > Using MAX() works as expected for structs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137908#comment-15137908 ] Sergey Shelukhin commented on HIVE-13025: - [~alangates] yes. [~sushanth] the datanucleus callstack is huge and pretty useless. I log it before throwing the exception to the user. > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13025.patch > > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137908#comment-15137908 ] Sergey Shelukhin edited comment on HIVE-13025 at 2/8/16 10:44 PM: -- [~alangates] yes. [~sushanth] the datanucleus callstack is huge and pretty useless; on CLI, it pushes the exception message up and out of view. I log it before throwing the exception to the user. was (Author: sershe): [~alangates] yes. [~sushanth] the datanucleus callstack is huge and pretty useless. I log it before throwing the exception to the user. > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13025.patch > > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137898#comment-15137898 ] Alan Gates commented on HIVE-13025: --- Change looks fine. Is the exception/error message the same for different RDBMS backends? > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13025.patch > > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137897#comment-15137897 ] Sushanth Sowmyan commented on HIVE-13025: - Looks reasonable to me. +1. That said, any preferences on setting the cause of the SchemaException generated as ex itself? I see both sides to this: On the plus side, it's additional info, and can be used to reason about where the issue came from in debugging at some point. On the negative side, it is additional info, and clutter, and most of the time, unneeded extra elements in exception stack that gets passed on back. > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13025.patch > > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137895#comment-15137895 ] Prasanth Jayachandran commented on HIVE-13025: -- lgtm, +1 > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13025.patch > > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13023) Unable to create tables using "STORED AS"
[ https://issues.apache.org/jira/browse/HIVE-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin resolved HIVE-13023. - Resolution: Cannot Reproduce This appears to be a packaging problem with RC1. I will double check RC2 to see that it doesn't happen. > Unable to create tables using "STORED AS" > - > > Key: HIVE-13023 > URL: https://issues.apache.org/jira/browse/HIVE-13023 > Project: Hive > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Prasanth Jayachandran >Assignee: Sergey Shelukhin >Priority: Blocker > > When testing the new RC for 2.0.0 release, I got the following exception when > creating ORC table > {code} > hive> > > create table src_orc(k string, v int) stored as orc; > Exception in thread "b3a2d83b-bdc2-46f4-82c0-eb79d59590d9 > b3a2d83b-bdc2-46f4-82c0-eb79d59590d9 main" java.lang.AssertionError: Unknown > token: [@-1,0:0='TOK_FILEFORMAT_GENERIC',<715>,0:-1] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:10875) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:9989) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10093) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:479) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:319) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1255) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1301) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1184) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1172) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) > at 
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:778) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:717) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13025: Attachment: HIVE-13025.patch [~prasanth_j] [~sushanth] [~alangates] can you take a look? I made this a release blocker for now :) > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13025.patch > > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13026) Pending/running operation metrics are wrong
[ https://issues.apache.org/jira/browse/HIVE-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137855#comment-15137855 ] Jimmy Xiang commented on HIVE-13026: [~szehon]/[~aihuaxu], I tested this in my local HS2 and found these metrics don't decrease even after my client is closed for a while. > Pending/running operation metrics are wrong > --- > > Key: HIVE-13026 > URL: https://issues.apache.org/jira/browse/HIVE-13026 > Project: Hive > Issue Type: Bug >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > > A query is finished, however the pending/running operation count doesn't > decrease. > For example, in TestHs2Metrics::testMetrics(), we have > {noformat} > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, > "api_hs2_operation_PENDING", 1); > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, > "api_hs2_operation_RUNNING", 1); > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.COUNTER, > "hs2_completed_operation_FINISHED", 1); > {noformat} > Should it be below? > {noformat} > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, > "api_hs2_operation_PENDING", 0); > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, > "api_hs2_operation_RUNNING", 0); > MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.COUNTER, > "hs2_completed_operation_FINISHED", 1); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
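The counting behavior the comment expects (PENDING/RUNNING going back to zero once an operation completes, with only the terminal counter left at 1) can be sketched as follows. This is a minimal illustrative model, not Hive's actual Metrics API; the class and method names are hypothetical.

```python
# Minimal sketch (not Hive's Metrics API) of state-scoped operation
# bookkeeping: entering a state increments its gauge, leaving it
# decrements, so PENDING/RUNNING return to 0 when the operation finishes.
from collections import defaultdict

class OperationMetrics:
    def __init__(self):
        self.active = defaultdict(int)     # live gauges: PENDING, RUNNING
        self.completed = defaultdict(int)  # terminal counters: FINISHED, ...

    def on_state_change(self, old_state, new_state):
        if old_state is not None:
            self.active[old_state] -= 1    # leave the previous state
        if new_state in ("PENDING", "RUNNING"):
            self.active[new_state] += 1    # enter a live state
        else:
            self.completed[new_state] += 1 # terminal state is counted once

m = OperationMetrics()
m.on_state_change(None, "PENDING")
m.on_state_change("PENDING", "RUNNING")
m.on_state_change("RUNNING", "FINISHED")
assert m.active["PENDING"] == 0 and m.active["RUNNING"] == 0
assert m.completed["FINISHED"] == 1
```

Under this model the test's expected values would indeed be 0/0/1 after the query finishes, matching the second {noformat} block in the report.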
[jira] [Commented] (HIVE-12993) user and password supplied from URL is overwritten by the empty user and password of the JDBC connection string when it's calling from beeline
[ https://issues.apache.org/jira/browse/HIVE-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137835#comment-15137835 ] Aihua Xu commented on HIVE-12993: - Test failures are not related. > user and password supplied from URL is overwritten by the empty user and > password of the JDBC connection string when it's calling from beeline > -- > > Key: HIVE-12993 > URL: https://issues.apache.org/jira/browse/HIVE-12993 > Project: Hive > Issue Type: Bug > Components: Beeline, JDBC >Affects Versions: 2.0.0, 2.1.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-12993.1.patch > > > When we make the call {{beeline -u > "jdbc:hive2://localhost:1/;user=aaa;password=bbb"}}, the user and > password are overwritten by the blank ones since internally it constructs a > "connect '' '' " call with empty user and password. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
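The fix direction described in the issue, where credentials embedded in the URL's session-variable section must not be clobbered by a blank internal "connect '' ''" call, can be sketched like this. This is an illustrative sketch only, not Beeline's actual code; the function name and precedence rules are assumptions.

```python
# Illustrative sketch (not Beeline's code): credentials found in the JDBC
# URL's ;key=value section should win over empty defaults, instead of
# being overwritten by a blank "connect '' ''" call.
def resolve_credentials(jdbc_url, cli_user="", cli_password=""):
    """Prefer explicit CLI credentials, then URL params, then empty."""
    params = {}
    # e.g. jdbc:hive2://host:port/db;user=aaa;password=bbb
    for part in jdbc_url.split(";")[1:]:
        if "=" in part:
            key, _, value = part.partition("=")
            params[key] = value
    user = cli_user or params.get("user", "")
    password = cli_password or params.get("password", "")
    return user, password

url = "jdbc:hive2://localhost:10000/default;user=aaa;password=bbb"
assert resolve_credentials(url) == ("aaa", "bbb")        # URL beats blanks
assert resolve_credentials(url, "x", "y") == ("x", "y")  # explicit CLI wins
```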
[jira] [Updated] (HIVE-13025) need a better error message for when one needs to run schematool
[ https://issues.apache.org/jira/browse/HIVE-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13025: Description: Might as well fix it, since the RC is sunk and this was not obvious to the people testing it. (was: Might as well fix it, since the RC is sunk and it was not obvious when people were testing.) > need a better error message for when one needs to run schematool > > > Key: HIVE-13025 > URL: https://issues.apache.org/jira/browse/HIVE-13025 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > > Might as well fix it, since the RC is sunk and this was not obvious to the > people testing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137819#comment-15137819 ] Prasanth Jayachandran commented on HIVE-13024: -- Done. Pushed to master as well. > schematool does not log anywhere > > > Key: HIVE-13024 > URL: https://issues.apache.org/jira/browse/HIVE-13024 > Project: Hive > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HIVE-13024.1.patch > > > When testing the new RC for 2.0.0 release, I tried using the schematool to > create the initial schema. While doing so encountered the following error > {code} > ./bin/schematool -initSchema -dbType mysql > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Metastore connection URL: > jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true > Metastore Connection Driver : com.mysql.jdbc.Driver > Metastore connection User: hive > org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema > version. > *** schemaTool failed *** > {code} > I could not find the reason for this error as the log messages are not logged > to the log file. 
Logging does not seem to be initialized properly for schematool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13024: - Affects Version/s: 2.0.0 > schematool does not log anywhere > > > Key: HIVE-13024 > URL: https://issues.apache.org/jira/browse/HIVE-13024 > Project: Hive > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HIVE-13024.1.patch > > > When testing the new RC for 2.0.0 release, I tried using the schematool to > create the initial schema. While doing so encountered the following error > {code} > ./bin/schematool -initSchema -dbType mysql > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Metastore connection URL: > jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true > Metastore Connection Driver : com.mysql.jdbc.Driver > Metastore connection User: hive > org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema > version. > *** schemaTool failed *** > {code} > I could not find the reason for this error as the log messages are not logged > to the log file. Logging seems to be not initialized properly for schematool. 
[jira] [Commented] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137818#comment-15137818 ] Sergey Shelukhin commented on HIVE-13024: - What about master? > schematool does not log anywhere > > > Key: HIVE-13024 > URL: https://issues.apache.org/jira/browse/HIVE-13024 > Project: Hive > Issue Type: Bug >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HIVE-13024.1.patch > > > When testing the new RC for 2.0.0 release, I tried using the schematool to > create the initial schema. While doing so encountered the following error > {code} > ./bin/schematool -initSchema -dbType mysql > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Metastore connection URL: > jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true > Metastore Connection Driver : com.mysql.jdbc.Driver > Metastore connection User: hive > org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema > version. > *** schemaTool failed *** > {code} > I could not find the reason for this error as the log messages are not logged > to the log file. 
Logging does not seem to be initialized properly for schematool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran resolved HIVE-13024. -- Resolution: Fixed Fix Version/s: 2.0.0 Committed to branch-2.0 > schematool does not log anywhere > > > Key: HIVE-13024 > URL: https://issues.apache.org/jira/browse/HIVE-13024 > Project: Hive > Issue Type: Bug >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HIVE-13024.1.patch > > > When testing the new RC for 2.0.0 release, I tried using the schematool to > create the initial schema. While doing so encountered the following error > {code} > ./bin/schematool -initSchema -dbType mysql > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Metastore connection URL: > jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true > Metastore Connection Driver : com.mysql.jdbc.Driver > Metastore connection User: hive > org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema > version. > *** schemaTool failed *** > {code} > I could not find the reason for this error as the log messages are not logged > to the log file. 
Logging does not seem to be initialized properly for schematool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12987) Add metrics for HS2 active users and SQL operations
[ https://issues.apache.org/jira/browse/HIVE-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HIVE-12987: --- Attachment: HIVE-12987.4.patch Attached v4 that addressed some review comments, so that Operation::setMetrics is private, SQLOperation uses onNewState instead of overriding Operation::setMetrics, removed prevState member variable. > Add metrics for HS2 active users and SQL operations > --- > > Key: HIVE-12987 > URL: https://issues.apache.org/jira/browse/HIVE-12987 > Project: Hive > Issue Type: Task >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Attachments: HIVE-12987.1.patch, HIVE-12987.2.patch, > HIVE-12987.2.patch, HIVE-12987.3.patch, HIVE-12987.3.patch, HIVE-12987.4.patch > > > HIVE-12271 added metrics for all HS2 operations. Sometimes, users are also > interested in metrics just for SQL operations. > It is useful to track active user count as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13015) Update SLF4j version to 1.7.10
[ https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137747#comment-15137747 ] Sergey Shelukhin commented on HIVE-13015: - I also see bindings coming from the jdbc jar when CLI complains about multiple bindings. > Update SLF4j version to 1.7.10 > -- > > Key: HIVE-13015 > URL: https://issues.apache.org/jira/browse/HIVE-13015 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13015.1.patch > > > In some of the recent test runs, we are seeing multiple bindings for SLF4j > that causes issues with LOG4j2 logger. > {code} > SLF4J: Found binding in > [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > {code} > We have added explicit exclusions for slf4j-log4j12 but some library is > pulling it transitively and it's getting packaged with hive libs. Also hive > currently uses version 1.7.5 for slf4j. We should add dependency convergence > for sl4fj and also remove packaging of slf4j-log4j12.*.jar -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13015) Update SLF4j version to 1.7.10
[ https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137737#comment-15137737 ] Gopal V commented on HIVE-13015: In LLAP runs that work right, the SLF4J comes from jdbc-standalone.jar from lib/ > Update SLF4j version to 1.7.10 > -- > > Key: HIVE-13015 > URL: https://issues.apache.org/jira/browse/HIVE-13015 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13015.1.patch > > > In some of the recent test runs, we are seeing multiple bindings for SLF4j > that causes issues with LOG4j2 logger. > {code} > SLF4J: Found binding in > [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > {code} > We have added explicit exclusions for slf4j-log4j12 but some library is > pulling it transitively and it's getting packaged with hive libs. Also hive > currently uses version 1.7.5 for slf4j. We should add dependency convergence > for sl4fj and also remove packaging of slf4j-log4j12.*.jar -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13015) Update SLF4j version to 1.7.10
[ https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137733#comment-15137733 ] Prasanth Jayachandran commented on HIVE-13015: -- None of the logging jars are in hive-exec jar > Update SLF4j version to 1.7.10 > -- > > Key: HIVE-13015 > URL: https://issues.apache.org/jira/browse/HIVE-13015 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13015.1.patch > > > In some of the recent test runs, we are seeing multiple bindings for SLF4j > that causes issues with LOG4j2 logger. > {code} > SLF4J: Found binding in > [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > {code} > We have added explicit exclusions for slf4j-log4j12 but some library is > pulling it transitively and it's getting packaged with hive libs. Also hive > currently uses version 1.7.5 for slf4j. We should add dependency convergence > for sl4fj and also remove packaging of slf4j-log4j12.*.jar -- This message was sent by Atlassian JIRA (v6.3.4#6332)
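To confirm which packaged jars contribute the SLF4J binding class reported in the warnings above, one can scan the distribution's lib/ directory for `org/slf4j/impl/StaticLoggerBinder.class`. The following is a generic diagnostic sketch, not part of Hive:

```python
# Scan jars for the SLF4J binding class; more than one hit explains the
# "Class path contains multiple SLF4J bindings" warning.
import zipfile

BINDER = "org/slf4j/impl/StaticLoggerBinder.class"

def jars_with_binding(jar_paths):
    """Return the jars that contain an SLF4J StaticLoggerBinder."""
    hits = []
    for path in jar_paths:
        with zipfile.ZipFile(path) as jar:
            if BINDER in jar.namelist():
                hits.append(path)
    return hits
```

Running it over `lib/*.jar` of the binary tarball would show whether the binding is pulled in transitively (e.g. via the standalone JDBC jar) in addition to `log4j-slf4j-impl`.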
[jira] [Commented] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137727#comment-15137727 ] Sergey Shelukhin commented on HIVE-13024: - +1 > schematool does not log anywhere > > > Key: HIVE-13024 > URL: https://issues.apache.org/jira/browse/HIVE-13024 > Project: Hive > Issue Type: Bug >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Attachments: HIVE-13024.1.patch > > > When testing the new RC for 2.0.0 release, I tried using the schematool to > create the initial schema. While doing so encountered the following error > {code} > ./bin/schematool -initSchema -dbType mysql > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Metastore connection URL: > jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true > Metastore Connection Driver : com.mysql.jdbc.Driver > Metastore connection User: hive > org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema > version. > *** schemaTool failed *** > {code} > I could not find the reason for this error as the log messages are not logged > to the log file. Logging seems to be not initialized properly for schematool. 
[jira] [Commented] (HIVE-12941) Unexpected result when using MIN() on struct with NULL in first field
[ https://issues.apache.org/jira/browse/HIVE-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137729#comment-15137729 ] Hive QA commented on HIVE-12941: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786696/HIVE-12941.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10038 tests executed *Failed tests:* {noformat} TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestSSL.testSSLVersion {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6908/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6908/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6908/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12786696 - PreCommit-HIVE-TRUNK-Build > Unexpected result when using MIN() on struct with NULL in first field > - > > Key: HIVE-12941 > URL: https://issues.apache.org/jira/browse/HIVE-12941 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 1.1.0 >Reporter: Jan-Erik Hedbom >Assignee: Yongzhi Chen > Attachments: HIVE-12941.1.patch, HIVE-12941.2.patch > > > Using MIN() on struct with NULL in first field of a row yields NULL as result. 
> Example: > select min(a) FROM (select 1 as a union all select 2 as a union all select > cast(null as int) as a) tmp; > OK > _c0 > 1 > As expected. But if we wrap it in a struct: > select min(a) FROM (select named_struct("field",1) as a union all select > named_struct("field",2) as a union all select named_struct("field",cast(null > as int)) as a) tmp; > OK > _c0 > NULL > Using MAX() works as expected for structs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
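One plausible model of the surprising NULL result, assuming the aggregator compares structs field by field with SQL's NULL-propagating comparison, is sketched below. This is a hypothetical reconstruction for illustration, not Hive's actual UDAF code:

```python
# Sketch of why MIN over structs can come back NULL: with SQL three-valued
# logic, a NULL in the first field makes the comparison itself NULL, and a
# NULL comparison poisons the running minimum. Hypothetical model only.
def sql_less_than(left, right):
    """Three-valued '<': returns None when either operand is NULL."""
    if left is None or right is None:
        return None
    return left < right

def struct_min(structs):
    best = None
    for s in structs:
        if best is None:
            best = s
            continue
        cmp = sql_less_than(s[0], best[0])  # first field decides here
        if cmp is None:
            return None   # NULL comparison poisons the aggregate
        if cmp:
            best = s
    return best

rows = [(1,), (2,), (None,)]
assert struct_min(rows) is None        # matches the reported behavior
assert struct_min([(1,), (2,)]) == (1,)
```

Plain `MIN()` over a column avoids this because NULL values are skipped before comparison, which is why the un-wrapped query returns 1 as expected.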
[jira] [Commented] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137725#comment-15137725 ] Prasanth Jayachandran commented on HIVE-13024: -- [~sershe] Could you take a look? > schematool does not log anywhere > > > Key: HIVE-13024 > URL: https://issues.apache.org/jira/browse/HIVE-13024 > Project: Hive > Issue Type: Bug >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Attachments: HIVE-13024.1.patch > > > When testing the new RC for 2.0.0 release, I tried using the schematool to > create the initial schema. While doing so encountered the following error > {code} > ./bin/schematool -initSchema -dbType mysql > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Metastore connection URL: > jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true > Metastore Connection Driver : com.mysql.jdbc.Driver > Metastore connection User: hive > org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema > version. > *** schemaTool failed *** > {code} > I could not find the reason for this error as the log messages are not logged > to the log file. 
Logging does not seem to be initialized properly for schematool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13024: - Attachment: HIVE-13024.1.patch > schematool does not log anywhere > > > Key: HIVE-13024 > URL: https://issues.apache.org/jira/browse/HIVE-13024 > Project: Hive > Issue Type: Bug >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Attachments: HIVE-13024.1.patch > > > When testing the new RC for 2.0.0 release, I tried using the schematool to > create the initial schema. While doing so encountered the following error > {code} > ./bin/schematool -initSchema -dbType mysql > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Metastore connection URL: > jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true > Metastore Connection Driver : com.mysql.jdbc.Driver > Metastore connection User: hive > org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema > version. > *** schemaTool failed *** > {code} > I could not find the reason for this error as the log messages are not logged > to the log file. Logging seems to be not initialized properly for schematool. 
[jira] [Assigned] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-13024: Assignee: Prasanth Jayachandran > schematool does not log anywhere > > > Key: HIVE-13024 > URL: https://issues.apache.org/jira/browse/HIVE-13024 > Project: Hive > Issue Type: Bug >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > > When testing the new RC for 2.0.0 release, I tried using the schematool to > create the initial schema. While doing so encountered the following error > {code} > ./bin/schematool -initSchema -dbType mysql > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Metastore connection URL: > jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true > Metastore Connection Driver : com.mysql.jdbc.Driver > Metastore connection User: hive > org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema > version. > *** schemaTool failed *** > {code} > I could not find the reason for this error as the log messages are not logged > to the log file. Logging seems to be not initialized properly for schematool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-13023) Unable to create tables using "STORED AS"
[ https://issues.apache.org/jira/browse/HIVE-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-13023: --- Assignee: Sergey Shelukhin > Unable to create tables using "STORED AS" > - > > Key: HIVE-13023 > URL: https://issues.apache.org/jira/browse/HIVE-13023 > Project: Hive > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Prasanth Jayachandran >Assignee: Sergey Shelukhin >Priority: Blocker > > When testing the new RC for 2.0.0 release, I got the following exception when > creating ORC table > {code} > hive> > > create table src_orc(k string, v int) stored as orc; > Exception in thread "b3a2d83b-bdc2-46f4-82c0-eb79d59590d9 > b3a2d83b-bdc2-46f4-82c0-eb79d59590d9 main" java.lang.AssertionError: Unknown > token: [@-1,0:0='TOK_FILEFORMAT_GENERIC',<715>,0:-1] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:10875) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:9989) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10093) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:479) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:319) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1255) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1301) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1184) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1172) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400) > at > 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:778) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:717) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13024) schematool does not log anywhere
[ https://issues.apache.org/jira/browse/HIVE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prasanth Jayachandran updated HIVE-13024:
    Priority: Blocker  (was: Major)

> schematool does not log anywhere
>
>            Key: HIVE-13024
>            URL: https://issues.apache.org/jira/browse/HIVE-13024
>        Project: Hive
>     Issue Type: Bug
>       Reporter: Prasanth Jayachandran
>       Priority: Blocker
>
> When testing the new RC for the 2.0.0 release, I tried using the schematool to create the initial schema. While doing so, I encountered the following error:
> {code}
> ./bin/schematool -initSchema -dbType mysql
> ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/work/hive/release/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/work/tez/tez/tez-dist/target/tez-0.8.3-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Metastore connection URL: jdbc:mysql://localhost/metastore-release?createDatabaseIfNotExist=true
> Metastore Connection Driver : com.mysql.jdbc.Driver
> Metastore connection User: hive
> org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
> *** schemaTool failed ***
> {code}
> I could not find the reason for this error, as the log messages are not written to the log file. Logging does not seem to be initialized properly for schematool.
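The "No log4j2 configuration file found" message above means schematool fell back to console-only error logging. As a stopgap while the bug stands, a user can supply a configuration explicitly; the sketch below is an assumed minimal Log4j 2 properties file — the log file path, logger name, and pattern are illustrative, not anything Hive ships.

```properties
# Assumed minimal log4j2 configuration for schematool; fileName is illustrative.
status = warn
name = SchemaToolLogging
appenders = file

appender.file.type = File
appender.file.name = FILE
appender.file.fileName = /tmp/schematool.log
appender.file.layout.type = PatternLayout
appender.file.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %m%n

rootLogger.level = info
rootLogger.appenderRefs = file
rootLogger.appenderRef.file.ref = FILE
```

Pointing the JVM at it with -Dlog4j.configurationFile=/path/to/schematool-log4j2.properties (the properties format is supported from Log4j 2.4 onward) should at least surface the HiveMetaException stack trace in a file.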
[jira] [Commented] (HIVE-9862) Vectorized execution corrupts timestamp values
[ https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137587#comment-15137587 ] Matt McCline commented on HIVE-9862: [~jdere] Thank You! > Vectorized execution corrupts timestamp values > -- > > Key: HIVE-9862 > URL: https://issues.apache.org/jira/browse/HIVE-9862 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 1.0.0 >Reporter: Nathan Howell >Assignee: Matt McCline > Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, > HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, > HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch, HIVE-9862.09.patch > > > Timestamps in the future (year 2250?) and before ~1700 are silently corrupted > in vectorized execution mode. Simple repro: > {code} > hive> DROP TABLE IF EXISTS test; > hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC; > hive> INSERT INTO TABLE test VALUES ('-12-31 23:59:59'); > hive> SET hive.vectorized.execution.enabled = false; > hive> SELECT MAX(ts) FROM test; > -12-31 23:59:59 > hive> SET hive.vectorized.execution.enabled = true; > hive> SELECT MAX(ts) FROM test; > 1816-03-30 05:56:07.066277376 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
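For what it's worth, the corruption window described above (before ~1700, after ~2250) matches what a signed 64-bit count of nanoseconds can represent — roughly 1677 to 2262. Whether that is the actual mechanism in the vectorized code path is speculation, but the silent overflow itself is easy to demonstrate:

```java
// Demonstrates that packing an out-of-range timestamp into a signed 64-bit
// nanosecond counter overflows silently; a speculative model of the bug,
// not the actual Hive vectorization code.
class NanosOverflowDemo {
    // A signed 64-bit nanosecond counter covers only about +/-292 years
    // around the 1970 epoch.
    static boolean fitsInNanos(long epochSeconds) {
        long nanos = epochSeconds * 1_000_000_000L;    // may overflow silently
        return nanos / 1_000_000_000L == epochSeconds; // round-trip check
    }

    public static void main(String[] args) {
        long y2000 = 946_684_800L;        // 2000-01-01T00:00:00Z
        long y9999 = 253_402_300_799L;    // 9999-12-31T23:59:59Z
        System.out.println(fitsInNanos(y2000)); // true
        System.out.println(fitsInNanos(y9999)); // false: wraps to an unrelated date
    }
}
```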
[jira] [Commented] (HIVE-12790) Metastore connection leaks in HiveServer2
[ https://issues.apache.org/jira/browse/HIVE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137498#comment-15137498 ] Sergey Shelukhin commented on HIVE-12790: - Assuming the current RC passes, this will not make it into 2.0 as it was committed after the RC was cut. > Metastore connection leaks in HiveServer2 > - > > Key: HIVE-12790 > URL: https://issues.apache.org/jira/browse/HIVE-12790 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 1.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-12790.2.patch, HIVE-12790.3.patch, > HIVE-12790.patch, snippedLog.txt > > > HiveServer2 keeps opening new connections to HMS each time it launches a > task. These connections do not appear to be closed when the task completes > thus causing a HMS connection leak. "lsof" for the HS2 process shows > connections to port 9083. > {code} > 2015-12-03 04:20:56,352 INFO [HiveServer2-Background-Pool: Thread-424756()]: > ql.Driver (SessionState.java:printInfo(558)) - Launching Job 11 out of 41 > 2015-12-03 04:20:56,354 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with > URI thrift://:9083 > 2015-12-03 04:20:56,360 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, > current connections: 14824 > 2015-12-03 04:20:56,360 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(400)) - Connected to metastore. 
>
> 2015-12-03 04:21:06,355 INFO [HiveServer2-Background-Pool: Thread-424756()]: ql.Driver (SessionState.java:printInfo(558)) - Launching Job 12 out of 41
> 2015-12-03 04:21:06,357 INFO [Thread-405756()]: hive.metastore (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with URI thrift://:9083
> 2015-12-03 04:21:06,362 INFO [Thread-405756()]: hive.metastore (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, current connections: 14825
> 2015-12-03 04:21:06,362 INFO [Thread-405756()]: hive.metastore (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ...
> 2015-12-03 04:21:08,357 INFO [HiveServer2-Background-Pool: Thread-424756()]: ql.Driver (SessionState.java:printInfo(558)) - Launching Job 13 out of 41
> 2015-12-03 04:21:08,360 INFO [Thread-405782()]: hive.metastore (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with URI thrift://:9083
> 2015-12-03 04:21:08,364 INFO [Thread-405782()]: hive.metastore (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, current connections: 14826
> 2015-12-03 04:21:08,365 INFO [Thread-405782()]: hive.metastore (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ...
> {code}
> The TaskRunner thread starts a new SessionState each time, which creates a new connection to the HMS (via Hive.get(conf).getMSC()) that is never closed. Even SessionState.close(), currently not being called by the TaskRunner thread, does not close this connection.
> Attaching an anonymized log snippet where the number of HMS connections reaches north of 25,000.
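The fix direction implied by the description — make whatever opens the metastore client responsible for closing it — can be sketched with a simplified stand-in for the client. None of the names below are Hive's real API; this only illustrates the leaky versus fixed lifecycle.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for a metastore client; NOT Hive's actual API.
class FakeMetastoreClient implements AutoCloseable {
    static final AtomicInteger OPEN = new AtomicInteger();
    FakeMetastoreClient() { OPEN.incrementAndGet(); }
    @Override public void close() { OPEN.decrementAndGet(); }
}

class TaskRunnerSketch {
    // Leaky pattern: open a client per task and never close it.
    static void runTaskLeaky() {
        FakeMetastoreClient client = new FakeMetastoreClient();
        // ... task work ...
    }

    // Fixed pattern: try-with-resources guarantees the connection is released
    // even if the task throws.
    static void runTaskFixed() {
        try (FakeMetastoreClient client = new FakeMetastoreClient()) {
            // ... task work ...
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) runTaskLeaky();
        System.out.println(FakeMetastoreClient.OPEN.get()); // 3 connections leaked
        FakeMetastoreClient.OPEN.set(0);
        for (int i = 0; i < 3; i++) runTaskFixed();
        System.out.println(FakeMetastoreClient.OPEN.get()); // 0
    }
}
```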
[jira] [Updated] (HIVE-12790) Metastore connection leaks in HiveServer2
[ https://issues.apache.org/jira/browse/HIVE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-12790: Fix Version/s: (was: 2.0.0) > Metastore connection leaks in HiveServer2 > - > > Key: HIVE-12790 > URL: https://issues.apache.org/jira/browse/HIVE-12790 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 1.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-12790.2.patch, HIVE-12790.3.patch, > HIVE-12790.patch, snippedLog.txt > > > HiveServer2 keeps opening new connections to HMS each time it launches a > task. These connections do not appear to be closed when the task completes > thus causing a HMS connection leak. "lsof" for the HS2 process shows > connections to port 9083. > {code} > 2015-12-03 04:20:56,352 INFO [HiveServer2-Background-Pool: Thread-424756()]: > ql.Driver (SessionState.java:printInfo(558)) - Launching Job 11 out of 41 > 2015-12-03 04:20:56,354 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with > URI thrift://:9083 > 2015-12-03 04:20:56,360 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, > current connections: 14824 > 2015-12-03 04:20:56,360 INFO [Thread-405728()]: hive.metastore > (HiveMetaStoreClient.java:open(400)) - Connected to metastore. 
> > 2015-12-03 04:21:06,355 INFO [HiveServer2-Background-Pool: Thread-424756()]: > ql.Driver (SessionState.java:printInfo(558)) - Launching Job 12 out of 41 > 2015-12-03 04:21:06,357 INFO [Thread-405756()]: hive.metastore > (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with > URI thrift://:9083 > 2015-12-03 04:21:06,362 INFO [Thread-405756()]: hive.metastore > (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, > current connections: 14825 > 2015-12-03 04:21:06,362 INFO [Thread-405756()]: hive.metastore > (HiveMetaStoreClient.java:open(400)) - Connected to metastore. > ... > 2015-12-03 04:21:08,357 INFO [HiveServer2-Background-Pool: Thread-424756()]: > ql.Driver (SessionState.java:printInfo(558)) - Launching Job 13 out of 41 > 2015-12-03 04:21:08,360 INFO [Thread-405782()]: hive.metastore > (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with > URI thrift://:9083 > 2015-12-03 04:21:08,364 INFO [Thread-405782()]: hive.metastore > (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, > current connections: 14826 > 2015-12-03 04:21:08,365 INFO [Thread-405782()]: hive.metastore > (HiveMetaStoreClient.java:open(400)) - Connected to metastore. > ... > {code} > The TaskRunner thread starts a new SessionState each time, which creates a > new connection to the HMS (via Hive.get(conf).getMSC()) that is never closed. > Even SessionState.close(), currently not being called by the TaskRunner > thread, does not close this connection. > Attaching a anonymized log snippet where the number of HMS connections > reaches north of 25000+ connections. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9862) Vectorized execution corrupts timestamp values
[ https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137492#comment-15137492 ] Jason Dere commented on HIVE-9862: -- +1 > Vectorized execution corrupts timestamp values > -- > > Key: HIVE-9862 > URL: https://issues.apache.org/jira/browse/HIVE-9862 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 1.0.0 >Reporter: Nathan Howell >Assignee: Matt McCline > Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, > HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, > HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch, HIVE-9862.09.patch > > > Timestamps in the future (year 2250?) and before ~1700 are silently corrupted > in vectorized execution mode. Simple repro: > {code} > hive> DROP TABLE IF EXISTS test; > hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC; > hive> INSERT INTO TABLE test VALUES ('-12-31 23:59:59'); > hive> SET hive.vectorized.execution.enabled = false; > hive> SELECT MAX(ts) FROM test; > -12-31 23:59:59 > hive> SET hive.vectorized.execution.enabled = true; > hive> SELECT MAX(ts) FROM test; > 1816-03-30 05:56:07.066277376 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12441) Driver.acquireLocksAndOpenTxn() should only call recordValidTxns() when needed
[ https://issues.apache.org/jira/browse/HIVE-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-12441: - Attachment: HIVE-12441.2.patch patch 2 for test > Driver.acquireLocksAndOpenTxn() should only call recordValidTxns() when needed > -- > > Key: HIVE-12441 > URL: https://issues.apache.org/jira/browse/HIVE-12441 > Project: Hive > Issue Type: Bug > Components: CLI, Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng > Attachments: HIVE-12441.1.patch, HIVE-12441.2.patch > > > recordValidTxns() is only needed if ACID tables are part of the query. > Otherwise it's just overhead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
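The proposed change is essentially a guard around an expensive call: take the transaction snapshot only when the query actually touches ACID tables. A generic sketch of the idea, with stand-in names rather than the real Driver internals:

```java
import java.util.List;

// Sketch of guarding an expensive call behind a cheap predicate; the method
// name matches the issue, but the surrounding structure is invented.
class SnapshotGuard {
    static int snapshotCalls = 0; // counts the expensive call, for illustration

    // Stand-in for Driver.recordValidTxns(): an expensive metastore round trip.
    static void recordValidTxns() { snapshotCalls++; }

    // Only pay for the snapshot when at least one table in the query is ACID.
    static void acquireLocksAndOpenTxn(List<Boolean> tableIsAcid) {
        if (tableIsAcid.stream().anyMatch(b -> b)) {
            recordValidTxns();
        }
        // ... acquire locks, open txn, etc. ...
    }
}
```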
[jira] [Commented] (HIVE-13009) Fix add_jar_file.q on Windows
[ https://issues.apache.org/jira/browse/HIVE-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137470#comment-15137470 ] Jason Dere commented on HIVE-13009: --- Test failures not related, as patch only changes a q-file test > Fix add_jar_file.q on Windows > - > > Key: HIVE-13009 > URL: https://issues.apache.org/jira/browse/HIVE-13009 > Project: Hive > Issue Type: Bug > Components: Tests, Windows >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-13009.1.patch > > > Forward slashes in the local file path don't work for Windows. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12999) Tez: Vertex creation is slowed down when NN throttles IPCs
[ https://issues.apache.org/jira/browse/HIVE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137454#comment-15137454 ] Sergey Shelukhin commented on HIVE-12999: - Test failures look unrelated. > Tez: Vertex creation is slowed down when NN throttles IPCs > -- > > Key: HIVE-12999 > URL: https://issues.apache.org/jira/browse/HIVE-12999 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 1.2.0, 1.3.0, 2.0.0, 2.1.0 >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-12999.1.patch > > > Tez vertex building has a decidedly slow path in the code, which is not > related to the DAG plan at all. > The total number of RPC calls is not related to the total number of > operators, due to a bug in the DagUtils inner loops. > {code} > at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877) > at > org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3207) > at > org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3170) > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:548) > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:1151) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.build(TezTask.java:388) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:175) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
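A loop-shaped bug like the one in the stack above — issuing the same NameNode RPC once per inner-loop iteration instead of once per distinct directory — is typically fixed by deduplicating before calling out. A generic sketch, not the actual DagUtils code:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: make the number of mkdirs RPCs track distinct directories,
// not the number of operators that happen to share them.
class TmpDirCreator {
    static int rpcCalls = 0; // stand-in for FileSystem.mkdirs() round trips

    static void mkdirs(String path) { rpcCalls++; }

    static void createTmpDirs(List<String> tmpPaths) {
        Set<String> created = new HashSet<>();
        for (String p : tmpPaths) {
            if (created.add(p)) { // skip paths we already created
                mkdirs(p);
            }
        }
    }
}
```

With this shape, an NN that throttles IPCs sees one call per directory regardless of how many vertices or operators reference it.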
[jira] [Commented] (HIVE-12987) Add metrics for HS2 active users and SQL operations
[ https://issues.apache.org/jira/browse/HIVE-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137420#comment-15137420 ] Hive QA commented on HIVE-12987: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786695/HIVE-12987.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 10053 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_stats org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_correlationoptimizer1 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_delete_where_non_partitioned org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_join_nullsafe org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_nonmr_fetch_threshold org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge5 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_ppd_basic org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_dynpart_hashjoin_1 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_decimal_5 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_decimal_6 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_interval_1 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorization_2 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorization_9 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorization_limit org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_timestamp_ints_casts org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestSSL.testSSLVersion {noformat} Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6907/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6907/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6907/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 18 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12786695 - PreCommit-HIVE-TRUNK-Build > Add metrics for HS2 active users and SQL operations > --- > > Key: HIVE-12987 > URL: https://issues.apache.org/jira/browse/HIVE-12987 > Project: Hive > Issue Type: Task >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Attachments: HIVE-12987.1.patch, HIVE-12987.2.patch, > HIVE-12987.2.patch, HIVE-12987.3.patch, HIVE-12987.3.patch > > > HIVE-12271 added metrics for all HS2 operations. Sometimes, users are also > interested in metrics just for SQL operations. > It is useful to track active user count as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12274) Increase width of columns used for general configuration in the metastore.
[ https://issues.apache.org/jira/browse/HIVE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137422#comment-15137422 ]

Sergey Shelukhin commented on HIVE-12274:
The standard advice for this case is to change the struct to Hive columns; this case should already be fixed for external schemas. It is a valid concern though... cc [~thejas]

> Increase width of columns used for general configuration in the metastore.
>
>            Key: HIVE-12274
>            URL: https://issues.apache.org/jira/browse/HIVE-12274
>        Project: Hive
>     Issue Type: Improvement
>     Components: Metastore
> Affects Versions: 2.0.0
>       Reporter: Elliot West
>       Assignee: Sushanth Sowmyan
>         Labels: metastore
>    Attachments: HIVE-12274.example.ddl.hql
>
> This issue is very similar in principle to HIVE-1364. We are hitting a limit when processing JSON data that has a large nested schema. The struct definition is truncated when inserted into the metastore database column {{COLUMNS_V2.TYPE_NAME}}, as it is greater than 4000 characters in length.
> Given that the purpose of these columns is to hold very loosely defined configuration values, it seems rather limiting to impose such a relatively low length bound. One can imagine that valid use cases will arise where reasonable parameter/property values exceed the current limit. Can these columns not use CLOB-like types, as used for example by {{TBLS.VIEW_EXPANDED_TEXT}}? It would seem that suitable type equivalents exist for all targeted database platforms:
> * MySQL: {{mediumtext}}
> * Postgres: {{text}}
> * Oracle: {{CLOB}}
> * Derby: {{LONG VARCHAR}}
> I'd suggest that the candidates for type change are:
> * {{COLUMNS_V2.TYPE_NAME}}
> * {{TABLE_PARAMS.PARAM_VALUE}}
> * {{SERDE_PARAMS.PARAM_VALUE}}
> * {{SD_PARAMS.PARAM_VALUE}}
> Finally, will this limitation persist in the work resulting from HIVE-9452?
[jira] [Commented] (HIVE-13015) Update SLF4j version to 1.7.10
[ https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137370#comment-15137370 ]

Gopal V commented on HIVE-13015:
I discovered that the hive-exec jar doesn't include the log4j-slf4j-impl.

> Update SLF4j version to 1.7.10
>
>            Key: HIVE-13015
>            URL: https://issues.apache.org/jira/browse/HIVE-13015
>        Project: Hive
>     Issue Type: Bug
> Affects Versions: 2.1.0
>       Reporter: Prasanth Jayachandran
>       Assignee: Prasanth Jayachandran
>    Attachments: HIVE-13015.1.patch
>
> In some of the recent test runs, we are seeing multiple bindings for SLF4j that cause issues with the LOG4j2 logger.
> {code}
> SLF4J: Found binding in [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> {code}
> We have added explicit exclusions for slf4j-log4j12, but some library is pulling it in transitively and it's getting packaged with the hive libs. Also, hive currently uses version 1.7.5 for slf4j. We should add dependency convergence for slf4j and also remove packaging of slf4j-log4j12.*.jar.
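The transitive pull described above is normally cut off with an explicit Maven exclusion on whichever dependency drags slf4j-log4j12 in. The SLF4J coordinates below are real; the enclosing dependency is a placeholder — the actual offender would have to be identified with mvn dependency:tree first.

```xml
<dependency>
  <!-- placeholder coordinates: whichever dependency pulls slf4j-log4j12 in -->
  <groupId>some.group</groupId>
  <artifactId>some-artifact</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Dependency convergence on the SLF4J version itself would additionally be handled in dependencyManagement (or enforced with the maven-enforcer-plugin), so that 1.7.5 and 1.7.10 cannot both end up on the classpath.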
[jira] [Updated] (HIVE-9534) incorrect result set for query that projects a windowed aggregate
[ https://issues.apache.org/jira/browse/HIVE-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aihua Xu updated HIVE-9534:
    Attachment: HIVE-9534.3.patch

Attached patch-3: other UDAFs like RANK() are affected as well.

> incorrect result set for query that projects a windowed aggregate
>
>            Key: HIVE-9534
>            URL: https://issues.apache.org/jira/browse/HIVE-9534
>        Project: Hive
>     Issue Type: Bug
>     Components: PTF-Windowing
>       Reporter: N Campbell
>       Assignee: Aihua Xu
>    Attachments: HIVE-9534.1.patch, HIVE-9534.2.patch, HIVE-9534.3.patch
>
> The result set returned by Hive has one row instead of 5.
> {code}
> select avg(distinct tsint.csint) over () from tsint
> create table if not exists TSINT (RNUM int , CSINT smallint)
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n'
> STORED AS TEXTFILE;
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10
> {code}
[jira] [Updated] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST
[ https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-12994: --- Attachment: HIVE-12994.02.patch Regenerated .q files; fixed an issue with ReduceOpDedup and another issue with the metastore upgrade script. Next QA should come clean. [~jpullokkaran], could you take a look? Thanks > Implement support for NULLS FIRST/NULLS LAST > > > Key: HIVE-12994 > URL: https://issues.apache.org/jira/browse/HIVE-12994 > Project: Hive > Issue Type: New Feature > Components: CBO, Metastore, Parser, Serializers/Deserializers >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-12994.01.patch, HIVE-12994.02.patch, > HIVE-12994.patch > > > From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to > determine whether nulls appear before or after non-null data values when the > ORDER BY clause is used. > SQL standard does not specify the behavior by default. Currently in Hive, > null values sort as if lower than any non-null value; that is, NULLS FIRST is > the default for ASC order, and NULLS LAST for DESC order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
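The default Hive behavior described above — nulls sorting as lower than any non-null value, hence NULLS FIRST for ASC and NULLS LAST for DESC — maps directly onto the standard null-handling comparators. This sketch only illustrates the semantics the feature makes configurable, not Hive's implementation:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Illustrates NULLS FIRST / NULLS LAST ordering semantics using
// java.util.Comparator; not Hive code.
class NullOrdering {
    public static void main(String[] args) {
        List<Integer> vals = Arrays.asList(3, null, 1);

        // ASC with nulls first (Hive's current default for ascending order)
        vals.sort(Comparator.nullsFirst(Comparator.<Integer>naturalOrder()));
        System.out.println(vals); // [null, 1, 3]

        // DESC with nulls last (Hive's current default for descending order)
        vals.sort(Comparator.nullsLast(Comparator.<Integer>reverseOrder()));
        System.out.println(vals); // [3, 1, null]
    }
}
```

With the NULLS FIRST/NULLS LAST syntax, the user would be able to pick either combination explicitly instead of relying on these defaults.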
[jira] [Commented] (HIVE-10632) Make sure TXN_COMPONENTS gets cleaned up if table is dropped before compaction.
[ https://issues.apache.org/jira/browse/HIVE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137204#comment-15137204 ] Hive QA commented on HIVE-10632: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12786654/HIVE-10632.2.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 84 failed/errored test(s), 9795 tests executed *Failed tests:* {noformat} TestCLIAuthzSessionContext - did not produce a TEST-*.xml file TestColumnAccess - did not produce a TEST-*.xml file TestE2EScenarios - did not produce a TEST-*.xml file TestHBaseMetastoreSql - did not produce a TEST-*.xml file TestHCatLoader - did not produce a TEST-*.xml file TestHCatLoaderComplexSchema - did not produce a TEST-*.xml file TestHCatLoaderEncryption - did not produce a TEST-*.xml file TestHiveAuthorizerShowFilters - did not produce a TEST-*.xml file TestHooks - did not produce a TEST-*.xml file TestMetastoreVersion - did not produce a TEST-*.xml file TestOperators - did not produce a TEST-*.xml file TestPermsGrp - did not produce a TEST-*.xml file TestReadEntityDirect - did not produce a TEST-*.xml file TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more - did not produce a TEST-*.xml file TestSymlinkTextInputFormat - did not produce a TEST-*.xml file TestViewEntity - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lock1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lock2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lock3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lock4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_set_metaconf org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_delete_not_acid 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_insert_into1 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_insert_into2 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_insert_into3 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_insert_into4 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg1 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg2 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg3 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg4 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg5 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg_query_tbl_in_locked_db org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg_try_db_lock_conflict org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg_try_drop_locked_db org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lockneg_try_lock_db_in_use org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_update_not_acid org.apache.hadoop.hive.metastore.TestFilterHooks.testDefaultFilter org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testConnections org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropTable org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropView org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess 
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccess org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccessWithReadOnly org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.droppedPartition org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.droppedTable org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.droppedPartition org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.droppedTable org.apache.hadoop.hive.ql.txn.compactor.TestWorker.droppedPartition org.apache.hadoop.hive.ql.txn.compactor.TestWorker.droppedTable org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.droppedPartition org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.droppedTable org.apache.hive.beeline.TestBeeLineWithArgs.org.apache.hive.beeline.TestBeeLineWithArgs org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.testPigFilterProjection org.apache.hive.hcatalog.h
[jira] [Updated] (HIVE-12592) Expose connection pool tuning props in TxnHandler
[ https://issues.apache.org/jira/browse/HIVE-12592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-12592: -- Assignee: Chetna Chaudhari (was: Eugene Koifman) > Expose connection pool tuning props in TxnHandler > - > > Key: HIVE-12592 > URL: https://issues.apache.org/jira/browse/HIVE-12592 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Chetna Chaudhari > > BoneCP allows various pool tuning options like connection timeout, num > connections, etc > There should be a config based way to set these -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12592) Expose connection pool tuning props in TxnHandler
[ https://issues.apache.org/jira/browse/HIVE-12592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137163#comment-15137163 ]

Eugene Koifman commented on HIVE-12592:
[~chetna] please go ahead. BoneCP supports supplying config via bonecp-config.xml. If you could add a way for the user to specify the location of this file, that would be great.

> Expose connection pool tuning props in TxnHandler
>
>            Key: HIVE-12592
>            URL: https://issues.apache.org/jira/browse/HIVE-12592
>        Project: Hive
>     Issue Type: Improvement
>     Components: Transactions
> Affects Versions: 1.0.0
>       Reporter: Eugene Koifman
>       Assignee: Eugene Koifman
>
> BoneCP allows various pool tuning options like connection timeout, number of connections, etc.
> There should be a config-based way to set these.
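A "config-based way to set these", as the issue asks for, usually amounts to reading named properties with sane defaults before building the pool. A generic sketch with invented property names and default values — the actual patch may well choose different keys:

```java
import java.util.Properties;

// Sketch of reading pool tuning knobs from configuration with defaults.
// The property keys and defaults here are invented for illustration;
// they are not real Hive configuration names.
class PoolTuning {
    static int maxConnections(Properties conf) {
        return Integer.parseInt(conf.getProperty("txn.pool.max.connections", "10"));
    }

    static long connectionTimeoutMs(Properties conf) {
        return Long.parseLong(conf.getProperty("txn.pool.timeout.ms", "30000"));
    }
}
```

The same pattern extends naturally to pointing at an external bonecp-config.xml: read its location from a property and hand it to the pool builder if present.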
[jira] [Updated] (HIVE-12592) Expose connection pool tuning props in TxnHandler
[ https://issues.apache.org/jira/browse/HIVE-12592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-12592: -- Target Version/s: 1.3.0, 2.1.0 > Expose connection pool tuning props in TxnHandler > - > > Key: HIVE-12592 > URL: https://issues.apache.org/jira/browse/HIVE-12592 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Chetna Chaudhari > > BoneCP allows various pool tuning options like connection timeout, num > connections, etc > There should be a config based way to set these -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10115) HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and Delegation token(DIGEST) when alternate authentication is enabled
[ https://issues.apache.org/jira/browse/HIVE-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137161#comment-15137161 ]

Sergio Peña commented on HIVE-10115:
Thanks [~leftylev]. The wiki contains the information needed. Our current documentation did not specify that LDAP+Kerberos were incompatible, so this patch fixes that bug.

> HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and Delegation token(DIGEST) when alternate authentication is enabled
>
>            Key: HIVE-10115
>            URL: https://issues.apache.org/jira/browse/HIVE-10115
>        Project: Hive
>     Issue Type: Improvement
>     Components: Authentication
> Affects Versions: 1.1.0
>       Reporter: Mubashir Kazia
>       Assignee: Mubashir Kazia
>         Labels: patch
>        Fix For: 1.3.0, 2.1.0
>
>    Attachments: HIVE-10115.0.patch, HIVE-10115.2.patch
>
> In a Kerberized cluster, when alternate authentication is enabled on HS2, it should also accept Kerberos authentication. The reason this is important is that when we enable LDAP authentication, HS2 stops accepting delegation token authentication, so we are forced to enter usernames and passwords in the Oozie configuration.
> The whole idea of SASL is that multiple authentication mechanisms can be offered. If we disable Kerberos (GSSAPI) and delegation token (DIGEST) authentication when we enable LDAP authentication, this defeats the purpose of SASL.
[jira] [Commented] (HIVE-9545) Build FAILURE with IBM JVM
[ https://issues.apache.org/jira/browse/HIVE-9545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137134#comment-15137134 ] Greg Senia commented on HIVE-9545: -- Is there any way we can get these integrated into Hive? If there are issues getting them integrated, please let me know and I will have a discussion with some folks who could hopefully influence getting these IBM JDK related fixes for Hadoop into trunk. > Build FAILURE with IBM JVM > --- > > Key: HIVE-9545 > URL: https://issues.apache.org/jira/browse/HIVE-9545 > Project: Hive > Issue Type: Bug >Affects Versions: 0.14.0 > Environment: mvn -version > Apache Maven 3.2.3 (33f8c3e1027c3ddde99d3cdebad2656a31e8fdf4; > 2014-08-11T22:58:10+02:00) > Maven home: /opt/apache-maven-3.2.3 > Java version: 1.7.0, vendor: IBM Corporation > Java home: /usr/lib/jvm/ibm-java-x86_64-71/jre > Default locale: en_US, platform encoding: ISO-8859-1 > OS name: "linux", version: "3.10.0-123.4.4.el7.x86_64", arch: "amd64", > family: "unix" >Reporter: pascal oliva >Assignee: Navis > Attachments: HIVE-9545.1.patch.txt > > > NO PRECOMMIT TESTS > With the use of IBM JVM environment : > [root@dorado-vm2 hive]# java -version > java version "1.7.0" > Java(TM) SE Runtime Environment (build pxa6470_27sr2-20141026_01(SR2)) > IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References > 20141017_217728 (JIT enabled, AOT enabled). > The build failed on > [INFO] Hive Query Language FAILURE [ 50.053 > s] > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project hive-exec: Compilation failure: Compilation failure: > [ERROR] > /home/pascal/hive0.14/hive/ql/src/java/org/apache/hadoop/hive/ql/debug/Utils.java:[29,26] > package com.sun.management does not exist. > HOWTO : > #git clone -b branch-0.14 https://github.com/apache/hive.git > #cd hive > #mvn install -DskipTests -Phadoop-2
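The compile error comes from a dependency on the Sun/Oracle-only com.sun.management package, which the IBM J9 JDK does not ship. A vendor-neutral sketch of the general idea (not the attached patch, which may take a different approach) is to go through the standard java.lang.management API, which every compliant JVM provides:

```java
import java.lang.management.ManagementFactory;

// Sketch: query memory diagnostics through the portable JMX API instead
// of the Sun-only com.sun.management package. HeapInfo is an illustrative
// class name, not code from the Hive source tree.
public class HeapInfo {
    // Currently used heap size in bytes, via the standard MemoryMXBean.
    public static long usedHeapBytes() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        System.out.println("used heap bytes: " + usedHeapBytes());
    }
}
```

Functionality that only exists in com.sun.management (such as triggering heap dumps) would still need a reflective or conditional fallback on non-HotSpot JVMs.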
[jira] [Commented] (HIVE-1608) use sequencefile as the default for storing intermediate results
[ https://issues.apache.org/jira/browse/HIVE-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137128#comment-15137128 ] Chaoyu Tang commented on HIVE-1608: --- [~brocknoland], [~ashutoshc] could you review the patch at your earliest convenience, given that so many test files have been changed? Thanks in advance. The change is straightforward: 1. code: change the hive.query.result.fileformat default to SequenceFile, in order to support newline characters in columns by default. 2. test output files: change all input and output formats of FileSinkOperator to SequenceFileInputFormat and SequenceFileOutputFormat. Tests: 1. Some manual tests, which also include "insert overwrite [local] directory" (actually not affected by this change) 2. Precommit tests. > use sequencefile as the default for storing intermediate results > > > Key: HIVE-1608 > URL: https://issues.apache.org/jira/browse/HIVE-1608 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 0.7.0 >Reporter: Namit Jain >Assignee: Brock Noland > Attachments: HIVE-1608.1.patch, HIVE-1608.2.patch, HIVE-1608.3.patch, > HIVE-1608.patch > > > The only argument for having a text file for storing intermediate results > seems to be better debuggability. > But, tailing a sequence file is possible, and it should be more space > efficient
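For reference, the behavior the patch makes the default can also be tried per-session today; a minimal sketch using the property named in the comment:

```sql
-- Session-level sketch: hive.query.result.fileformat is the property this
-- patch changes; SequenceFile preserves embedded newline characters in
-- result columns, unlike the TextFile default.
SET hive.query.result.fileformat=SequenceFile;
```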
[jira] [Updated] (HIVE-1608) use sequencefile as the default for storing intermediate results
[ https://issues.apache.org/jira/browse/HIVE-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chaoyu Tang updated HIVE-1608: -- Attachment: HIVE-1608.3.patch Attached a new patch for the failed tests. The failures of testAddJarDataNucleusUnCaching, TestPigHBaseStorageHandler and folder_predicate.q seem unrelated to this patch and could not be reproduced on my local machine.
[jira] [Commented] (HIVE-12274) Increase width of columns used for general configuration in the metastore.
[ https://issues.apache.org/jira/browse/HIVE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137025#comment-15137025 ] Elliot West commented on HIVE-12274: Also note, the supplied example will fail whether or not the SerDe is specified; therefore I do not believe the SerDe implementation is a factor. > Increase width of columns used for general configuration in the metastore. > -- > > Key: HIVE-12274 > URL: https://issues.apache.org/jira/browse/HIVE-12274 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 2.0.0 >Reporter: Elliot West >Assignee: Sushanth Sowmyan > Labels: metastore > Attachments: HIVE-12274.example.ddl.hql > > > This issue is very similar in principle to HIVE-1364. We are hitting a limit > when processing JSON data that has a large nested schema. The struct > definition is truncated when inserted into the metastore database column > {{COLUMNS_V2.TYPE_NAME}} as it is greater than 4000 characters in length. > Given that the purpose of these columns is to hold very loosely defined > configuration values it seems rather limiting to impose such a relatively low > length bound. One can imagine that valid use cases will arise where > reasonable parameter/property values exceed the current limit. Can these > columns not use CLOB-like types as, for example, used by > {{TBLS.VIEW_EXPANDED_TEXT}}? It would seem that suitable type equivalents > exist for all targeted database platforms: > * MySQL: {{mediumtext}} > * Postgres: {{text}} > * Oracle: {{CLOB}} > * Derby: {{LONG VARCHAR}} > I'd suggest that the candidates for type change are: > * {{COLUMNS_V2.TYPE_NAME}} > * {{TABLE_PARAMS.PARAM_VALUE}} > * {{SERDE_PARAMS.PARAM_VALUE}} > * {{SD_PARAMS.PARAM_VALUE}} > Finally, will this limitation persist in the work resulting from HIVE-9452?
[jira] [Commented] (HIVE-12274) Increase width of columns used for general configuration in the metastore.
[ https://issues.apache.org/jira/browse/HIVE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137023#comment-15137023 ] Elliot West commented on HIVE-12274: Note that although the schema given in the example is contrived, we do see real-world schemas that easily exceed the upper length bound. I'm not in a position to share them publicly, however.
[jira] [Updated] (HIVE-12274) Increase width of columns used for general configuration in the metastore.
[ https://issues.apache.org/jira/browse/HIVE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliot West updated HIVE-12274: --- Attachment: HIVE-12274.example.ddl.hql
[jira] [Commented] (HIVE-12274) Increase width of columns used for general configuration in the metastore.
[ https://issues.apache.org/jira/browse/HIVE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137015#comment-15137015 ] Elliot West commented on HIVE-12274: Hi, I have attached some sample code to reproduce. The snippet below shows the error we see. Note that the ellipses are my own addition for brevity: {code} hive> CREATE EXTERNAL TABLE IF NOT EXISTS mytests.mybigtable > ( > bigstruct STRUCT < > myField1 : STRING, ... > myField1000 : STRING > > > ) > PARTITIONED BY (yymmdd STRING) > ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde' > STORED AS SEQUENCEFILE > LOCATION '/path-to/mybigtable/'; OK Time taken: 4.86 seconds hive> desc mytests.mybigtable; FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Error: type expected at the position 3996 of 'struct {code}
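A sketch of the corresponding schema change on a MySQL-backed metastore, assuming the {{mediumtext}} mapping and the candidate columns listed in the issue description (illustrative only, not a shipped upgrade script):

```sql
-- Hypothetical MySQL metastore upgrade sketch; the column and type
-- choices come from the issue description, not from a released script.
ALTER TABLE COLUMNS_V2   MODIFY TYPE_NAME   MEDIUMTEXT;
ALTER TABLE TABLE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
ALTER TABLE SERDE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
ALTER TABLE SD_PARAMS    MODIFY PARAM_VALUE MEDIUMTEXT;
```

Equivalent statements would be needed for the Postgres ({{text}}), Oracle ({{CLOB}}), and Derby ({{LONG VARCHAR}}) scripts.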
[jira] [Updated] (HIVE-11349) Update HBase metastore hbase version to 1.1.1
[ https://issues.apache.org/jira/browse/HIVE-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Carol updated HIVE-11349: Component/s: HBase Metastore > Update HBase metastore hbase version to 1.1.1 > - > > Key: HIVE-11349 > URL: https://issues.apache.org/jira/browse/HIVE-11349 > Project: Hive > Issue Type: Task > Components: HBase Metastore, Metastore >Affects Versions: hbase-metastore-branch >Reporter: Alan Gates >Assignee: Alan Gates > Fix For: hbase-metastore-branch > > Attachments: HIVE-11349.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12167) HBase metastore causes massive number of ZK exceptions in MiniTez tests
[ https://issues.apache.org/jira/browse/HIVE-12167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Carol updated HIVE-12167: Component/s: HBase Metastore > HBase metastore causes massive number of ZK exceptions in MiniTez tests > --- > > Key: HIVE-12167 > URL: https://issues.apache.org/jira/browse/HIVE-12167 > Project: Hive > Issue Type: Bug > Components: HBase Metastore >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-12167.patch > > > I ran some random test (vectorization_10) with HBase metastore for unrelated > reason, and I see large number of exceptions in hive.log > {noformat} > $ grep -c "ConnectionLoss" hive.log > 52 > $ grep -c "Connection refused" hive.log > 1014 > {noformat} > These log lines' count has increased by ~33% since merging llap branch, but > it is still high before that (39/~700) for the same test). These lines are > not present if I disable HBase metastore. > The exceptions are: > {noformat} > 2015-10-13T17:51:06,232 WARN [Thread-359-SendThread(localhost:2181)]: > zookeeper.ClientCnxn (ClientCnxn.java:run(1102)) - Session 0x0 for server > null, unexpected error, closing socket connection and attempting reconnect > java.net.ConnectException: Connection refused > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > ~[?:1.8.0_45] > at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) > ~[?:1.8.0_45] > at > org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) > ~[zookeeper-3.4.6.jar:3.4.6-1569965] > at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) > [zookeeper-3.4.6.jar:3.4.6-1569965] > {noformat} > that is retried for some seconds and then > {noformat} > 2015-10-13T17:51:22,867 WARN [Thread-359]: zookeeper.ZKUtil > (ZKUtil.java:checkExists(544)) - hconnection-0x1da6ef180x0, > quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode > (/hbase/hbaseid) > 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode > = ConnectionLoss for /hbase/hbaseid > at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) > ~[zookeeper-3.4.6.jar:3.4.6-1569965] > at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) > ~[zookeeper-3.4.6.jar:3.4.6-1569965] > at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) > ~[zookeeper-3.4.6.jar:3.4.6-1569965] > at > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222) > ~[hbase-client-1.1.1.jar:1.1.1] > at > org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541) > [hbase-client-1.1.1.jar:1.1.1] > at > org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) > [hbase-client-1.1.1.jar:1.1.1] > at > org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105) > [hbase-client-1.1.1.jar:1.1.1] > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879) > [hbase-client-1.1.1.jar:1.1.1] > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:635) > [hbase-client-1.1.1.jar:1.1.1] > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) ~[?:1.8.0_45] > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > [?:1.8.0_45] > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > [?:1.8.0_45] > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > [?:1.8.0_45] > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) > [hbase-client-1.1.1.jar:1.1.1] > at > org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:420) > [hbase-client-1.1.1.jar:1.1.1] > at > 
org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:329) > [hbase-client-1.1.1.jar:1.1.1] > at > org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144) > [hbase-client-1.1.1.jar:1.1.1] > at > org.apache.hadoop.hive.metastore.hbase.VanillaHBaseConnection.connect(VanillaHBaseConnection.java:56) > [hive-metastore-2.0.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.metastore.hbase.HBaseReadWrite.(HBaseReadWrite.java:227) > [hive-metastore-2.0.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.metastore.hbase.HBaseReadWrite.(HBaseReadWrite.java:83) > [hive-metastore-2.0.0-SNAPSHOT.jar:?] > at > org.apache.h
[jira] [Updated] (HIVE-11379) Bump Tephra version to 0.6.0
[ https://issues.apache.org/jira/browse/HIVE-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Carol updated HIVE-11379: Component/s: HBase Metastore > Bump Tephra version to 0.6.0 > > > Key: HIVE-11379 > URL: https://issues.apache.org/jira/browse/HIVE-11379 > Project: Hive > Issue Type: Task > Components: HBase Metastore, Metastore >Affects Versions: hbase-metastore-branch >Reporter: Alan Gates >Assignee: Alan Gates > Fix For: hbase-metastore-branch > > Attachments: HIVE-11379.patch > > > HIVE-11349 (which moved the HBase version to 1.1.1) moved Tephra support to > 0.5.1-SNAPSHOT because that was the only thing that supported HBase 1.0. > Since Tephra has now released a 0.6 that supports HBase 1.0 we should move to > it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)