[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874707#comment-16874707 ] Hive QA commented on HIVE-21225: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973126/HIVE-21225.5.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17785/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17785/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17785/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12973126/HIVE-21225.5.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12973126 - PreCommit-HIVE-Build > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
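To make the idea concrete, here is a minimal, hypothetical sketch (not the attached patch) of taking one recursive listing per partition and answering later file probes from that in-memory snapshot; the DirSnapshot class and containsFile() helper are invented for illustration, while FileSystem.listFiles(path, true) is the standard Hadoop API for a recursive listing.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

/** Hypothetical helper: one recursive listing of the partition directory,
 *  reused for all subsequent isRawFormat()/isValidBase()-style probes. */
public class DirSnapshot {
  private final List<LocatedFileStatus> files = new ArrayList<>();

  public DirSnapshot(FileSystem fs, Path partitionDir) throws IOException {
    // Single recursive listing instead of repeated listStatus()/getFileStatus() calls.
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(partitionDir, true);
    while (it.hasNext()) {
      files.add(it.next());
    }
  }

  /** Answered purely from the cached snapshot; no NameNode/ObjectStore round trip. */
  public boolean containsFile(Path dir, String fileName) {
    for (LocatedFileStatus f : files) {
      if (f.getPath().getParent().equals(dir) && f.getPath().getName().equals(fileName)) {
        return true;
      }
    }
    return false;
  }
}
{code}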
[jira] [Commented] (HIVE-21867) Sort semijoin conditions to accelerate query processing
[ https://issues.apache.org/jira/browse/HIVE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874706#comment-16874706 ] Hive QA commented on HIVE-21867: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973124/HIVE-21867.04.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 16357 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query65] (batchId=287) org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testAddPartitionLocks (batchId=340) org.apache.hive.jdbc.TestSchedulerQueue.testQueueMappingCheckDisabled (batchId=280) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17784/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17784/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17784/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973124 - PreCommit-HIVE-Build > Sort semijoin conditions to accelerate query processing > --- > > Key: HIVE-21867 > URL: https://issues.apache.org/jira/browse/HIVE-21867 > Project: Hive > Issue Type: New Feature > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, > HIVE-21867.04.patch, HIVE-21867.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > The problem was tackled for CBO in HIVE-21857. Semijoin filters are > introduced later in the planning phase. Follow similar approach to sort them, > trying to accelerate filter evaluation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21867) Sort semijoin conditions to accelerate query processing
[ https://issues.apache.org/jira/browse/HIVE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874680#comment-16874680 ] Hive QA commented on HIVE-21867: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 3s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s{color} | {color:red} ql: The patch generated 1 new + 155 unchanged - 0 fixed = 156 total (was 155) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17784/dev-support/hive-personality.sh | | git revision | master / 57c4217 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17784/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17784/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Sort semijoin conditions to accelerate query processing > --- > > Key: HIVE-21867 > URL: https://issues.apache.org/jira/browse/HIVE-21867 > Project: Hive > Issue Type: New Feature > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, > HIVE-21867.04.patch, HIVE-21867.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > The problem was tackled for CBO in HIVE-21857. Semijoin filters are > introduced later in the planning phase. Follow a similar approach to sort them, > aiming to accelerate filter evaluation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
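Purely as an illustration of the general idea (the actual heuristic belongs to HIVE-21857 and the patches above, which are not reproduced here), conjunctive semijoin filters could be ordered so that cheap, highly selective predicates run first; the SemiJoinFilter type and its cost fields below are invented for this sketch.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class SemiJoinFilterOrdering {
  /** Hypothetical stand-in for a semijoin filter predicate with planner estimates. */
  static class SemiJoinFilter {
    final String name;
    final double selectivity; // estimated fraction of rows that pass (0..1)
    final double evalCost;    // estimated per-row evaluation cost
    SemiJoinFilter(String name, double selectivity, double evalCost) {
      this.name = name;
      this.selectivity = selectivity;
      this.evalCost = evalCost;
    }
  }

  /** Evaluate cheap, selective filters first so later, costlier ones see fewer rows. */
  static List<SemiJoinFilter> sortForEvaluation(List<SemiJoinFilter> filters) {
    List<SemiJoinFilter> ordered = new ArrayList<>(filters);
    ordered.sort(Comparator.comparingDouble(f -> f.selectivity * f.evalCost));
    return ordered;
  }
}
{code}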
[jira] [Commented] (HIVE-21932) IndexOutOfRangeExeption in FileChksumIterator
[ https://issues.apache.org/jira/browse/HIVE-21932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874671#comment-16874671 ] Hive QA commented on HIVE-21932: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973122/HIVE-21932.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16357 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17783/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17783/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17783/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973122 - PreCommit-HIVE-Build > IndexOutOfRangeExeption in FileChksumIterator > - > > Key: HIVE-21932 > URL: https://issues.apache.org/jira/browse/HIVE-21932 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21932.01.patch > > > According to the definition of {{InsertEventRequestData}} in > {{hive_metastore.thrift}}, the {{filesAddedChecksum}} is an optional field. But > the FileChksumIterator does not handle it correctly when a client fires an > insert event which does not have file checksums. The issue is that the > {{InsertEvent}} class initializes the fileChecksums list to an empty ArrayList, so > the following check will never come into play > {noformat} > result = ReplChangeManager.encodeFileUri(files.get(i), chksums != null ? > chksums.get(i) : null, > subDirs != null ? subDirs.get(i) : null); > {noformat} > The chksums check in the line above should include a {{!chksums.isEmpty()}} check as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
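A hedged sketch of the guard the description asks for (not the committed patch): it simply extends the quoted call so an empty list is treated like a null one before it is indexed.

{code:java}
// Hypothetical reworking of the quoted line: skip chksums.get(i) / subDirs.get(i)
// when InsertEvent initialized those lists to empty ArrayLists.
result = ReplChangeManager.encodeFileUri(files.get(i),
    chksums != null && !chksums.isEmpty() ? chksums.get(i) : null,
    subDirs != null && !subDirs.isEmpty() ? subDirs.get(i) : null);
{code}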
[jira] [Commented] (HIVE-21932) IndexOutOfRangeExeption in FileChksumIterator
[ https://issues.apache.org/jira/browse/HIVE-21932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874632#comment-16874632 ] Hive QA commented on HIVE-21932: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 26s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} hcatalog/server-extensions: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17783/dev-support/hive-personality.sh | | git revision | master / 57c4217 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17783/yetus/diff-checkstyle-hcatalog_server-extensions.txt | | modules | C: hcatalog/server-extensions U: hcatalog/server-extensions | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17783/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> IndexOutOfRangeExeption in FileChksumIterator > - > > Key: HIVE-21932 > URL: https://issues.apache.org/jira/browse/HIVE-21932 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21932.01.patch > > > According to the definition of {{InsertEventRequestData}} in > {{hive_metastore.thrift}}, the {{filesAddedChecksum}} is an optional field. But > the FileChksumIterator does not handle it correctly when a client fires an > insert event which does not have file checksums. The issue is that the > {{InsertEvent}} class initializes the fileChecksums list to an empty ArrayList, so > the following check will never come into play > {noformat} > result = ReplChangeManager.encodeFileUri(files.get(i), chksums != null ? > chksums.get(i) : null, > subDirs != null ? subDirs.get(i) : null); > {noformat} > The chksums check in the line above should include a {{!chksums.isEmpty()}} check as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.12.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.2.patch, > HIVE-21637.3.patch, HIVE-21637.4.patch, HIVE-21637.5.patch, > HIVE-21637.6.patch, HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > updated asynchronously, so in an HMS HA setting we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21925) HiveConnection retries should support backoff
[ https://issues.apache.org/jira/browse/HIVE-21925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874617#comment-16874617 ] Hive QA commented on HIVE-21925: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973118/HIVE-21925.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16357 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17782/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17782/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17782/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973118 - PreCommit-HIVE-Build > HiveConnection retries should support backoff > - > > Key: HIVE-21925 > URL: https://issues.apache.org/jira/browse/HIVE-21925 > Project: Hive > Issue Type: Bug > Components: Clients >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21925.01.patch, HIVE-21925.patch > > > Hive JDBC connection supports retries. In http mode, retries always seem to > happen immediately without any backoff. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
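For context, a minimal, self-contained sketch of retrying with exponential backoff and jitter; the constants and the Attempt interface are invented for the example and are not HiveConnection's actual API.

{code:java}
import java.util.concurrent.ThreadLocalRandom;

class BackoffRetry {
  interface Attempt { void run() throws Exception; }

  static void runWithBackoff(Attempt attempt, int maxRetries) throws Exception {
    long delayMs = 100;                                   // assumed initial backoff
    for (int tries = 0; ; tries++) {
      try {
        attempt.run();
        return;
      } catch (Exception e) {
        if (tries >= maxRetries) {
          throw e;                                        // retries exhausted
        }
        // Wait before the next attempt, then double the delay (with a cap);
        // jitter keeps many clients from retrying in lock step.
        Thread.sleep(delayMs + ThreadLocalRandom.current().nextLong(delayMs / 2 + 1));
        delayMs = Math.min(delayMs * 2, 10_000);
      }
    }
  }
}
{code}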
[jira] [Commented] (HIVE-21925) HiveConnection retries should support backoff
[ https://issues.apache.org/jira/browse/HIVE-21925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874599#comment-16874599 ] Hive QA commented on HIVE-21925: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} jdbc in master has 16 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} jdbc: The patch generated 1 new + 49 unchanged - 0 fixed = 50 total (was 49) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17782/dev-support/hive-personality.sh | | git revision | master / 57c4217 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17782/yetus/diff-checkstyle-jdbc.txt | | modules | C: jdbc U: jdbc | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17782/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> HiveConnection retries should support backoff > - > > Key: HIVE-21925 > URL: https://issues.apache.org/jira/browse/HIVE-21925 > Project: Hive > Issue Type: Bug > Components: Clients >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21925.01.patch, HIVE-21925.patch > > > Hive JDBC connection supports retries. In http mode, retries always seem to > happen immediately without any backoff. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874596#comment-16874596 ] Hive QA commented on HIVE-21225: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973126/HIVE-21225.5.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17781/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17781/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17781/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2019-06-28 01:52:29.427 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-17781/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! -d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2019-06-28 01:52:29.431 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 57c4217 HIVE-15177: Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST (Oliver Draese, reviewed by Gopal V) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 57c4217 HIVE-15177: Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST (Oliver Draese, reviewed by Gopal V) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2019-06-28 01:52:30.881 + rm -rf ../yetus_PreCommit-HIVE-Build-17781 + mkdir ../yetus_PreCommit-HIVE-Build-17781 + git gc + cp -R . 
../yetus_PreCommit-HIVE-Build-17781 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17781/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/io/HdfsUtils.java: does not exist in index error: a/ql/src/test/org/apache/hadoop/hive/ql/io/TestAcidUtils.java: does not exist in index Going to apply patch with: git apply -p1 /data/hiveptest/working/scratch/build.patch:10: trailing whitespace. /data/hiveptest/working/scratch/build.patch:46: trailing whitespace. /data/hiveptest/working/scratch/build.patch:94: trailing whitespace. // Okay, we're going to need these originals. /data/hiveptest/working/scratch/build.patch:109: trailing whitespace. /data/hiveptest/working/scratch/build.patch:124: trailing whitespace. } warning: squelched 21 whitespace errors warning: 26 lines add whitespace errors. + [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven protoc-jar: executing: [/tmp/protoc2011211136641073270.exe, --version] protoc-jar: executing: [/tmp/protoc2011211136641073270.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] libprotoc 2.5.0 ANTLR Parser Generator Version 3.5.2 protoc-jar: executing: [/tmp/protoc3865194964592695274.exe, --version] libprotoc 2.5.0 ANTLR Parser Generator Version
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874594#comment-16874594 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973114/HIVE-21637.11.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17780/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17780/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17780/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2019-06-28 01:50:42.707 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-17780/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! -d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2019-06-28 01:50:42.711 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 57c4217 HIVE-15177: Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST (Oliver Draese, reviewed by Gopal V) + git clean -f -d Removing ${project.basedir}/ Removing itests/${project.basedir}/ Removing standalone-metastore/metastore-server/src/gen/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 57c4217 HIVE-15177: Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST (Oliver Draese, reviewed by Gopal V) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2019-06-28 01:50:43.889 + rm -rf ../yetus_PreCommit-HIVE-Build-17780 + mkdir ../yetus_PreCommit-HIVE-Build-17780 + git gc + cp -R . 
../yetus_PreCommit-HIVE-Build-17780 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17780/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/beeline/pom.xml: does not exist in index error: a/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java: does not exist in index error: a/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/NotificationListener.java: does not exist in index error: a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/cache/TestCachedStoreUpdateUsingEvents.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java: does not exist in index error: a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/QueryState.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationPreEventListener.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUpdaterThread.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/MetaStoreCompactorThread.java: does not exist in index error: a/standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/a
[jira] [Commented] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
[ https://issues.apache.org/jira/browse/HIVE-21880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874593#comment-16874593 ] Hive QA commented on HIVE-21880: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973104/HIVE-21880.02.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 16328 tests executed *Failed tests:* {noformat} TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed out) (batchId=232) TestObjectStore - did not produce a TEST-*.xml file (likely timed out) (batchId=232) org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17779/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17779/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17779/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973104 - PreCommit-HIVE-Build > Enable flaky test > TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites. > --- > > Key: HIVE-21880 > URL: https://issues.apache.org/jira/browse/HIVE-21880 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: Sankar Hariappan >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Need to enable > TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites, > which is disabled as it is flaky and randomly failing with the error below. > {code} > Error Message > Notification events are missing in the meta store. > Stacktrace > java.lang.IllegalStateException: Notification events are missing in the meta > store. 
> at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) > at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:282) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:289) > at
[jira] [Commented] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
[ https://issues.apache.org/jira/browse/HIVE-21880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874591#comment-16874591 ] Hive QA commented on HIVE-21880: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 5s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 34s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 11s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 59s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} itests/hcatalog-unit: The patch generated 81 new + 0 unchanged - 0 fixed = 81 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 32 unchanged - 0 fixed = 33 total (was 32) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17779/dev-support/hive-personality.sh | | git revision | master / 57c4217 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17779/yetus/diff-checkstyle-itests_hcatalog-unit.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17779/yetus/diff-checkstyle-itests_hive-unit.txt | | modules | C: standalone-metastore/metastore-common standalone-metastore/metastore-server ql hcatalog/server-extensions itests/hcatalog-unit itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17779/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Enable flaky test > TestRep
[jira] [Commented] (HIVE-21927) HiveServer Web UI: Setting the HttpOnly option in the cookies
[ https://issues.apache.org/jira/browse/HIVE-21927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874574#comment-16874574 ] Hive QA commented on HIVE-21927: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973106/HIVE-21927.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16357 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17778/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17778/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17778/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973106 - PreCommit-HIVE-Build > HiveServer Web UI: Setting the HttpOnly option in the cookies > - > > Key: HIVE-21927 > URL: https://issues.apache.org/jira/browse/HIVE-21927 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.1 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21927.01.patch, HIVE-21927.patch > > > The intent of this JIRA is to introduce the HttpOnly option in the cookie. > cookie: before change > {code:java} > hdp32bFALSE / FALSE 0 JSESSIONID > 8dkibwayfnrc4y4hvpu3vh74 > {code} > after change: > {code:java} > #HttpOnly_hdp32b FALSE / FALSE 0 JSESSIONID > e1npdkbo3inj1xnd6gdc6ihws > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
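A minimal sketch, assuming the web UI's servlet context exposes the standard Servlet 3.0 session-cookie configuration; whether HiveServer2 wires it exactly this way is an assumption, not shown in the patch quoted here.

{code:java}
import javax.servlet.ServletContext;
import javax.servlet.SessionCookieConfig;

class HttpOnlySessionCookie {
  /** Mark the container-managed session cookie HttpOnly so browsers keep it away from JavaScript. */
  static void enableHttpOnly(ServletContext context) {
    SessionCookieConfig cookieConfig = context.getSessionCookieConfig();
    cookieConfig.setHttpOnly(true); // shows up as the "#HttpOnly_" prefix in cookies.txt-style dumps
  }
}
{code}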
[jira] [Updated] (HIVE-15177) Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST
[ https://issues.apache.org/jira/browse/HIVE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-15177: --- Fix Version/s: (was: 3.1.1) 4.0.0 > Authentication with hive fails when kerberos auth type is set to fromSubject > and principal contains _HOST > - > > Key: HIVE-15177 > URL: https://issues.apache.org/jira/browse/HIVE-15177 > Project: Hive > Issue Type: Bug > Components: Authentication >Reporter: Subrahmanya >Assignee: Oliver Draese >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-15177.1.patch, HIVE-15177.2.patch > > > Authentication with hive fails when kerberos auth type is set to fromSubject > and principal contains _HOST. > When auth type is set to fromSubject, _HOST in principal is not resolved to > the actual host name even though the correct host name is available. This > leads to connection failure. If auth type is not set to fromSubject host > resolution is done correctly. > The problem is in getKerberosTransport method of > org.apache.hive.service.auth.KerberosSaslHelper class. When assumeSubject is > true host name in the principal is not resolved. When it is false, host name > is passed on to HadoopThriftAuthBridge, which takes care of resolving the > parameter. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-15177) Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST
[ https://issues.apache.org/jira/browse/HIVE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-15177: --- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to master (57c4217475856271233d66a7639fb70288d47a43) thanks, [~odraese]! > Authentication with hive fails when kerberos auth type is set to fromSubject > and principal contains _HOST > - > > Key: HIVE-15177 > URL: https://issues.apache.org/jira/browse/HIVE-15177 > Project: Hive > Issue Type: Bug > Components: Authentication >Reporter: Subrahmanya >Assignee: Oliver Draese >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-15177.1.patch, HIVE-15177.2.patch > > > Authentication with hive fails when kerberos auth type is set to fromSubject > and principal contains _HOST. > When auth type is set to fromSubject, _HOST in principal is not resolved to > the actual host name even though the correct host name is available. This > leads to connection failure. If auth type is not set to fromSubject host > resolution is done correctly. > The problem is in getKerberosTransport method of > org.apache.hive.service.auth.KerberosSaslHelper class. When assumeSubject is > true host name in the principal is not resolved. When it is false, host name > is passed on to HadoopThriftAuthBridge, which takes care of resolving the > parameter. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
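A minimal sketch of the _HOST substitution being discussed; the real code path goes through KerberosSaslHelper and HadoopThriftAuthBridge, so the helper below only illustrates the idea and is not the committed fix.

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Locale;

class PrincipalResolver {
  /** e.g. "hive/_HOST@EXAMPLE.COM" + "server1.example.com" -> "hive/server1.example.com@EXAMPLE.COM" */
  static String resolveHostPattern(String principal, String host) throws UnknownHostException {
    if (host == null || host.isEmpty()) {
      // Fall back to the canonical local hostname when no host was supplied.
      host = InetAddress.getLocalHost().getCanonicalHostName();
    }
    return principal.replace("_HOST", host.toLowerCase(Locale.ROOT));
  }
}
{code}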
[jira] [Commented] (HIVE-21927) HiveServer Web UI: Setting the HttpOnly option in the cookies
[ https://issues.apache.org/jira/browse/HIVE-21927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874562#comment-16874562 ] Hive QA commented on HIVE-21927: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17778/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: common U: common | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17778/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> HiveServer Web UI: Setting the HttpOnly option in the cookies > - > > Key: HIVE-21927 > URL: https://issues.apache.org/jira/browse/HIVE-21927 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.1 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21927.01.patch, HIVE-21927.patch > > > Intend of this JIRA is to introduce the HttpOnly option in the cookie. > cookie: before change > {code:java} > hdp32bFALSE / FALSE 0 JSESSIONID > 8dkibwayfnrc4y4hvpu3vh74 > {code} > after change: > {code:java} > #HttpOnly_hdp32b FALSE / FALSE 0 JSESSIONID > e1npdkbo3inj1xnd6gdc6ihws > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21886) REPL - With table list - Handle rename events during replace policy
[ https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874560#comment-16874560 ] Hive QA commented on HIVE-21886: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973103/HIVE-21886.04.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16327 tests executed *Failed tests:* {noformat} TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed out) (batchId=232) TestObjectStore - did not produce a TEST-*.xml file (likely timed out) (batchId=232) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973103 - PreCommit-HIVE-Build > REPL - With table list - Handle rename events during replace policy > --- > > Key: HIVE-21886 > URL: https://issues.apache.org/jira/browse/HIVE-21886 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21886.01.patch, HIVE-21886.02.patch, > HIVE-21886.03.patch, HIVE-21886.04.patch, HIVE-21886.04.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > If some rename events are found to be dumped and replayed while replace > policy is getting executed, it needs to take care of the policy inclusion in > both the policy for each table name. > 1. Create a list of tables to be bootstrapped. > 2. During handling of alter table, if the alter type is rename > 1. If the old table name is present in the list of table to be > bootstrapped, remove it. > 2. If the new table name, matches the new policy, add it to the list > of tables to be bootstrapped. > 3. If the old table does not match the old policy drop it, even if the > table is not present at target. > 3. During handling of drop table > 1. if the table is in the list of tables to be bootstrapped, then > remove it and ignore the event. > 4. During other event handling > 1. if the table is there in the list of tables to be bootstrapped, > then ignore the event. > 2. If the new policy does not match the table name, then ignore the > event. > > Rename handling during replace policy > # Old name not matching old policy – The old table will not be there at the > target cluster. The table will not be returned by get-all-table. > ## Old name is not matching new policy > ### New name not matching old policy > New name not matching new policy > * Ignore the event, no need to do anything. > New name matching new policy > * The table will be returned by get-all-table. Replace policy handler > will bootstrap this table as its matching new policy and not matching old > policy. > * All the future events will be ignored as part of check added by > replace policy handling. 
> * All the events with the old table name will anyway be ignored, as the old > name does not match the new policy. > ### New name matching old policy > New name not matching new policy > * As the new name does not match the new policy, the table need not be > replicated. > * As the old name does not match the new policy, the rename events will > be ignored. > * So nothing to be done for this scenario. > New name matching new policy > * As the new name matches both the old and new policy, the replace handler > will not bootstrap the table. > * Add the table to the list of tables to be bootstrapped. > * Ignore all the events with the new name. > * If there is a drop event for the table (with the new name), then remove > the table from the list of tables to be bootstrapped. > * In case of a rename event (double rename) > ** If the new name satisfies the table pattern, then add the new name to > the list of tables to be bootstrapped and remove the old name from the list > of tables to be bootstrapped. > ** If the new name does not satisfy it, then just remove the table name > from the list of tables to be bootstrapped.
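Purely as a reading aid, here is a rough sketch of the bookkeeping the steps above describe; the Policy interface, the dropAtTarget() stub, and the method names are invented for illustration and are not Hive's replication classes.

{code:java}
import java.util.HashSet;
import java.util.Set;

class ReplacePolicyRenameHandler {
  interface Policy { boolean matches(String tableName); }

  private final Policy oldPolicy;
  private final Policy newPolicy;
  private final Set<String> tablesToBootstrap = new HashSet<>();

  ReplacePolicyRenameHandler(Policy oldPolicy, Policy newPolicy) {
    this.oldPolicy = oldPolicy;
    this.newPolicy = newPolicy;
  }

  /** Step 2: alter-table event of type rename. */
  void onRename(String oldName, String newName) {
    tablesToBootstrap.remove(oldName);          // 2.1 old name was queued for bootstrap
    if (newPolicy.matches(newName)) {
      tablesToBootstrap.add(newName);           // 2.2 new name matches the new policy
    }
    if (!oldPolicy.matches(oldName)) {
      dropAtTarget(oldName);                    // 2.3 drop even if absent at the target
    }
  }

  /** Step 3: drop-table event. */
  void onDrop(String tableName) {
    tablesToBootstrap.remove(tableName);        // 3.1 queued table dropped: ignore the event
  }

  /** Step 4: any other event. */
  boolean shouldIgnore(String tableName) {
    return tablesToBootstrap.contains(tableName)   // 4.1 will be bootstrapped anyway
        || !newPolicy.matches(tableName);          // 4.2 table is outside the new policy
  }

  private void dropAtTarget(String tableName) {
    // Placeholder: replay a drop for this table on the target cluster.
  }
}
{code}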
[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-21225: Attachment: HIVE-21225.5.patch > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-21225: Attachment: (was: HIVE-21225.5.patch) > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21886) REPL - With table list - Handle rename events during replace policy
[ https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874541#comment-16874541 ] Hive QA commented on HIVE-21886: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 3s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-1/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql itests/hive-unit U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-1/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > REPL - With table list - Handle rename events during replace policy > --- > > Key: HIVE-21886 > URL: https://issues.apache.org/jira/browse/HIVE-21886 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21886.01.patch, HIVE-21886.02.patch, > HIVE-21886.03.patch, HIVE-21886.04.patch, HIVE-21886.04.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > If some rename events are found to be dumped and replayed while replace > policy is getting executed, it needs to take care of the policy inclusion in > both the policy for each table name. > 1. Create a list of tables to be bootstrapped. > 2. During handling of alter table, if the alter type is rename > 1. If the old table name is present in the list of table to be > bootstrapped, remove it. > 2. If the new table name, matches the new policy, add it to the list > of tables to be bootstrapped. >
[jira] [Commented] (HIVE-21925) HiveConnection retries should support backoff
[ https://issues.apache.org/jira/browse/HIVE-21925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874528#comment-16874528 ] Prasanth Jayachandran commented on HIVE-21925: -- +1 > HiveConnection retries should support backoff > - > > Key: HIVE-21925 > URL: https://issues.apache.org/jira/browse/HIVE-21925 > Project: Hive > Issue Type: Bug > Components: Clients >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21925.01.patch, HIVE-21925.patch > > > Hive JDBC connection supports retries. In http mode, retries always seem to > happen immediately without any backoff. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18735) Create table like loses transactional attribute
[ https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874526#comment-16874526 ] Hive QA commented on HIVE-18735: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973083/HIVE-18735.06.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16357 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17776/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17776/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17776/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973083 - PreCommit-HIVE-Build > Create table like loses transactional attribute > --- > > Key: HIVE-18735 > URL: https://issues.apache.org/jira/browse/HIVE-18735 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Eugene Koifman >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, > HIVE-18735.03.patch, HIVE-18735.04.patch, HIVE-18735.05.patch, > HIVE-18735.06.patch > > > {noformat} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('transactional'='true')"; > create table T like T1; > show create table T ; > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > > 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1518813564') > {noformat} > Specifying props explicitly does work > {noformat} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('transactional'='true')"; > create table T like T1 TBLPROPERTIES ('transactional'='true'); > show create table T ; > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > > 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518814098564/warehouse/t' > TBLPROPERTIES ( > 'transactional'='true', > 'transactional_properties'='default', > 'transient_lastDdlTime'='1518814111') > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-21928: --- Status: Open (was: Patch Available) > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Attachments: HIVE-21928.patch > > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
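One way to make AND(x=5, true, true) and x=5 carry the same statistics is to drop literal TRUE operands from the conjunction before estimation. A hedged sketch of that normalization, with a toy expression model rather than Hive's ExprNodeDesc:

{noformat}
import java.util.ArrayList;
import java.util.List;

// Illustrative normalization only (not the Hive stats annotator): dropping
// literal TRUE operands from a conjunction before estimating selectivity
// makes AND(x = 5, true, true) behave exactly like x = 5.
class ConjunctionNormalizer {
  interface Expr { }
  static final Expr TRUE = new Expr() { };       // the literal TRUE singleton
  static final class And implements Expr {
    final List<Expr> children;
    And(List<Expr> children) { this.children = children; }
  }

  static Expr normalize(Expr e) {
    if (!(e instanceof And)) {
      return e;
    }
    List<Expr> kept = new ArrayList<>();
    for (Expr child : ((And) e).children) {
      Expr normalized = normalize(child);
      if (normalized != TRUE) {                  // constant TRUE filters nothing
        kept.add(normalized);
      }
    }
    if (kept.isEmpty()) {
      return TRUE;                               // an empty conjunction is TRUE
    }
    return kept.size() == 1 ? kept.get(0) : new And(kept);
  }
}
{noformat}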
[jira] [Work logged] (HIVE-21867) Sort semijoin conditions to accelerate query processing
[ https://issues.apache.org/jira/browse/HIVE-21867?focusedWorklogId=268842&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268842 ] ASF GitHub Bot logged work on HIVE-21867: - Author: ASF GitHub Bot Created on: 27/Jun/19 21:19 Start Date: 27/Jun/19 21:19 Worklog Time Spent: 10m Work Description: jcamachor commented on issue #687: HIVE-21867 URL: https://github.com/apache/hive/pull/687#issuecomment-506514522 @vineetgarg02 , I updated the PR. Note that ```hybridgrace_hashjoin_2.q``` issue will be tackled in https://issues.apache.org/jira/browse/HIVE-21928 so I think we can proceed with this issue. In the latest patch I also added some simple logic to ```SharedWorkOptimizer``` to remove some duplicate filter expressions remaining in the plan because of change in the shape of the expression (problem was existing, patch just exposed it). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268842) Time Spent: 1.5h (was: 1h 20m) > Sort semijoin conditions to accelerate query processing > --- > > Key: HIVE-21867 > URL: https://issues.apache.org/jira/browse/HIVE-21867 > Project: Hive > Issue Type: New Feature > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, > HIVE-21867.04.patch, HIVE-21867.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > The problem was tackled for CBO in HIVE-21857. Semijoin filters are > introduced later in the planning phase. Follow similar approach to sort them, > trying to accelerate filter evaluation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21867) Sort semijoin conditions to accelerate query processing
[ https://issues.apache.org/jira/browse/HIVE-21867?focusedWorklogId=268841&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268841 ] ASF GitHub Bot logged work on HIVE-21867: - Author: ASF GitHub Bot Created on: 27/Jun/19 21:19 Start Date: 27/Jun/19 21:19 Worklog Time Spent: 10m Work Description: jcamachor commented on issue #687: HIVE-21867 URL: https://github.com/apache/hive/pull/687#issuecomment-506514522 @vineetgarg02 , I updated the PR. Note that ```hybridgrace_hashjoin_2.q``` issue will be tackled in HIVE-21928 so I think we can proceed with this issue. In the latest patch I also added some simple logic to ```SharedWorkOptimizer``` to remove some duplicate filter expressions remaining in the plan because of change in the shape of the expression (problem was existing, patch just exposed it). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268841) Time Spent: 1h 20m (was: 1h 10m) > Sort semijoin conditions to accelerate query processing > --- > > Key: HIVE-21867 > URL: https://issues.apache.org/jira/browse/HIVE-21867 > Project: Hive > Issue Type: New Feature > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, > HIVE-21867.04.patch, HIVE-21867.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > The problem was tackled for CBO in HIVE-21857. Semijoin filters are > introduced later in the planning phase. Follow similar approach to sort them, > trying to accelerate filter evaluation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21867) Sort semijoin conditions to accelerate query processing
[ https://issues.apache.org/jira/browse/HIVE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-21867: --- Attachment: HIVE-21867.04.patch > Sort semijoin conditions to accelerate query processing > --- > > Key: HIVE-21867 > URL: https://issues.apache.org/jira/browse/HIVE-21867 > Project: Hive > Issue Type: New Feature > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, > HIVE-21867.04.patch, HIVE-21867.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The problem was tackled for CBO in HIVE-21857. Semijoin filters are > introduced later in the planning phase. Follow similar approach to sort them, > trying to accelerate filter evaluation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21867) Sort semijoin conditions to accelerate query processing
[ https://issues.apache.org/jira/browse/HIVE-21867?focusedWorklogId=268837&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268837 ] ASF GitHub Bot logged work on HIVE-21867: - Author: ASF GitHub Bot Created on: 27/Jun/19 21:15 Start Date: 27/Jun/19 21:15 Worklog Time Spent: 10m Work Description: jcamachor commented on issue #687: HIVE-21867 URL: https://github.com/apache/hive/pull/687#issuecomment-506514522 @vineetgarg02 , I updated the PR. Note that ```hybridgrace_hashjoin_2.q``` issue will be tackled in HIVE-20260 so I think we can proceed with this issue. In the latest patch I also added some simple logic to ```SharedWorkOptimizer``` to remove some duplicate filter expressions remaining in the plan because of change in the shape of the expression (problem was existing, patch just exposed it). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268837) Time Spent: 1h 10m (was: 1h) > Sort semijoin conditions to accelerate query processing > --- > > Key: HIVE-21867 > URL: https://issues.apache.org/jira/browse/HIVE-21867 > Project: Hive > Issue Type: New Feature > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, > HIVE-21867.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The problem was tackled for CBO in HIVE-21857. Semijoin filters are > introduced later in the planning phase. Follow similar approach to sort them, > trying to accelerate filter evaluation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
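The ordering idea behind the patch series above can be sketched as sorting conditions by estimated selectivity and evaluation cost; the classes below are illustrative stand-ins, not the actual optimizer code:

{noformat}
import java.util.Comparator;
import java.util.List;

// Illustrative ordering heuristic, not Hive's implementation: evaluate the
// most selective (and, on ties, cheapest) semijoin conditions first so that
// rows are rejected as early as possible.
class SemijoinConditionSorter {
  static final class Condition {
    final String expr;          // textual form, for illustration
    final double selectivity;   // estimated fraction of rows that pass
    final double evalCost;      // estimated per-row evaluation cost
    Condition(String expr, double selectivity, double evalCost) {
      this.expr = expr;
      this.selectivity = selectivity;
      this.evalCost = evalCost;
    }
  }

  // Lower selectivity first; break ties by cheaper evaluation cost.
  static void sort(List<Condition> conditions) {
    conditions.sort(Comparator
        .comparingDouble((Condition c) -> c.selectivity)
        .thenComparingDouble(c -> c.evalCost));
  }
}
{noformat}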
[jira] [Commented] (HIVE-21932) IndexOutOfRangeExeption in FileChksumIterator
[ https://issues.apache.org/jira/browse/HIVE-21932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874507#comment-16874507 ] Vihang Karajgaonkar commented on HIVE-21932: [~thejas] [~anishek] Can you please take a look? It's a simple patch. > IndexOutOfRangeExeption in FileChksumIterator > - > > Key: HIVE-21932 > URL: https://issues.apache.org/jira/browse/HIVE-21932 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21932.01.patch > > > According to the definition of {{InsertEventRequestData}} in > {{hive_metastore.thrift}}, the {{filesAddedChecksum}} is an optional field. But > the FileChksumIterator does not handle it correctly when a client fires an > insert event which does not have file checksums. The issue is that the > {{InsertEvent}} class initializes the fileChecksums list to an empty ArrayList, so > the following check will never come into play > {noformat} > result = ReplChangeManager.encodeFileUri(files.get(i), chksums != null ? > chksums.get(i) : null, > subDirs != null ? subDirs.get(i) : null); > {noformat} > The chksums check above should include a {{!chksums.isEmpty()}} check as well > in the above line. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
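The proposed fix amounts to treating an empty checksum list the same as a missing one. A simplified sketch of the guard (hypothetical helper; the real code builds a ReplChangeManager-encoded URI):

{noformat}
import java.util.List;

// Simplified sketch of the guarded lookup suggested above; only the
// null/empty guard is shown, not the actual FileChksumIterator logic.
class FileChecksumLookup {
  static String checksumFor(List<String> files, List<String> chksums, int i) {
    // Guarding on isEmpty() as well prevents an IndexOutOfBoundsException when a
    // client sends an insert event without any file checksums.
    String chksum = (chksums != null && !chksums.isEmpty()) ? chksums.get(i) : null;
    return files.get(i) + (chksum != null ? "#" + chksum : "");
  }
}
{noformat}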
[jira] [Updated] (HIVE-21932) IndexOutOfRangeExeption in FileChksumIterator
[ https://issues.apache.org/jira/browse/HIVE-21932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-21932: --- Status: Patch Available (was: Open) > IndexOutOfRangeExeption in FileChksumIterator > - > > Key: HIVE-21932 > URL: https://issues.apache.org/jira/browse/HIVE-21932 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21932.01.patch > > > According to definition of {{InsertEventRequestData}} in > {{hive_metastore.thrift}} the {{filesAddedChecksum}} is a optional field. But > the FileChksumIterator does not handle it correctly when a client fires a > insert event which does not have file checksums. The issue is that > {{InsertEvent}} class initializes fileChecksums list to a empty arrayList to > the following check will never come into play > {noformat} > result = ReplChangeManager.encodeFileUri(files.get(i), chksums != null ? > chksums.get(i) : null, > subDirs != null ? subDirs.get(i) : null); > {noformat} > The chksums check above should include a {{!chksums.isEmpty()}} check as well > in the above line. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21932) IndexOutOfRangeExeption in FileChksumIterator
[ https://issues.apache.org/jira/browse/HIVE-21932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-21932: --- Attachment: HIVE-21932.01.patch > IndexOutOfRangeExeption in FileChksumIterator > - > > Key: HIVE-21932 > URL: https://issues.apache.org/jira/browse/HIVE-21932 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21932.01.patch > > > According to definition of {{InsertEventRequestData}} in > {{hive_metastore.thrift}} the {{filesAddedChecksum}} is a optional field. But > the FileChksumIterator does not handle it correctly when a client fires a > insert event which does not have file checksums. The issue is that > {{InsertEvent}} class initializes fileChecksums list to a empty arrayList to > the following check will never come into play > {noformat} > result = ReplChangeManager.encodeFileUri(files.get(i), chksums != null ? > chksums.get(i) : null, > subDirs != null ? subDirs.get(i) : null); > {noformat} > The chksums check above should include a {{!chksums.isEmpty()}} check as well > in the above line. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18735) Create table like loses transactional attribute
[ https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874502#comment-16874502 ] Hive QA commented on HIVE-18735: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 45s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17776/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql hbase-handler U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17776/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Create table like loses transactional attribute > --- > > Key: HIVE-18735 > URL: https://issues.apache.org/jira/browse/HIVE-18735 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Eugene Koifman >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, > HIVE-18735.03.patch, HIVE-18735.04.patch, HIVE-18735.05.patch, > HIVE-18735.06.patch > > > {noformat} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('transactional'='true')"; > create table T like T1; > show create table T ; > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > > 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t' > TBLPROPERTIES (
[jira] [Commented] (HIVE-21925) HiveConnection retries should support backoff
[ https://issues.apache.org/jira/browse/HIVE-21925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874496#comment-16874496 ] Rajkumar Singh commented on HIVE-21925: --- Thanks [~prasanth_j] for review, have uploaded the updated patch as per your suggestion. please have a look. > HiveConnection retries should support backoff > - > > Key: HIVE-21925 > URL: https://issues.apache.org/jira/browse/HIVE-21925 > Project: Hive > Issue Type: Bug > Components: Clients >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21925.01.patch, HIVE-21925.patch > > > Hive JDBC connection supports retries. In http mode, retries always seem to > happen immediately without any backoff. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21925) HiveConnection retries should support backoff
[ https://issues.apache.org/jira/browse/HIVE-21925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajkumar Singh updated HIVE-21925: -- Status: Open (was: Patch Available) > HiveConnection retries should support backoff > - > > Key: HIVE-21925 > URL: https://issues.apache.org/jira/browse/HIVE-21925 > Project: Hive > Issue Type: Bug > Components: Clients >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21925.01.patch, HIVE-21925.patch > > > Hive JDBC connection supports retries. In http mode, retries always seem to > happen immediately without any backoff. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21925) HiveConnection retries should support backoff
[ https://issues.apache.org/jira/browse/HIVE-21925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajkumar Singh updated HIVE-21925: -- Attachment: HIVE-21925.01.patch Status: Patch Available (was: Open) > HiveConnection retries should support backoff > - > > Key: HIVE-21925 > URL: https://issues.apache.org/jira/browse/HIVE-21925 > Project: Hive > Issue Type: Bug > Components: Clients >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21925.01.patch, HIVE-21925.patch > > > Hive JDBC connection supports retries. In http mode, retries always seem to > happen immediately without any backoff. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21932) IndexOutOfRangeExeption in FileChksumIterator
[ https://issues.apache.org/jira/browse/HIVE-21932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-21932: -- > IndexOutOfRangeExeption in FileChksumIterator > - > > Key: HIVE-21932 > URL: https://issues.apache.org/jira/browse/HIVE-21932 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > > According to definition of {{InsertEventRequestData}} in > {{hive_metastore.thrift}} the {{filesAddedChecksum}} is a optional field. But > the FileChksumIterator does not handle it correctly when a client fires a > insert event which does not have file checksums. The issue is that > {{InsertEvent}} class initializes fileChecksums list to a empty arrayList to > the following check will never come into play > {noformat} > result = ReplChangeManager.encodeFileUri(files.get(i), chksums != null ? > chksums.get(i) : null, > subDirs != null ? subDirs.get(i) : null); > {noformat} > The chksums check above should include a {{!chksums.isEmpty()}} check as well > in the above line. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-21225: Attachment: HIVE-21225.5.patch > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874482#comment-16874482 ] Hive QA commented on HIVE-21911: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973081/HIVE-21911.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16359 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17775/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17775/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17775/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973081 - PreCommit-HIVE-Build > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 40m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21925) HiveConnection retries should support backoff
[ https://issues.apache.org/jira/browse/HIVE-21925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874469#comment-16874469 ] Prasanth Jayachandran commented on HIVE-21925: -- sleepMs: can that be made a parameter as well? Maybe add another JDBC param "retryInterval" and default it to 1s > HiveConnection retries should support backoff > - > > Key: HIVE-21925 > URL: https://issues.apache.org/jira/browse/HIVE-21925 > Project: Hive > Issue Type: Bug > Components: Clients >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21925.patch > > > Hive JDBC connection supports retries. In http mode, retries always seem to > happen immediately without any backoff. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
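A minimal sketch of what retries with a configurable backoff could look like; the retryInterval parameter mirrors the suggestion above, while the Connector interface is a placeholder rather than HiveConnection's real internals:

{noformat}
import java.util.concurrent.TimeUnit;

// Minimal sketch of retry-with-backoff around an HTTP connection attempt.
// 'retryIntervalMs' mirrors the JDBC parameter suggested in the review; the
// connect() call is a placeholder, not the real HiveConnection internals.
class RetryingConnector {
  interface Connector { void connect() throws Exception; }

  static void connectWithRetries(Connector connector, int maxRetries,
                                 long retryIntervalMs) throws Exception {
    long sleepMs = retryIntervalMs;                 // default could be 1s
    for (int attempt = 0; ; attempt++) {
      try {
        connector.connect();
        return;
      } catch (Exception e) {
        if (attempt >= maxRetries) {
          throw e;                                  // retries exhausted
        }
        TimeUnit.MILLISECONDS.sleep(sleepMs);       // back off instead of retrying immediately
        sleepMs = Math.min(sleepMs * 2, 30_000L);   // exponential backoff, capped
      }
    }
  }
}
{noformat}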
[jira] [Commented] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874440#comment-16874440 ] Hive QA commented on HIVE-21911: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} common: The patch generated 2 new + 438 unchanged - 0 fixed = 440 total (was 438) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} llap-tez: The patch generated 5 new + 76 unchanged - 0 fixed = 81 total (was 76) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17775/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17775/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17775/yetus/diff-checkstyle-llap-tez.txt | | modules | C: common llap-tez U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17775/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 40m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health da
[jira] [Commented] (HIVE-21115) Add support for object versions in metastore
[ https://issues.apache.org/jira/browse/HIVE-21115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874431#comment-16874431 ] Vihang Karajgaonkar commented on HIVE-21115: Don't think this is being worked on anymore. Resolving this as won't fix. > Add support for object versions in metastore > > > Key: HIVE-21115 > URL: https://issues.apache.org/jira/browse/HIVE-21115 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-21115.1.patch, HIVE-21115.2.patch > > > Currently, metastore objects are identified uniquely by their names (e.g. > catName, dbName and tblName together are unique for a table). Once a table or partition > is created, it can be altered in many ways. There is no good way currently > to identify the version of the object once it is altered. For example, > suppose there are two clients (Hive and Impala) using the same metastore. > Once some alter operations are performed by a client, another client which > wants to do an alter operation has no good way to know if the object which it > has is the same as the one stored in metastore. Metastore updates the > {{transient_lastDdlTime}} every time there is a DDL operation on the object. > However, this value cannot be relied on for all clients since after > HIVE-1768 metastore updates the value only when it is not set in the > parameters. It is possible that a client which alters the object state does > not remove the {{transient_lastDdlTime}} and metastore will not update it. > Secondly, if there is a clock skew between multiple HMS instances when HMS-HA > is configured, time values cannot be relied on to find out the sequence of > alter operations on a given object. > This JIRA proposes to use the JDO versioning support of DataNucleus > http://www.datanucleus.org/products/accessplatform_4_2/jdo/versioning.html to > generate an incrementing sequence number every time an object is altered. The > value of this object can be set as one of the values in the parameters. The > advantage of using DataNucleus is that the versioning can be done across HMS > instances as part of the database transaction and it should work for all the > supported databases. > In theory such a version can be used to detect if the client is presenting an > object which is "stale" when issuing an alter request. Metastore can choose to > reject such an alter request since the client may be caching an old version of > the object and any alter operation on such a stale object can potentially > overwrite previous operations. However, this can be done in a separate > JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
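For reference, the JDO optimistic versioning that the proposal points to is declared with the @Version annotation; a minimal DataNucleus-style mapping sketch with hypothetical class and column names (the JIRA was ultimately resolved as won't fix):

{noformat}
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.Version;
import javax.jdo.annotations.VersionStrategy;

// Illustrative JDO mapping: DataNucleus increments the version column inside
// the same database transaction whenever the object is updated, so the counter
// stays consistent across multiple HMS instances. Names are hypothetical.
@PersistenceCapable(table = "TBLS")
@Version(strategy = VersionStrategy.VERSION_NUMBER, column = "OBJECT_VERSION")
public class MTableWithVersion {
  @Persistent
  private String dbName;

  @Persistent
  private String tblName;

  public String getDbName() { return dbName; }
  public String getTblName() { return tblName; }
}
{noformat}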
[jira] [Updated] (HIVE-21115) Add support for object versions in metastore
[ https://issues.apache.org/jira/browse/HIVE-21115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-21115: --- Assignee: Vihang Karajgaonkar (was: Bharathkrishna Guruvayoor Murali) Status: Open (was: Patch Available) > Add support for object versions in metastore > > > Key: HIVE-21115 > URL: https://issues.apache.org/jira/browse/HIVE-21115 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21115.1.patch, HIVE-21115.2.patch > > > Currently, metastore objects are identified uniquely by their names (eg. > catName, dbName and tblName for a table is unique). Once a table or partition > is created it could be altered in many ways. There is no good way currently > to identify the version of the object once it is altered. For example, > suppose there are two clients (Hive and Impala) using the same metastore. > Once some alter operations are performed by a client, another client which > wants to do a alter operation has no good way to know if the object which it > has is the same as the one stored in metastore. Metastore updates the > {{transient_lastDdlTime}} every time there is a DDL operation on the object. > However, this value cannot be relied for all the clients since after > HIVE-1768 metastore updates the value only when it is not set in the > parameters. It is possible that a client which alters the object state, does > not remove the {{transient_lastDdlTime}} and metastore will not update it. > Secondly, if there is a clock skew between multiple HMS instances when HMS-HA > is configured, time values cannot be relied on to find out the sequence of > alter operations on a given object. > This JIRA propose to use JDO versioning support by Datanucleus > http://www.datanucleus.org/products/accessplatform_4_2/jdo/versioning.html to > generate a incrementing sequence number every time a object is altered. The > value of this object can be set as one of the values in the parameters. The > advantage of using Datanucleus the versioning can be done across HMS > instances as part of the database transaction and it should work for all the > supported databases. > In theory such a version can be used to detect if the client is presenting a > object which is "stale" when issuing a alter request. Metastore can choose to > reject such a alter request since the client may be caching a old version of > the object and any alter operation on such stale object can potentially > overwrite previous operations. However, this is can be done in a separate > JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-21115) Add support for object versions in metastore
[ https://issues.apache.org/jira/browse/HIVE-21115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-21115 started by Vihang Karajgaonkar. -- > Add support for object versions in metastore > > > Key: HIVE-21115 > URL: https://issues.apache.org/jira/browse/HIVE-21115 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21115.1.patch, HIVE-21115.2.patch > > > Currently, metastore objects are identified uniquely by their names (eg. > catName, dbName and tblName for a table is unique). Once a table or partition > is created it could be altered in many ways. There is no good way currently > to identify the version of the object once it is altered. For example, > suppose there are two clients (Hive and Impala) using the same metastore. > Once some alter operations are performed by a client, another client which > wants to do a alter operation has no good way to know if the object which it > has is the same as the one stored in metastore. Metastore updates the > {{transient_lastDdlTime}} every time there is a DDL operation on the object. > However, this value cannot be relied for all the clients since after > HIVE-1768 metastore updates the value only when it is not set in the > parameters. It is possible that a client which alters the object state, does > not remove the {{transient_lastDdlTime}} and metastore will not update it. > Secondly, if there is a clock skew between multiple HMS instances when HMS-HA > is configured, time values cannot be relied on to find out the sequence of > alter operations on a given object. > This JIRA propose to use JDO versioning support by Datanucleus > http://www.datanucleus.org/products/accessplatform_4_2/jdo/versioning.html to > generate a incrementing sequence number every time a object is altered. The > value of this object can be set as one of the values in the parameters. The > advantage of using Datanucleus the versioning can be done across HMS > instances as part of the database transaction and it should work for all the > supported databases. > In theory such a version can be used to detect if the client is presenting a > object which is "stale" when issuing a alter request. Metastore can choose to > reject such a alter request since the client may be caching a old version of > the object and any alter operation on such stale object can potentially > overwrite previous operations. However, this is can be done in a separate > JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-21115) Add support for object versions in metastore
[ https://issues.apache.org/jira/browse/HIVE-21115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar resolved HIVE-21115. Resolution: Won't Fix > Add support for object versions in metastore > > > Key: HIVE-21115 > URL: https://issues.apache.org/jira/browse/HIVE-21115 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21115.1.patch, HIVE-21115.2.patch > > > Currently, metastore objects are identified uniquely by their names (eg. > catName, dbName and tblName for a table is unique). Once a table or partition > is created it could be altered in many ways. There is no good way currently > to identify the version of the object once it is altered. For example, > suppose there are two clients (Hive and Impala) using the same metastore. > Once some alter operations are performed by a client, another client which > wants to do a alter operation has no good way to know if the object which it > has is the same as the one stored in metastore. Metastore updates the > {{transient_lastDdlTime}} every time there is a DDL operation on the object. > However, this value cannot be relied for all the clients since after > HIVE-1768 metastore updates the value only when it is not set in the > parameters. It is possible that a client which alters the object state, does > not remove the {{transient_lastDdlTime}} and metastore will not update it. > Secondly, if there is a clock skew between multiple HMS instances when HMS-HA > is configured, time values cannot be relied on to find out the sequence of > alter operations on a given object. > This JIRA propose to use JDO versioning support by Datanucleus > http://www.datanucleus.org/products/accessplatform_4_2/jdo/versioning.html to > generate a incrementing sequence number every time a object is altered. The > value of this object can be set as one of the values in the parameters. The > advantage of using Datanucleus the versioning can be done across HMS > instances as part of the database transaction and it should work for all the > supported databases. > In theory such a version can be used to detect if the client is presenting a > object which is "stale" when issuing a alter request. Metastore can choose to > reject such a alter request since the client may be caching a old version of > the object and any alter operation on such stale object can potentially > overwrite previous operations. However, this is can be done in a separate > JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-21128) hive.version.shortname should be 3.2 on branch-3
[ https://issues.apache.org/jira/browse/HIVE-21128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar resolved HIVE-21128. Resolution: Won't Fix > hive.version.shortname should be 3.2 on branch-3 > > > Key: HIVE-21128 > URL: https://issues.apache.org/jira/browse/HIVE-21128 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21128.01.branch-3.patch, > HIVE-21128.02.branch-3.patch, HIVE-21128.03.branch-3.patch > > > Since 3.1.0 is already release, the {{hive.version.shortname}} property in > the pom.xml of standalone-metastore should be 3.2.0. This version shortname > is used to generate the metastore schema version and used by Schematool to > initialize the schema using the correct script. Currently it using 3.1.0 > schema init script instead of 3.2.0 init script -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21128) hive.version.shortname should be 3.2 on branch-3
[ https://issues.apache.org/jira/browse/HIVE-21128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-21128: --- Status: Open (was: Patch Available) > hive.version.shortname should be 3.2 on branch-3 > > > Key: HIVE-21128 > URL: https://issues.apache.org/jira/browse/HIVE-21128 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21128.01.branch-3.patch, > HIVE-21128.02.branch-3.patch, HIVE-21128.03.branch-3.patch > > > Since 3.1.0 is already release, the {{hive.version.shortname}} property in > the pom.xml of standalone-metastore should be 3.2.0. This version shortname > is used to generate the metastore schema version and used by Schematool to > initialize the schema using the correct script. Currently it using 3.1.0 > schema init script instead of 3.2.0 init script -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21131) Document some of the static util methods in MetastoreUtils
[ https://issues.apache.org/jira/browse/HIVE-21131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-21131: -- Assignee: (was: Vihang Karajgaonkar) > Document some of the static util methods in MetastoreUtils > -- > > Key: HIVE-21131 > URL: https://issues.apache.org/jira/browse/HIVE-21131 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Priority: Trivial > > {{MetastoreUtils}} has some methods like {{makePartNameMatcher}} which could > use some javadoc -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21128) hive.version.shortname should be 3.2 on branch-3
[ https://issues.apache.org/jira/browse/HIVE-21128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874430#comment-16874430 ] Vihang Karajgaonkar commented on HIVE-21128: 3.2.0 is already released, so this JIRA can be resolved now. The branch-3 patch is not needed anymore. > hive.version.shortname should be 3.2 on branch-3 > > > Key: HIVE-21128 > URL: https://issues.apache.org/jira/browse/HIVE-21128 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-21128.01.branch-3.patch, > HIVE-21128.02.branch-3.patch, HIVE-21128.03.branch-3.patch > > > Since 3.1.0 is already released, the {{hive.version.shortname}} property in > the pom.xml of standalone-metastore should be 3.2.0. This version shortname > is used to generate the metastore schema version and is used by Schematool to > initialize the schema using the correct script. Currently it is using the 3.1.0 > schema init script instead of the 3.2.0 init script -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21595) HIVE-20556 breaks backwards compatibility
[ https://issues.apache.org/jira/browse/HIVE-21595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-21595: -- Assignee: (was: Vihang Karajgaonkar) > HIVE-20556 breaks backwards compatibility > - > > Key: HIVE-21595 > URL: https://issues.apache.org/jira/browse/HIVE-21595 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Priority: Blocker > > HIVE-20556 exposes a new field Table definition. However, it changes the > order of the field ids which breaks backwards wire-compatibility. Any older > client which is connects with HMS will not be able to deserialize table > objects correctly since the field ids are different on client and server side. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.11.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.2.patch, HIVE-21637.3.patch, > HIVE-21637.4.patch, HIVE-21637.5.patch, HIVE-21637.6.patch, > HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874418#comment-16874418 ] Hive QA commented on HIVE-21910: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973059/HIVE-21910.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16359 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityMultiplePreemptionsSameHost2 (batchId=293) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17774/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17774/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17774/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973059 - PreCommit-HIVE-Build > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 50m > Remaining Estimate: 0h > > We need to generate multiple target locations by > HostAffinitySplitLocationProvider, so we will have deterministic fallback > nodes in case the target node is disabled -- This message was sent by Atlassian JIRA (v7.6.3#76005)
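One way to obtain deterministic fallback nodes, as the description asks for, is to derive several candidate hosts from the same split hash. A sketch under that assumption; the CRC32 hash and class name are placeholders, not the actual HostAffinitySplitLocationProvider change:

{noformat}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.CRC32;

// Sketch only: derive several deterministic candidate hosts for a split so
// that, if the primary is disabled, the fallback choice is stable as well.
class MultiLocationPicker {
  static List<String> pickLocations(String splitPath, List<String> hosts, int count) {
    List<String> picked = new ArrayList<>();
    int n = hosts.size();
    if (n == 0) {
      return picked;                              // no registered hosts
    }
    CRC32 crc = new CRC32();
    crc.update(splitPath.getBytes(StandardCharsets.UTF_8));
    int start = (int) (crc.getValue() % n);
    for (int i = 0; i < Math.min(count, n); i++) {
      picked.add(hosts.get((start + i) % n));     // primary first, then deterministic fallbacks
    }
    return picked;
  }
}
{noformat}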
[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=268723&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268723 ] ASF GitHub Bot logged work on HIVE-21911: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:30 Start Date: 27/Jun/19 18:30 Worklog Time Spent: 10m Work Description: odraese commented on pull request #691: HIVE-21911: Pluggable LlapMetricsListener on Tez side to disable / resize Daemons URL: https://github.com/apache/hive/pull/691#discussion_r298311447 ## File path: llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/metrics/LlapMetricsCollector.java ## @@ -58,26 +61,44 @@ private final Map llapClients; private final Map instanceStatisticsMap; private final long metricsCollectionMs; + @VisibleForTesting + final LlapMetricsListener listener; Review comment: Would it potentially make sense to have multiple listeners? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268723) Time Spent: 20m (was: 10m) > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 20m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=268724&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268724 ] ASF GitHub Bot logged work on HIVE-21911: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:30 Start Date: 27/Jun/19 18:30 Worklog Time Spent: 10m Work Description: odraese commented on pull request #691: HIVE-21911: Pluggable LlapMetricsListener on Tez side to disable / resize Daemons URL: https://github.com/apache/hive/pull/691#discussion_r298309113 ## File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java ## @@ -4353,6 +4353,9 @@ private static void populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal new TimeValidator(TimeUnit.MILLISECONDS), "Collect llap daemon metrics in the AM every given milliseconds,\n" + "so that the AM can use this information, to make better scheduling decisions.\n" + "If it's set to 0, then the feature is disabled."), + LLAP_TASK_SCHEDULER_AM_COLLECT_DAEMON_METRICS_LISTENER("hive.llap.task.scheduler.am.collect.daemon.metrics.listener", "", Review comment: Would remove the reference to AM here as we will change that implementation detail in the next stage. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268724) Time Spent: 0.5h (was: 20m) > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=268722&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268722 ] ASF GitHub Bot logged work on HIVE-21911: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:30 Start Date: 27/Jun/19 18:30 Worklog Time Spent: 10m Work Description: odraese commented on pull request #691: HIVE-21911: Pluggable LlapMetricsListener on Tez side to disable / resize Daemons URL: https://github.com/apache/hive/pull/691#discussion_r298311718 ## File path: llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/metrics/LlapMetricsCollector.java ## @@ -101,13 +122,27 @@ void collectMetrics() { LlapDaemonProtocolProtos.GetDaemonMetricsResponseProto metrics = client.getDaemonMetrics(null, LlapDaemonProtocolProtos.GetDaemonMetricsRequestProto.newBuilder().build()); -instanceStatisticsMap.put(identity, new LlapMetrics(metrics)); - +LlapMetrics newMetrics = new LlapMetrics(metrics); Review comment: Why would we start the receiving of stats, if we wouldn't have a listener? Should we maybe skip this if there is no listener? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268722) Time Spent: 20m (was: 10m) > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 20m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=268725&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268725 ] ASF GitHub Bot logged work on HIVE-21911: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:30 Start Date: 27/Jun/19 18:30 Worklog Time Spent: 10m Work Description: odraese commented on pull request #691: HIVE-21911: Pluggable LlapMetricsListener on Tez side to disable / resize Daemons URL: https://github.com/apache/hive/pull/691#discussion_r298310874 ## File path: llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/metrics/LlapMetricsCollector.java ## @@ -58,26 +61,44 @@ private final Map llapClients; private final Map instanceStatisticsMap; private final long metricsCollectionMs; + @VisibleForTesting + final LlapMetricsListener listener; - public LlapMetricsCollector(Configuration conf) { + public LlapMetricsCollector(Configuration conf, LlapRegistryService registry) { this( conf, Executors.newSingleThreadScheduledExecutor( new ThreadFactoryBuilder().setDaemon(true).setNameFormat(THREAD_NAME) .build()), -LlapManagementProtocolClientImplFactory.basicInstance(conf)); +LlapManagementProtocolClientImplFactory.basicInstance(conf), +registry); } @VisibleForTesting LlapMetricsCollector(Configuration conf, ScheduledExecutorService scheduledMetricsExecutor, - LlapManagementProtocolClientImplFactory clientFactory) { + LlapManagementProtocolClientImplFactory clientFactory, + LlapRegistryService registry) { this.scheduledMetricsExecutor = scheduledMetricsExecutor; this.clientFactory = clientFactory; this.llapClients = new HashMap<>(); this.instanceStatisticsMap = new ConcurrentHashMap<>(); this.metricsCollectionMs = HiveConf.getTimeVar(conf, HiveConf.ConfVars.LLAP_TASK_SCHEDULER_AM_COLLECT_DAEMON_METRICS_MS, TimeUnit.MILLISECONDS); +String listenerClass = HiveConf.getVar(conf, + HiveConf.ConfVars.LLAP_TASK_SCHEDULER_AM_COLLECT_DAEMON_METRICS_LISTENER); +if (Strings.isBlank(listenerClass)) { + listener = null; +} else { + try { +listener = (LlapMetricsListener)Class.forName(listenerClass.trim()).newInstance(); Review comment: Maybe the instance should be created through something like the ReflectionUtil? This would ensure that we have an accessible constructor and be the common way (I believe). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268725) Time Spent: 40m (was: 0.5h) > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 40m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
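A hedged sketch of the suggested direction: instantiate the configured class through a reflection utility that checks the type up front and surfaces constructor problems with a clear error. It uses Hadoop's org.apache.hadoop.util.ReflectionUtils as a stand-in for the ReflectionUtil helper mentioned above, so treat it as an illustration of the idea rather than the exact change requested.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Sketch: create a listener instance from a class name taken from HiveConf.
// ReflectionUtils.newInstance() reuses a cached, accessible constructor and
// injects the Configuration when the class is Configurable.
public final class ListenerFactory {
  private ListenerFactory() {
  }

  @SuppressWarnings("unchecked")
  public static <T> T createInstance(String className, Class<T> expectedType, Configuration conf) {
    try {
      Class<?> clazz = Class.forName(className.trim());
      if (!expectedType.isAssignableFrom(clazz)) {
        throw new IllegalArgumentException(
            className + " does not implement " + expectedType.getName());
      }
      return (T) ReflectionUtils.newInstance(clazz, conf);
    } catch (ClassNotFoundException e) {
      throw new IllegalArgumentException("Listener class not found: " + className, e);
    }
  }
}
{code}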
[jira] [Work logged] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?focusedWorklogId=268708&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268708 ] ASF GitHub Bot logged work on HIVE-21910: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:13 Start Date: 27/Jun/19 18:13 Worklog Time Spent: 10m Work Description: odraese commented on pull request #690: HIVE-21910: Multiple target location generation in HostAffinitySplitLocationProvider URL: https://github.com/apache/hive/pull/690#discussion_r298300222 ## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HostAffinitySplitLocationProvider.java ## @@ -52,13 +52,19 @@ private final List locations; private final Set locationSet; + private final int numberOfLocations; - public HostAffinitySplitLocationProvider(List knownLocations) { + public HostAffinitySplitLocationProvider(List knownLocations, int numberOfLocations) { Preconditions.checkState(knownLocations != null && !knownLocations.isEmpty(), HostAffinitySplitLocationProvider.class.getName() + " needs at least 1 location to function"); +Preconditions.checkArgument(numberOfLocations >= 0, Review comment: Why can numberOfLocations be zero. I would expect that the minimum is one? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268708) Time Spent: 40m (was: 0.5h) > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 40m > Remaining Estimate: 0h > > We need to generate multiple target locations by > HostAffinitySplitLocationProvider, so we will have deterministic fallback > nodes in case the target node is disabled -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?focusedWorklogId=268709&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268709 ] ASF GitHub Bot logged work on HIVE-21910: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:13 Start Date: 27/Jun/19 18:13 Worklog Time Spent: 10m Work Description: odraese commented on pull request #690: HIVE-21910: Multiple target location generation in HostAffinitySplitLocationProvider URL: https://github.com/apache/hive/pull/690#discussion_r298297521 ## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HostAffinitySplitLocationProvider.java ## @@ -72,11 +78,17 @@ public HostAffinitySplitLocationProvider(List knownLocations) { FileSplit fsplit = (FileSplit) split; String splitDesc = "Split at " + fsplit.getPath() + " with offset= " + fsplit.getStart() + ", length=" + fsplit.getLength(); -List preferredLocations = preferLocations(fsplit); -String location = -preferredLocations.get(determineLocation(preferredLocations, fsplit.getPath().toString(), -fsplit.getStart(), splitDesc)); -return (location != null) ? new String[] { location } : null; +List preferredLocations = new ArrayList<>(preferLocations(fsplit)); Review comment: I think, we might want to consider rolling back the patch that introduced preferred locations in the first place (HIVE-21232). This caused noticeable skew within the LLAP cluster. Let's verify with @t3rmin4t0r but I would assume that we go back to original behavior, where we always use known locations. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268709) Time Spent: 50m (was: 40m) > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 50m > Remaining Estimate: 0h > > We need to generate multiple target locations by > HostAffinitySplitLocationProvider, so we will have deterministic fallback > nodes in case the target node is disabled -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?focusedWorklogId=268711&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268711 ] ASF GitHub Bot logged work on HIVE-21910: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:13 Start Date: 27/Jun/19 18:13 Worklog Time Spent: 10m Work Description: odraese commented on pull request #690: HIVE-21910: Multiple target location generation in HostAffinitySplitLocationProvider URL: https://github.com/apache/hive/pull/690#discussion_r298302912 ## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HostAffinitySplitLocationProvider.java ## @@ -72,11 +78,17 @@ public HostAffinitySplitLocationProvider(List knownLocations) { FileSplit fsplit = (FileSplit) split; String splitDesc = "Split at " + fsplit.getPath() + " with offset= " + fsplit.getStart() + ", length=" + fsplit.getLength(); -List preferredLocations = preferLocations(fsplit); -String location = -preferredLocations.get(determineLocation(preferredLocations, fsplit.getPath().toString(), -fsplit.getStart(), splitDesc)); -return (location != null) ? new String[] { location } : null; +List preferredLocations = new ArrayList<>(preferLocations(fsplit)); +List finalLocations = new ArrayList<>(numberOfLocations); +// Generate new preferred locations until we need more, or we do not have any preferred +// location left +while (finalLocations.size() < numberOfLocations && preferredLocations.size() > 0) { + String nextLocation = preferredLocations.get(determineLocation(preferredLocations, + fsplit.getPath().toString(), fsplit.getStart(), splitDesc)); + finalLocations.add(nextLocation); + preferredLocations.remove(nextLocation); Review comment: Instead of having an own array list, from which we remove 1 for each split (which is quite often), we could simply pass down numberOfLocations (and decrement it) to determineLocation and change `int index = Hashing.consistentHash(hash1, locations.size());` to `int index = Hashing.consistentHash(hash1, numLocations);` in determineLocations. Seems to be more efficient. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268711) Time Spent: 50m (was: 40m) > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 50m > Remaining Estimate: 0h > > We need to generate multiple target locations by > HostAffinitySplitLocationProvider, so we will have deterministic fallback > nodes in case the target node is disabled -- This message was sent by Atlassian JIRA (v7.6.3#76005)
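Roughly, the idea of shrinking the bucket count instead of copying and pruning the location list could look like the sketch below. The hash input (split path plus offset) and the swap trick used to exclude already-chosen slots are assumptions of this example, not the patch's actual code.
{code:java}
import com.google.common.hash.Hashing;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Sketch: pick up to k distinct locations for a split without mutating the
// shared location list. The chosen slot is swapped out of the active index
// range, so Hashing.consistentHash() is always called with the number of
// candidates still remaining.
public final class FallbackLocations {
  private FallbackLocations() {
  }

  public static List<String> choose(List<String> locations, String path, long start, int k) {
    int n = locations.size();
    int[] idx = new int[n];
    for (int i = 0; i < n; i++) {
      idx[i] = i;
    }
    // Assumed hash input: split path plus offset.
    long hash = Hashing.murmur3_128().newHasher()
        .putString(path, StandardCharsets.UTF_8)
        .putLong(start)
        .hash().asLong();

    List<String> chosen = new ArrayList<>(Math.min(k, n));
    int remaining = n;
    while (chosen.size() < k && remaining > 0) {
      int bucket = Hashing.consistentHash(hash, remaining);
      chosen.add(locations.get(idx[bucket]));
      // Swap the used slot out of the active range instead of removing it.
      idx[bucket] = idx[--remaining];
      // Re-hash so the next pick does not always land in the same bucket.
      hash = Hashing.murmur3_128().hashLong(hash).asLong();
    }
    return chosen;
  }
}
{code}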
[jira] [Work logged] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?focusedWorklogId=268706&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268706 ] ASF GitHub Bot logged work on HIVE-21910: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:13 Start Date: 27/Jun/19 18:13 Worklog Time Spent: 10m Work Description: odraese commented on pull request #690: HIVE-21910: Multiple target location generation in HostAffinitySplitLocationProvider URL: https://github.com/apache/hive/pull/690#discussion_r298298637 ## File path: llap-tez/src/test/org/apache/hadoop/hive/llap/tezplugins/TestLlapTaskSchedulerService.java ## @@ -946,6 +946,161 @@ public void testForcedLocalityUnknownHost() throws IOException, InterruptedExcep } } + @Test(timeout = 1) Review comment: Minor: 10s seems long for this test. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268706) Time Spent: 20m (was: 10m) > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 20m > Remaining Estimate: 0h > > We need to generate multiple target locations by > HostAffinitySplitLocationProvider, so we will have deterministic fallback > nodes in case the target node is disabled -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?focusedWorklogId=268710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268710 ] ASF GitHub Bot logged work on HIVE-21910: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:13 Start Date: 27/Jun/19 18:13 Worklog Time Spent: 10m Work Description: odraese commented on pull request #690: HIVE-21910: Multiple target location generation in HostAffinitySplitLocationProvider URL: https://github.com/apache/hive/pull/690#discussion_r298304091 ## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HostAffinitySplitLocationProvider.java ## @@ -72,11 +78,17 @@ public HostAffinitySplitLocationProvider(List knownLocations) { FileSplit fsplit = (FileSplit) split; String splitDesc = "Split at " + fsplit.getPath() + " with offset= " + fsplit.getStart() + ", length=" + fsplit.getLength(); -List preferredLocations = preferLocations(fsplit); -String location = -preferredLocations.get(determineLocation(preferredLocations, fsplit.getPath().toString(), -fsplit.getStart(), splitDesc)); -return (location != null) ? new String[] { location } : null; +List preferredLocations = new ArrayList<>(preferLocations(fsplit)); +List finalLocations = new ArrayList<>(numberOfLocations); +// Generate new preferred locations until we need more, or we do not have any preferred +// location left +while (finalLocations.size() < numberOfLocations && preferredLocations.size() > 0) { + String nextLocation = preferredLocations.get(determineLocation(preferredLocations, + fsplit.getPath().toString(), fsplit.getStart(), splitDesc)); + finalLocations.add(nextLocation); + preferredLocations.remove(nextLocation); Review comment: Something else to consider: let's say that we generate multiple locations for each split now, we will run through the determineLocation multiple times (vs. once) to generate locations that most likely are not used later down the code path. This means that we spend more cycles on TezAM, potentially increasing the query execution time there always where the benefits are only seen in rejection cases This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268710) Time Spent: 50m (was: 40m) > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 50m > Remaining Estimate: 0h > > We need to generate multiple target locations by > HostAffinitySplitLocationProvider, so we will have deterministic fallback > nodes in case the target node is disabled -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?focusedWorklogId=268707&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268707 ] ASF GitHub Bot logged work on HIVE-21910: - Author: ASF GitHub Bot Created on: 27/Jun/19 18:13 Start Date: 27/Jun/19 18:13 Worklog Time Spent: 10m Work Description: odraese commented on pull request #690: HIVE-21910: Multiple target location generation in HostAffinitySplitLocationProvider URL: https://github.com/apache/hive/pull/690#discussion_r298297862 ## File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java ## @@ -4440,6 +4440,12 @@ private static void populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal "preferring one of the locations provided by the split itself. If there is no llap daemon " + "running on any of those locations (or on the cloud), fall back to a cache affinity to" + " an LLAP node. This is effective only if hive.execution.mode is llap."), + LLAP_CLIENT_CONSISTENT_SPLITS_NUMBER("hive.llap.client.consistent.splits.number", 1, +"The number of the preferred locations to generate if hive.llap.client.consistent.splits\n" + Review comment: I would remove "preferred" from "preferred locations". See comment in HostAffinitiyLocationProvider below. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268707) Time Spent: 0.5h (was: 20m) > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > We need to generate multiple target locations by > HostAffinitySplitLocationProvider, so we will have deterministic fallback > nodes in case the target node is disabled -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874394#comment-16874394 ] Hive QA commented on HIVE-21910: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 35s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 2s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} llap-tez: The patch generated 2 new + 85 unchanged - 0 fixed = 87 total (was 85) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 2 new + 41 unchanged - 1 fixed = 43 total (was 42) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 18s{color} | {color:red} ql generated 1 new + 2252 unchanged - 1 fixed = 2253 total (was 2253) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Null passed for non-null parameter of new java.util.HashSet(Collection) in new org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProvider(List, int) Method invoked at HostAffinitySplitLocationProvider.java:of new java.util.HashSet(Collection) in new org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProvider(List, int) Method invoked at HostAffinitySplitLocationProvider.java:[line 66] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17774/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17774/yetus/diff-checkstyle-llap-tez.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17774/yetus/diff-checkstyle-ql.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-17774/yetus/new-findbugs-ql.html | | modules | C: common llap-tez ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17774/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HIVE-21437) Vectorization: Decimal64 division with integer columns
[ https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874382#comment-16874382 ] Attila Magyar commented on HIVE-21437: -- Hey [~teddy.choi], I noticed the issue is patch available and the patch is not committed. Is this still work in progress or is it already finished yet uncommitted? > Vectorization: Decimal64 division with integer columns > -- > > Key: HIVE-21437 > URL: https://issues.apache.org/jira/browse/HIVE-21437 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 4.0.0 >Reporter: Gopal V >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, > HIVE-21437.3.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Vectorizer fails for > {code} > CREATE temporary TABLE `catalog_Sales`( > `cs_quantity` int, > `cs_wholesale_cost` decimal(7,2), > `cs_list_price` decimal(7,2), > `cs_sales_price` decimal(7,2), > `cs_ext_discount_amt` decimal(7,2), > `cs_ext_sales_price` decimal(7,2), > `cs_ext_wholesale_cost` decimal(7,2), > `cs_ext_list_price` decimal(7,2), > `cs_ext_tax` decimal(7,2), > `cs_coupon_amt` decimal(7,2), > `cs_ext_ship_cost` decimal(7,2), > `cs_net_paid` decimal(7,2), > `cs_net_paid_inc_tax` decimal(7,2), > `cs_net_paid_inc_ship` decimal(7,2), > `cs_net_paid_inc_ship_tax` decimal(7,2), > `cs_net_profit` decimal(7,2)) > ; > explain vectorization detail select maxcs_ext_list_price - > cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from > catalog_sales; > {code} > {code} > 'Map Vectorization:' > 'enabled: true' > 'enabledConditionsMet: > hive.vectorized.use.vectorized.input.format IS true' > 'inputFileFormats: > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > 'notVectorizedReason: SELECT operator: Could not instantiate > DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], > argument classes: [Integer, Integer, Integer], exception: > java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 > stack trace: > sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown > Source), > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45), > java.lang.reflect.Constructor.newInstance(Constructor.java:423), > org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038), > > 
org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990), > > org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:1963), > ...' > 'vectorized: false' > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21874) Implement add partitions related methods on temporary table
[ https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874363#comment-16874363 ] Hive QA commented on HIVE-21874: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973057/HIVE-18735.05.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16357 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17773/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17773/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17773/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973057 - PreCommit-HIVE-Build > Implement add partitions related methods on temporary table > --- > > Key: HIVE-21874 > URL: https://issues.apache.org/jira/browse/HIVE-21874 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-18735.05.patch, HIVE-21874.01.patch, > HIVE-21874.02.patch, HIVE-21874.03.patch, HIVE-21874.04.patch > > > IMetaStoreClient exposes the following add partition related methods: > {code:java} > Partition add_partition(Partition partition); > int add_partitions(List partitions); > int add_partitions_pspec(PartitionSpecProxy partitionSpec); > List add_partitions(List partitions, boolean > ifNotExists, boolean needResults); > {code} > These methods should be implemented in order to handle addition of partitions > to temporary tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
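For context, a minimal sketch of how a caller would exercise these add-partition methods once they work against a temporary table. buildPartition() is a hypothetical helper, and the database, table, and partition values are made up for the example.
{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;

// Sketch: the same IMetaStoreClient calls listed in the issue, invoked against
// a (hypothetical) temporary table named web_logs.
public final class AddPartitionsExample {
  private AddPartitionsExample() {
  }

  static void addDailyPartitions(IMetaStoreClient client) throws Exception {
    Partition p1 = buildPartition("default", "web_logs", "2019-06-26");
    Partition p2 = buildPartition("default", "web_logs", "2019-06-27");

    // Single partition.
    client.add_partition(p1);

    // Batch add; returns the number of partitions created.
    int added = client.add_partitions(Arrays.asList(p2));

    // Batch add with ifNotExists semantics, optionally returning the created partitions.
    List<Partition> created =
        client.add_partitions(Arrays.asList(p1, p2), true /* ifNotExists */, true /* needResults */);
    System.out.println(added + " added, " + created.size() + " returned");
  }

  // Hypothetical helper: real code would also fill in the StorageDescriptor,
  // location, SerDe info, and column schema.
  private static Partition buildPartition(String db, String table, String dsValue) {
    Partition p = new Partition();
    p.setDbName(db);
    p.setTableName(table);
    p.setValues(Collections.singletonList(dsValue));
    return p;
  }
}
{code}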
[jira] [Updated] (HIVE-21927) HiveServer Web UI: Setting the HttpOnly option in the cookies
[ https://issues.apache.org/jira/browse/HIVE-21927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajkumar Singh updated HIVE-21927: -- Status: Open (was: Patch Available) > HiveServer Web UI: Setting the HttpOnly option in the cookies > - > > Key: HIVE-21927 > URL: https://issues.apache.org/jira/browse/HIVE-21927 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.1 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21927.01.patch, HIVE-21927.patch > > > The intent of this JIRA is to introduce the HttpOnly option in the cookie. > cookie before the change: > {code:java} > hdp32bFALSE / FALSE 0 JSESSIONID > 8dkibwayfnrc4y4hvpu3vh74 > {code} > after the change: > {code:java} > #HttpOnly_hdp32b FALSE / FALSE 0 JSESSIONID > e1npdkbo3inj1xnd6gdc6ihws > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21927) HiveServer Web UI: Setting the HttpOnly option in the cookies
[ https://issues.apache.org/jira/browse/HIVE-21927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajkumar Singh updated HIVE-21927: -- Attachment: HIVE-21927.01.patch Status: Patch Available (was: Open) > HiveServer Web UI: Setting the HttpOnly option in the cookies > - > > Key: HIVE-21927 > URL: https://issues.apache.org/jira/browse/HIVE-21927 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.1 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-21927.01.patch, HIVE-21927.patch > > > The intent of this JIRA is to introduce the HttpOnly option in the cookie. > cookie before the change: > {code:java} > hdp32bFALSE / FALSE 0 JSESSIONID > 8dkibwayfnrc4y4hvpu3vh74 > {code} > after the change: > {code:java} > #HttpOnly_hdp32b FALSE / FALSE 0 JSESSIONID > e1npdkbo3inj1xnd6gdc6ihws > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
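One standard way to get the #HttpOnly_ prefix shown in the cookie dump above is the Servlet 3.0 session-cookie configuration. The sketch below uses the generic javax.servlet API; where exactly this hook would live in the HiveServer2 web UI bootstrap is an assumption of the example, not a description of the attached patch.
{code:java}
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;

// Sketch: mark the JSESSIONID cookie HttpOnly via the standard Servlet 3.0
// SessionCookieConfig, which must be changed while the context is initializing.
public class HttpOnlySessionConfigListener implements ServletContextListener {

  @Override
  public void contextInitialized(ServletContextEvent sce) {
    SessionCookieConfig cookieConfig = sce.getServletContext().getSessionCookieConfig();
    // Hides the session cookie from client-side JavaScript; this is what turns
    // "hdp32b ... JSESSIONID ..." into "#HttpOnly_hdp32b ..." in a cookie dump.
    cookieConfig.setHttpOnly(true);
    // cookieConfig.setSecure(true) would be the HTTPS-only counterpart.
  }

  @Override
  public void contextDestroyed(ServletContextEvent sce) {
    // nothing to clean up
  }
}
{code}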
[jira] [Updated] (HIVE-21930) WINDOW COUNT DISTINCT return wrong value with PARTITION BY
[ https://issues.apache.org/jira/browse/HIVE-21930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor updated HIVE-21930: Component/s: PTF-Windowing > WINDOW COUNT DISTINCT return wrong value with PARTITION BY > -- > > Key: HIVE-21930 > URL: https://issues.apache.org/jira/browse/HIVE-21930 > Project: Hive > Issue Type: Bug > Components: PTF-Windowing >Affects Versions: 3.1.0 > Environment: Beeline version 3.1.0.3.0.1.0-187 by Apache Hive >Reporter: Igor >Priority: Major > Labels: distinct, window_funcion > > count(distinct a) over (partition by b) returns a wrong result. For example: > {code:java} > select p, day, ts > , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number > , count(1) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND > UNBOUNDED FOLLOWING) as lines > , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED > PRECEDING AND UNBOUNDED FOLLOWING) as days > FROM T{code} > The WINDOW specification doesn't affect the results: the same wrong value is returned > with and without a window. > count(1) and count(distinct day) return the same result; the count distinct value is > wrong. > > I've added size(collect_set(day) OVER (PARTITION BY phone)) as days2 and > it returns the correct result. > The following query returns a non-empty result: > {code:java} > select A.*, B.days, B. from ( > select p, day, ts > , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number > , count(1) OVER (PARTITION BY p ROWS BETWEEN UNBOUNDED PRECEDING AND > UNBOUNDED FOLLOWING) as lines > , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED > PRECEDING AND UNBOUNDED FOLLOWING) as days > , size(collect_set(day) OVER (PARTITION BY phone)) as days2 > , dense_rank() over (partition by phone order by day) + dense_rank() over > (partition by phone order by day desc) - 1 as days3 > FROM T ) as A > join ( > select p, day, ts > , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number > , count(1) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND > UNBOUNDED FOLLOWING) as lines > , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED > PRECEDING AND UNBOUNDED FOLLOWING) as days > FROM T > ) as B on A.p=B.p and A.line_number=B.line_number > where A.days!=B.days > order by A.p, A.line_number > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21874) Implement add partitions related methods on temporary table
[ https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874313#comment-16874313 ] Hive QA commented on HIVE-21874: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 59s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17773/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql hbase-handler U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17773/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Implement add partitions related methods on temporary table > --- > > Key: HIVE-21874 > URL: https://issues.apache.org/jira/browse/HIVE-21874 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-18735.05.patch, HIVE-21874.01.patch, > HIVE-21874.02.patch, HIVE-21874.03.patch, HIVE-21874.04.patch > > > IMetaStoreClient exposes the following add partition related methods: > {code:java} > Partition add_partition(Partition partition); > int add_partitions(List partitions); > int add_partitions_pspec(PartitionSpecProxy partitionSpec); > List add_partitions(List partitions, boolean > ifNotExists, boolean needResults); > {code} > These methods should be implemented in order to handle addition of partitions > to temporary tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
[ https://issues.apache.org/jira/browse/HIVE-21880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Bapat updated HIVE-21880: -- Attachment: HIVE-21880.02.patch Status: Patch Available (was: In Progress) The earlier failures indicate that we can not execute LOCK TABLE through JDOQuery.execute() interface. Instead use MetaStoreDirectSql.executeNoResult() whenever possible. > Enable flaky test > TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites. > --- > > Key: HIVE-21880 > URL: https://issues.apache.org/jira/browse/HIVE-21880 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: Sankar Hariappan >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Need tp enable > TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites > which is disabled as it is flaky and randomly failing with below error. > {code} > Error Message > Notification events are missing in the meta store. > Stacktrace > java.lang.IllegalStateException: Notification events are missing in the meta > store. > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) > at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:282) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:289) > at > 
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(TestReplicationScenariosAcidTablesBootstrap.java:328) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.
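For illustration, executing a statement such as LOCK TABLE directly against the backing database, bypassing a JDO query, looks roughly like the sketch below. It only approximates what a direct-SQL "execute with no result" helper has to do; it is not the actual MetaStoreDirectSql.executeNoResult() implementation.
{code:java}
import java.sql.Connection;
import java.sql.Statement;
import javax.jdo.PersistenceManager;
import javax.jdo.datastore.JDOConnection;

// Sketch: borrow the native JDBC connection from the JDO PersistenceManager,
// run a statement that produces no result set, and return the connection.
public final class DirectSqlNoResult {
  private DirectSqlNoResult() {
  }

  static void executeNoResult(PersistenceManager pm, String sql) throws Exception {
    JDOConnection jdoConn = pm.getDataStoreConnection();
    try {
      Connection conn = (Connection) jdoConn.getNativeConnection();
      try (Statement stmt = conn.createStatement()) {
        stmt.execute(sql); // e.g. a LOCK TABLE ... statement
      }
    } finally {
      jdoConn.close(); // release the borrowed connection back to DataNucleus
    }
  }
}
{code}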
[jira] [Updated] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
[ https://issues.apache.org/jira/browse/HIVE-21880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Bapat updated HIVE-21880: -- Status: In Progress (was: Patch Available) > Enable flaky test > TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites. > --- > > Key: HIVE-21880 > URL: https://issues.apache.org/jira/browse/HIVE-21880 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: Sankar Hariappan >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21880.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Need tp enable > TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites > which is disabled as it is flaky and randomly failing with below error. > {code} > Error Message > Notification events are missing in the meta store. > Stacktrace > java.lang.IllegalStateException: Notification events are missing in the meta > store. > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) > at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:282) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:289) > at > org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(TestReplicationScenariosAcidTablesBootstrap.java:328) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassR
[jira] [Updated] (HIVE-21886) REPL - With table list - Handle rename events during replace policy
[ https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21886: --- Status: Patch Available (was: Open) > REPL - With table list - Handle rename events during replace policy > --- > > Key: HIVE-21886 > URL: https://issues.apache.org/jira/browse/HIVE-21886 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21886.01.patch, HIVE-21886.02.patch, > HIVE-21886.03.patch, HIVE-21886.04.patch, HIVE-21886.04.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > If some rename events are found to be dumped and replayed while replace > policy is getting executed, it needs to take care of the policy inclusion in > both the policy for each table name. > 1. Create a list of tables to be bootstrapped. > 2. During handling of alter table, if the alter type is rename > 1. If the old table name is present in the list of table to be > bootstrapped, remove it. > 2. If the new table name, matches the new policy, add it to the list > of tables to be bootstrapped. > 3. If the old table does not match the old policy drop it, even if the > table is not present at target. > 3. During handling of drop table > 1. if the table is in the list of tables to be bootstrapped, then > remove it and ignore the event. > 4. During other event handling > 1. if the table is there in the list of tables to be bootstrapped, > then ignore the event. > 2. If the new policy does not match the table name, then ignore the > event. > > Rename handling during replace policy > # Old name not matching old policy – The old table will not be there at the > target cluster. The table will not be returned by get-all-table. > ## Old name is not matching new policy > ### New name not matching old policy > New name not matching new policy > * Ignore the event, no need to do anything. > New name matching new policy > * The table will be returned by get-all-table. Replace policy handler > will bootstrap this table as its matching new policy and not matching old > policy. > * All the future events will be ignored as part of check added by > replace policy handling. > * All the event with old table name will anyways be ignored as the old > name is not matching the new policy. > ### New name matching old policy > New name not matching new policy > * As the new name is not matching the new policy, the table need not be > replicated. > * As the old name is not matching the new policy, the rename events will > be ignored. > * So nothing to be done for this scenario. > New name matching new policy > * As the new name is matching both old and new policy, replace handler > will not bootstrap the table. > * Add the table to the list of tables to be bootstrapped. > * Ignore all the events with new name. > * If there is a drop event for the table (with new name), then remove > the table from the the list of table to be bootstrapped. > * In case of rename event (double rename) > ** If the new name satisfies the table pattern, then add the new name to > the list of tables to be bootstrapped and remove the old name from the list > of tables to be bootstrapped. > ** If the new name does not satisfies then just removed the table name > from the list of tables to be bootstrapped. 
> ## Old name is matching new policy – As per replace policy handler, which > checks based on old table, the table should be bootstrapped and event should > be ignored. But rename handler should decide based on new name.The old table > name will not be returned by get-all-table, so replace handler will not d > anything for the old table. > ### New name not matching old policy > New name not matching new policy > * As the old table is not there at target and new name is not matching > new policy. Ignore the event. > * No need to add the table to the list of tables to be bootstrapped. > * All the subsequent events will be ignored as the new name is not > matching the new policy. > New name matching new policy > * As the new name is not matching old policy but matching new policy, > the table will be bootstrapped by replace policy handler. So rename event > need not add this table to list of table to be bootstrapped. > * All the future events will be ignored by replace policy handler. > * For rename event (double rename) > ** If there is a rename, the table (with intermittent new name) will not >
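Read as an algorithm, the per-event bookkeeping described above amounts to maintaining a set of tables pending bootstrap while events are replayed. The sketch below is a simplification that assumes old- and new-policy matching are available as predicates; it follows rules 1-4 from the description and ignores the double-rename corner cases spelled out in the scenario list.
{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

// Simplified sketch of the bookkeeping: track tables that must be bootstrapped
// under the new policy and decide per event whether to replay it. This is not
// the actual replication code.
public class BootstrapTableTracker {
  private final Set<String> tablesToBootstrap = new HashSet<>();
  private final Predicate<String> oldPolicy;
  private final Predicate<String> newPolicy;

  public BootstrapTableTracker(Predicate<String> oldPolicy, Predicate<String> newPolicy) {
    this.oldPolicy = oldPolicy;
    this.newPolicy = newPolicy;
  }

  /** Rules 2.1-2.3: returns true when an explicit drop of the old name should be issued. */
  public boolean onRename(String oldName, String newName) {
    // 2.1: a bootstrap pending under the old name is no longer needed.
    tablesToBootstrap.remove(oldName);
    // 2.2: if the new name matches the new policy, bootstrap it under that name.
    if (newPolicy.test(newName)) {
      tablesToBootstrap.add(newName);
    }
    // 2.3: drop the old name when it did not match the old policy,
    // even though it may not exist at the target.
    return !oldPolicy.test(oldName);
  }

  /** Rule 3.1: returns true if the drop event should still be replayed. */
  public boolean onDrop(String tableName) {
    // Dropping a table that was only pending bootstrap cancels the bootstrap.
    return !tablesToBootstrap.remove(tableName);
  }

  /** Rules 4.1-4.2: returns true if any other event on this table should be replayed. */
  public boolean onOtherEvent(String tableName) {
    return !tablesToBootstrap.contains(tableName) && newPolicy.test(tableName);
  }

  public Set<String> pendingBootstrap() {
    return tablesToBootstrap;
  }
}
{code}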
[jira] [Updated] (HIVE-21886) REPL - With table list - Handle rename events during replace policy
[ https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21886: --- Attachment: HIVE-21886.04.patch > REPL - With table list - Handle rename events during replace policy > --- > > Key: HIVE-21886 > URL: https://issues.apache.org/jira/browse/HIVE-21886 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21886.01.patch, HIVE-21886.02.patch, > HIVE-21886.03.patch, HIVE-21886.04.patch, HIVE-21886.04.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > If some rename events are found to be dumped and replayed while replace > policy is getting executed, it needs to take care of the policy inclusion in > both the policy for each table name. > 1. Create a list of tables to be bootstrapped. > 2. During handling of alter table, if the alter type is rename > 1. If the old table name is present in the list of table to be > bootstrapped, remove it. > 2. If the new table name, matches the new policy, add it to the list > of tables to be bootstrapped. > 3. If the old table does not match the old policy drop it, even if the > table is not present at target. > 3. During handling of drop table > 1. if the table is in the list of tables to be bootstrapped, then > remove it and ignore the event. > 4. During other event handling > 1. if the table is there in the list of tables to be bootstrapped, > then ignore the event. > 2. If the new policy does not match the table name, then ignore the > event. > > Rename handling during replace policy > # Old name not matching old policy – The old table will not be there at the > target cluster. The table will not be returned by get-all-table. > ## Old name is not matching new policy > ### New name not matching old policy > New name not matching new policy > * Ignore the event, no need to do anything. > New name matching new policy > * The table will be returned by get-all-table. Replace policy handler > will bootstrap this table as its matching new policy and not matching old > policy. > * All the future events will be ignored as part of check added by > replace policy handling. > * All the event with old table name will anyways be ignored as the old > name is not matching the new policy. > ### New name matching old policy > New name not matching new policy > * As the new name is not matching the new policy, the table need not be > replicated. > * As the old name is not matching the new policy, the rename events will > be ignored. > * So nothing to be done for this scenario. > New name matching new policy > * As the new name is matching both old and new policy, replace handler > will not bootstrap the table. > * Add the table to the list of tables to be bootstrapped. > * Ignore all the events with new name. > * If there is a drop event for the table (with new name), then remove > the table from the the list of table to be bootstrapped. > * In case of rename event (double rename) > ** If the new name satisfies the table pattern, then add the new name to > the list of tables to be bootstrapped and remove the old name from the list > of tables to be bootstrapped. > ** If the new name does not satisfies then just removed the table name > from the list of tables to be bootstrapped. 
> ## Old name is matching new policy – As per replace policy handler, which > checks based on old table, the table should be bootstrapped and event should > be ignored. But rename handler should decide based on new name.The old table > name will not be returned by get-all-table, so replace handler will not d > anything for the old table. > ### New name not matching old policy > New name not matching new policy > * As the old table is not there at target and new name is not matching > new policy. Ignore the event. > * No need to add the table to the list of tables to be bootstrapped. > * All the subsequent events will be ignored as the new name is not > matching the new policy. > New name matching new policy > * As the new name is not matching old policy but matching new policy, > the table will be bootstrapped by replace policy handler. So rename event > need not add this table to list of table to be bootstrapped. > * All the future events will be ignored by replace policy handler. > * For rename event (double rename) > ** If there is a rename, the table (with intermittent new name) will not > be pr
[jira] [Updated] (HIVE-21886) REPL - With table list - Handle rename events during replace policy
[ https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21886: --- Status: Open (was: Patch Available) > REPL - With table list - Handle rename events during replace policy > --- > > Key: HIVE-21886 > URL: https://issues.apache.org/jira/browse/HIVE-21886 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21886.01.patch, HIVE-21886.02.patch, > HIVE-21886.03.patch, HIVE-21886.04.patch, HIVE-21886.04.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > If some rename events are found to be dumped and replayed while replace > policy is getting executed, it needs to take care of the policy inclusion in > both the policy for each table name. > 1. Create a list of tables to be bootstrapped. > 2. During handling of alter table, if the alter type is rename > 1. If the old table name is present in the list of table to be > bootstrapped, remove it. > 2. If the new table name, matches the new policy, add it to the list > of tables to be bootstrapped. > 3. If the old table does not match the old policy drop it, even if the > table is not present at target. > 3. During handling of drop table > 1. if the table is in the list of tables to be bootstrapped, then > remove it and ignore the event. > 4. During other event handling > 1. if the table is there in the list of tables to be bootstrapped, > then ignore the event. > 2. If the new policy does not match the table name, then ignore the > event. > > Rename handling during replace policy > # Old name not matching old policy – The old table will not be there at the > target cluster. The table will not be returned by get-all-table. > ## Old name is not matching new policy > ### New name not matching old policy > New name not matching new policy > * Ignore the event, no need to do anything. > New name matching new policy > * The table will be returned by get-all-table. Replace policy handler > will bootstrap this table as its matching new policy and not matching old > policy. > * All the future events will be ignored as part of check added by > replace policy handling. > * All the event with old table name will anyways be ignored as the old > name is not matching the new policy. > ### New name matching old policy > New name not matching new policy > * As the new name is not matching the new policy, the table need not be > replicated. > * As the old name is not matching the new policy, the rename events will > be ignored. > * So nothing to be done for this scenario. > New name matching new policy > * As the new name is matching both old and new policy, replace handler > will not bootstrap the table. > * Add the table to the list of tables to be bootstrapped. > * Ignore all the events with new name. > * If there is a drop event for the table (with new name), then remove > the table from the the list of table to be bootstrapped. > * In case of rename event (double rename) > ** If the new name satisfies the table pattern, then add the new name to > the list of tables to be bootstrapped and remove the old name from the list > of tables to be bootstrapped. > ** If the new name does not satisfies then just removed the table name > from the list of tables to be bootstrapped. 
> ## Old name is matching new policy – As per replace policy handler, which > checks based on old table, the table should be bootstrapped and event should > be ignored. But rename handler should decide based on new name.The old table > name will not be returned by get-all-table, so replace handler will not d > anything for the old table. > ### New name not matching old policy > New name not matching new policy > * As the old table is not there at target and new name is not matching > new policy. Ignore the event. > * No need to add the table to the list of tables to be bootstrapped. > * All the subsequent events will be ignored as the new name is not > matching the new policy. > New name matching new policy > * As the new name is not matching old policy but matching new policy, > the table will be bootstrapped by replace policy handler. So rename event > need not add this table to list of table to be bootstrapped. > * All the future events will be ignored by replace policy handler. > * For rename event (double rename) > ** If there is a rename, the table (with intermittent new name) will not >
[jira] [Commented] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874286#comment-16874286 ] Jesus Camacho Rodriguez commented on HIVE-21928: [~kgyrtkirk], I do not think the patch I uploaded provides the right solution; I just put it together to run some tests. Basically, with this patch, we will be scaling the ndv for the top-level AND as we were doing before HIVE-20260 went in, which is not what we want either. And I still think there may be some issues with scaling of ndv in the presence of nested ANDs (basically, we would be skipping the scaling of some of the columns). We could revert HIVE-20260 for the time being, but that will cause some regressions too. A proposal to fix this is to rewrite the scaling logic to compute a reduction ratio per column instead of a global one. > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Attachments: HIVE-21928.patch > > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
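As a rough illustration of the per-column reduction ratio idea, here is a sketch under simplified types; this is not the actual StatsRulesProcFactory logic, only the shape of the computation:
{code:java}
// Simplified stand-in for "reduction ratio per column": each column keeps the
// ratio produced by the predicates that actually reference it, instead of all
// columns being scaled by one global AND ratio. Not the real Hive stats code.
import java.util.HashMap;
import java.util.Map;

public class PerColumnNdvScalingSketch {
  public static Map<String, Long> scaleNdvs(Map<String, Long> ndvPerColumn,
      Map<String, Double> reductionPerColumn, long parentRows, long childRows) {
    double globalRatio = parentRows == 0 ? 1.0 : (double) childRows / parentRows;
    Map<String, Long> scaled = new HashMap<>();
    for (Map.Entry<String, Long> e : ndvPerColumn.entrySet()) {
      // Columns constrained by some child of the AND use their own ratio;
      // untouched columns fall back to the global ratio (an assumption here).
      double ratio = reductionPerColumn.getOrDefault(e.getKey(), globalRatio);
      scaled.put(e.getKey(), Math.max(1L, Math.round(e.getValue() * ratio)));
    }
    return scaled;
  }
}
{code}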
[jira] [Commented] (HIVE-21929) Hive on Tez requers explicite set of property hive.tez.container.size
[ https://issues.apache.org/jira/browse/HIVE-21929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874278#comment-16874278 ] Hive QA commented on HIVE-21929: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973056/HIVE-21929.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16357 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17772/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17772/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17772/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973056 - PreCommit-HIVE-Build > Hive on Tez requers explicite set of property hive.tez.container.size > - > > Key: HIVE-21929 > URL: https://issues.apache.org/jira/browse/HIVE-21929 > Project: Hive > Issue Type: Bug >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Attachments: HIVE-21929.1.patch > > > Without the explicit setting of the property {{hive.tez.container.size}} Tez > client submit to the YARN memory size as "-1". After that container creation > is rejected by YARN. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
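For illustration, one way to avoid handing "-1" to YARN is to resolve the container size with an explicit fallback; the helper and the choice of fallback property below are assumptions for this sketch, not necessarily what the attached patch implements:
{code:java}
// Hypothetical fallback so Tez never requests a "-1" sized container from YARN.
import org.apache.hadoop.conf.Configuration;

public class ContainerSizeFallbackSketch {
  public static int resolveContainerSizeMb(Configuration conf) {
    int size = conf.getInt("hive.tez.container.size", -1);
    if (size > 0) {
      return size;                                         // explicitly configured
    }
    return conf.getInt("mapreduce.map.memory.mb", 1024);   // assumed fallback: MR map container size
  }
}
{code}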
[jira] [Commented] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874248#comment-16874248 ] Zoltan Haindrich commented on HIVE-21928: - Yes, I agree - it was only correct for the last term of ANDs; the situation before HIVE-20260 was that it scaled everything down too much... I remember wanting to get back to that clearAffectedColumns() call; I should have placed a FIXME there to alert me later... I think the following continue block should be removed; even though the rowcount is not changed, the affected columns might have changed. Is there any reason I'm not seeing for why we do it? {code} if (evaluatedRowCount == newNumRows) { continue; } {code} > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Attachments: HIVE-21928.patch > > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21912) Implement DisablingDaemonStatisticsHandler
[ https://issues.apache.org/jira/browse/HIVE-21912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21912: -- Attachment: HIVE-21912.wip.patch > Implement DisablingDaemonStatisticsHandler > -- > > Key: HIVE-21912 > URL: https://issues.apache.org/jira/browse/HIVE-21912 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Priority: Major > Attachments: HIVE-21912.wip.patch > > > We should implement a DaemonStatisticsHandler which: > * If a node average response time is bigger than 150% (configurable) of the > other nodes > * If the other nodes has enough empty executors to handle the requests > Then disables the limping node. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
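A minimal sketch of that decision, assuming a simplified per-daemon metrics view; the interface and the constant below are illustrative rather than the actual LLAP/Tez metrics API, with 1.5 mirroring the "150% (configurable)" threshold from the description:
{code:java}
// Illustrative decision logic for the two conditions above.
import java.util.List;

public class DisablingDecisionSketch {
  static final double SLOW_FACTOR = 1.5;

  public static boolean shouldDisable(DaemonMetrics candidate, List<DaemonMetrics> others) {
    if (others.isEmpty()) {
      return false;                                    // never disable the only daemon
    }
    double othersAvg = others.stream()
        .mapToDouble(DaemonMetrics::avgResponseTimeMs).average().orElse(0);
    int freeExecutors = others.stream()
        .mapToInt(DaemonMetrics::emptyExecutors).sum();
    boolean limping = candidate.avgResponseTimeMs() > SLOW_FACTOR * othersAvg;
    boolean othersHaveCapacity = freeExecutors >= candidate.usedExecutors();
    return limping && othersHaveCapacity;              // disable only when both conditions hold
  }

  public interface DaemonMetrics {
    double avgResponseTimeMs();
    int emptyExecutors();
    int usedExecutors();
  }
}
{code}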
[jira] [Commented] (HIVE-21929) Hive on Tez requers explicite set of property hive.tez.container.size
[ https://issues.apache.org/jira/browse/HIVE-21929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874228#comment-16874228 ] Hive QA commented on HIVE-21929: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} ql: The patch generated 0 new + 39 unchanged - 2 fixed = 39 total (was 41) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17772/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17772/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Hive on Tez requers explicite set of property hive.tez.container.size > - > > Key: HIVE-21929 > URL: https://issues.apache.org/jira/browse/HIVE-21929 > Project: Hive > Issue Type: Bug >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Attachments: HIVE-21929.1.patch > > > Without the explicit setting of the property {{hive.tez.container.size}} Tez > client submit to the YARN memory size as "-1". After that container creation > is rejected by YARN. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21930) WINDOW COUNT DISTINCT return wrong value with PARTITION BY
[ https://issues.apache.org/jira/browse/HIVE-21930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor updated HIVE-21930: Description: count(distinct a) over (partition by b) returns a wrong result. For example: {code:java} select p, day, ts , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number , count(1) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lines , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as days FROM T{code} The WINDOW specification doesn't affect the results: they are the same (wrong) with and without the window. count(1) and count(distinct day) return the same result. Count distinct is wrong. I've added size(collect_set(day) OVER (PARTITION BY phone)) as days2, which returns the correct result. The following query returns a non-empty result: {code:java} select A.*, B.days, B. from ( select p, day, ts , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number , count(1) OVER (PARTITION BY p ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lines , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as days , size(collect_set(day) OVER (PARTITION BY phone)) as days2 , dense_rank() over (partition by phone order by day) + dense_rank() over (partition by phone order by day desc) - 1 as days3 FROM T ) as A join ( select p, day, ts , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number , count(1) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lines , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as days FROM T ) as B on A.p=B.p and A.line_number=B.line_number where A.days!=B.days order by A.p, A.line_number {code} was: count(distinct a) over (partiton by b) return wring result. For example: {code:java} select p, day, ts , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number , count(1) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lines , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as days FROM T{code} count(1) and count(distinct day) return the same result. Count distinct is wrong. I've add size(collect_set(day) OVER (PARTITION BY phone)) as days2 and count(distinct return correct result. Following query return non-empty result: {code:java} select A.*, B.days, B. 
from ( select p, day, ts , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number , count(1) OVER (PARTITION BY p ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lines , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as days , size(collect_set(day) OVER (PARTITION BY phone)) as days2 , dense_rank() over (partition by phone order by day) + dense_rank() over (partition by phone order by day desc) - 1 as days3 FROM T ) as A join ( select p, day, ts , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number , count(1) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lines , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as days FROM T ) as B on A.p=B.p and A.line_number=B.line_number where A.days!=B.days order by A.p, A.line_number {code} > WINDOW COUNT DISTINCT return wrong value with PARTITION BY > -- > > Key: HIVE-21930 > URL: https://issues.apache.org/jira/browse/HIVE-21930 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 > Environment: Beeline version 3.1.0.3.0.1.0-187 by Apache Hive >Reporter: Igor >Priority: Major > Labels: distinct, window_funcion > > count(distinct a) over (partiton by b) return wring result. For example: > {code:java} > select p, day, ts > , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number > , count(1) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED PRECEDING AND > UNBOUNDED FOLLOWING) as lines > , count(distinct day) OVER (PARTITION BY phone ROWS BETWEEN UNBOUNDED > PRECEDING AND UNBOUNDED FOLLOWING) as days > FROM T{code} > WINDOW specification doesn't affect on results: same wrong with and without > window. > count(1) and count(distinct day) return the same result. Count distinct is > wrong. > > I've add size(collect_set(day) OVER (PARTITION BY phone)) as days2 and > count(distinct return correct result. > Following query return non-empty result: > {code:java} > select A.*, B.days, B. from ( > select p, day, ts > , row_number() OVER (PARTITION BY phone ORDER BY ts ASC) as line_number > , count(1) OVER (PARTITION BY p ROWS BETWEEN UNBOUNDED PREC
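For reference, the value the reporter expects from count(distinct day) over the unbounded window is simply the number of distinct day values in each phone partition, i.e. what size(collect_set(day)) already returns; a trivial sketch of that expectation (illustration only, not Hive's window evaluator):
{code:java}
// Every row of a phone partition should see the number of distinct day values
// in that partition, which is exactly what size(collect_set(day)) computes.
import java.util.HashSet;
import java.util.List;

public class WindowCountDistinctSketch {
  public static long expectedDays(List<String> daysInPartition) {
    return new HashSet<>(daysInPartition).size();
  }
}
{code}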
[jira] [Commented] (HIVE-21886) REPL - With table list - Handle rename events during replace policy
[ https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874201#comment-16874201 ] Hive QA commented on HIVE-21886: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973052/HIVE-21886.04.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16330 tests executed *Failed tests:* {noformat} TestReplAcrossInstancesWithJsonMessageFormat - did not produce a TEST-*.xml file (likely timed out) (batchId=255) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_2] (batchId=194) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17771/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17771/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17771/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973052 - PreCommit-HIVE-Build > REPL - With table list - Handle rename events during replace policy > --- > > Key: HIVE-21886 > URL: https://issues.apache.org/jira/browse/HIVE-21886 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21886.01.patch, HIVE-21886.02.patch, > HIVE-21886.03.patch, HIVE-21886.04.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > If some rename events are found to be dumped and replayed while replace > policy is getting executed, it needs to take care of the policy inclusion in > both the policy for each table name. > 1. Create a list of tables to be bootstrapped. > 2. During handling of alter table, if the alter type is rename > 1. If the old table name is present in the list of table to be > bootstrapped, remove it. > 2. If the new table name, matches the new policy, add it to the list > of tables to be bootstrapped. > 3. If the old table does not match the old policy drop it, even if the > table is not present at target. > 3. During handling of drop table > 1. if the table is in the list of tables to be bootstrapped, then > remove it and ignore the event. > 4. During other event handling > 1. if the table is there in the list of tables to be bootstrapped, > then ignore the event. > 2. If the new policy does not match the table name, then ignore the > event. > > Rename handling during replace policy > # Old name not matching old policy – The old table will not be there at the > target cluster. The table will not be returned by get-all-table. > ## Old name is not matching new policy > ### New name not matching old policy > New name not matching new policy > * Ignore the event, no need to do anything. > New name matching new policy > * The table will be returned by get-all-table. Replace policy handler > will bootstrap this table as its matching new policy and not matching old > policy. > * All the future events will be ignored as part of check added by > replace policy handling. 
> * All the event with old table name will anyways be ignored as the old > name is not matching the new policy. > ### New name matching old policy > New name not matching new policy > * As the new name is not matching the new policy, the table need not be > replicated. > * As the old name is not matching the new policy, the rename events will > be ignored. > * So nothing to be done for this scenario. > New name matching new policy > * As the new name is matching both old and new policy, replace handler > will not bootstrap the table. > * Add the table to the list of tables to be bootstrapped. > * Ignore all the events with new name. > * If there is a drop event for the table (with new name), then remove > the table from the the list of table to be bootstrapped. > * In case of rename event (double rename) > ** If the new name satisfies the table pattern, then add the new name to > the list of tables to be bootstrapped and remove the old name from the list > of tables to be bootstrapped. > ** If the new name does not satisfies then just removed the t
[jira] [Commented] (HIVE-21886) REPL - With table list - Handle rename events during replace policy
[ https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874161#comment-16874161 ] Hive QA commented on HIVE-21886: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 8s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17771/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql itests/hive-unit U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17771/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > REPL - With table list - Handle rename events during replace policy > --- > > Key: HIVE-21886 > URL: https://issues.apache.org/jira/browse/HIVE-21886 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21886.01.patch, HIVE-21886.02.patch, > HIVE-21886.03.patch, HIVE-21886.04.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > If some rename events are found to be dumped and replayed while replace > policy is getting executed, it needs to take care of the policy inclusion in > both the policy for each table name. > 1. Create a list of tables to be bootstrapped. > 2. During handling of alter table, if the alter type is rename > 1. If the old table name is present in the list of table to be > bootstrapped, remove it. > 2. If the new table name, matches the new policy, add it to the list > of tables to be bootstrapped. > 3. If the old
[jira] [Updated] (HIVE-18735) Create table like loses transactional attribute
[ https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Pinter updated HIVE-18735: - Attachment: HIVE-18735.06.patch > Create table like loses transactional attribute > --- > > Key: HIVE-18735 > URL: https://issues.apache.org/jira/browse/HIVE-18735 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Eugene Koifman >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, > HIVE-18735.03.patch, HIVE-18735.04.patch, HIVE-18735.05.patch, > HIVE-18735.06.patch > > > {noformat} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('transactional'='true')"; > create table T like T1; > show create table T ; > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > > 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1518813564') > {noformat} > Specifying props explicitly does work > {noformat} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('transactional'='true')"; > create table T like T1 TBLPROPERTIES ('transactional'='true'); > show create table T ; > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > > 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518814098564/warehouse/t' > TBLPROPERTIES ( > 'transactional'='true', > 'transactional_properties'='default', > 'transient_lastDdlTime'='1518814111') > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
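Reading the two examples above, the expected behaviour is that CREATE TABLE LIKE carries over the ACID table properties; a hedged sketch of that copy step follows (the helper is illustrative, not the actual DDL task code path, though the property names are the ones shown in the description):
{code:java}
// Sketch of carrying the ACID properties over on CREATE TABLE LIKE.
import java.util.HashMap;
import java.util.Map;

public class CreateTableLikeSketch {
  public static Map<String, String> acidPropertiesToCopy(Map<String, String> sourceProps) {
    Map<String, String> copied = new HashMap<>();
    for (String key : new String[] {"transactional", "transactional_properties"}) {
      if (sourceProps.containsKey(key)) {
        copied.put(key, sourceProps.get(key));   // keep the source table's ACID setting
      }
    }
    return copied;
  }
}
{code}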
[jira] [Work logged] (HIVE-21790) Bump Java to 1.8
[ https://issues.apache.org/jira/browse/HIVE-21790?focusedWorklogId=268521&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268521 ] ASF GitHub Bot logged work on HIVE-21790: - Author: ASF GitHub Bot Created on: 27/Jun/19 13:14 Start Date: 27/Jun/19 13:14 Worklog Time Spent: 10m Work Description: Fokko commented on pull request #692: HIVE-21790 Bump Java to 1.8 URL: https://github.com/apache/hive/pull/692 https://jira.apache.org/jira/browse/HIVE-21790 We're using Hive for reading Parquet files, but we would like to move from gzip to zstandard compression. Currently, the Parquet support of Hive is old because we can't upgrade since Parquet is Java 1.8+. Therefore it is a good idea to upgrade Hive as well. GA support of Java 1.7 is also almost over: https://www.oracle.com/technetwork/java/java-se-support-roadmap.html This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268521) Time Spent: 0.5h (was: 20m) > Bump Java to 1.8 > > > Key: HIVE-21790 > URL: https://issues.apache.org/jira/browse/HIVE-21790 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.3.5 >Reporter: Fokko Driesprong >Assignee: Fokko Driesprong >Priority: Major > Labels: pull-request-available > Attachments: 0001-HIVE-21790-Update-to-Java-1.8.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > We're using Hive for reading Parquet files, but we would like to move from > gzip to zstandard compression. Currently, the Parquet support of Hive is old > because we can't upgrade since Parquet is Java 1.8+. Therefore it is a good > idea to upgrade Hive as well. > GA support of Java 1.7 is also almost over: > https://www.oracle.com/technetwork/java/java-se-support-roadmap.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18735) Create table like loses transactional attribute
[ https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874136#comment-16874136 ] Hive QA commented on HIVE-18735: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973051/HIVE-18735.05.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16325 tests executed *Failed tests:* {noformat} TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed out) (batchId=232) TestObjectStore - did not produce a TEST-*.xml file (likely timed out) (batchId=232) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17770/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17770/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17770/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973051 - PreCommit-HIVE-Build > Create table like loses transactional attribute > --- > > Key: HIVE-18735 > URL: https://issues.apache.org/jira/browse/HIVE-18735 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Eugene Koifman >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, > HIVE-18735.03.patch, HIVE-18735.04.patch, HIVE-18735.05.patch > > > {noformat} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('transactional'='true')"; > create table T like T1; > show create table T ; > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > > 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1518813564') > {noformat} > Specifying props explicitly does work > {noformat} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('transactional'='true')"; > create table T like T1 TBLPROPERTIES ('transactional'='true'); > show create table T ; > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > > 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518814098564/warehouse/t' > TBLPROPERTIES ( > 'transactional'='true', > 'transactional_properties'='default', > 'transient_lastDdlTime'='1518814111') > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21790) Bump Java to 1.8
[ https://issues.apache.org/jira/browse/HIVE-21790?focusedWorklogId=268519&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268519 ] ASF GitHub Bot logged work on HIVE-21790: - Author: ASF GitHub Bot Created on: 27/Jun/19 13:11 Start Date: 27/Jun/19 13:11 Worklog Time Spent: 10m Work Description: Fokko commented on pull request #645: HIVE-21790 Bump Java to 1.8 URL: https://github.com/apache/hive/pull/645 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268519) Time Spent: 20m (was: 10m) > Bump Java to 1.8 > > > Key: HIVE-21790 > URL: https://issues.apache.org/jira/browse/HIVE-21790 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.3.5 >Reporter: Fokko Driesprong >Assignee: Fokko Driesprong >Priority: Major > Labels: pull-request-available > Attachments: 0001-HIVE-21790-Update-to-Java-1.8.patch > > Time Spent: 20m > Remaining Estimate: 0h > > We're using Hive for reading Parquet files, but we would like to move from > gzip to zstandard compression. Currently, the Parquet support of Hive is old > because we can't upgrade since Parquet is Java 1.8+. Therefore it is a good > idea to upgrade Hive as well. > GA support of Java 1.7 is also almost over: > https://www.oracle.com/technetwork/java/java-se-support-roadmap.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21911: -- Attachment: HIVE-21911.patch > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 10m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21911: -- Attachment: (was: HIVE-21911.patch) > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 10m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18735) Create table like loses transactional attribute
[ https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874070#comment-16874070 ] Hive QA commented on HIVE-18735: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 0s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17770/dev-support/hive-personality.sh | | git revision | master / e000e2f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql hbase-handler U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17770/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Create table like loses transactional attribute > --- > > Key: HIVE-18735 > URL: https://issues.apache.org/jira/browse/HIVE-18735 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Eugene Koifman >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, > HIVE-18735.03.patch, HIVE-18735.04.patch, HIVE-18735.05.patch > > > {noformat} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('transactional'='true')"; > create table T like T1; > show create table T ; > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > > 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t' > TBLPROPERTIES ( > 'transient_lastDdlTi
[jira] [Updated] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21911: -- Status: Patch Available (was: Open) [~odraese], [~asinkovits], [~szita]: Could you please review? Thanks, Peter > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 10m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21911: -- Attachment: HIVE-21911.patch > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21911.patch > > Time Spent: 10m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21911: -- Description: We need to have a way to plug in different listeners which act upon the LlapDaemon statistics. This listener should be able to disable / resize the LlapDaemons based on health data. was: We need to have a way to plug in different handlers which act upon the LlapDaemon statistics. This handler should be able to disable / resize the LlapDaemons based on health data. > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > We need to have a way to plug in different listeners which act upon the > LlapDaemon statistics. > This listener should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
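A sketch of what such a pluggable listener could look like; the interface name follows the JIRA summary, but the method signatures and the DaemonHealth view are assumptions rather than the committed API:
{code:java}
// Sketch of a pluggable listener acting on LlapDaemon statistics on the Tez side.
public interface LlapMetricsListener {
  /** Called whenever fresh daemon metrics are collected. */
  void onDaemonMetrics(String daemonId, DaemonHealth health);

  /** Hook where an implementation may disable or resize an unhealthy daemon. */
  void onUnhealthyDaemon(String daemonId, DaemonHealth health);

  interface DaemonHealth {
    double avgResponseTimeMs();
    int executorCount();
  }
}
{code}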
[jira] [Updated] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-21911: -- Labels: pull-request-available (was: ) > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > > We need to have a way to plug in different handlers which act upon the > LlapDaemon statistics. > This handler should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=268461&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268461 ] ASF GitHub Bot logged work on HIVE-21911: - Author: ASF GitHub Bot Created on: 27/Jun/19 11:34 Start Date: 27/Jun/19 11:34 Worklog Time Spent: 10m Work Description: pvary commented on pull request #691: HIVE-21911: Pluggable LlapMetricsListener on Tez side to disable / resize Daemons URL: https://github.com/apache/hive/pull/691 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 268461) Time Spent: 10m Remaining Estimate: 0h > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > We need to have a way to plug in different handlers which act upon the > LlapDaemon statistics. > This handler should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21911: -- Summary: Pluggable LlapMetricsListener on Tez side to disable / resize Daemons (was: Pluggable DaemonStatisticsHandler on Tez side to disable / resize Daemons) > Pluggable LlapMetricsListener on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > > We need to have a way to plug in different handlers which act upon the > LlapDaemon statistics. > This handler should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874017#comment-16874017 ] Hive QA commented on HIVE-21637: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 39s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} storage-api in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 46s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 22s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 20s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} beeline in master has 44 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} standalone-metastore/metastore-tools/metastore-benchmarks in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 49s{color} | {color:blue} itests/util in master has 44 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} storage-api: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 9 new + 498 unchanged - 2 fixed = 507 total (was 500) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 51s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 160 new + 2193 unchanged - 65 fixed = 2353 total (was 2258) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 53s{color} | {color:red} ql: The patch generated 10 new + 970 unchanged - 2 fixed = 980 total (was 972) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} standalone-metastore/metastore-tools/tools-common: The patch generated 5 new + 31 unchanged - 0 fixed = 36 total (was 31) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} itests/hcatalog-unit: The patch generated 2 new + 24 unchanged - 3 fixed = 26 total (was 27) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s{color} | {color:red} itests/hive-unit: The patch generated 3 new + 163 unchanged - 1 fixed = 166 total (was 164) {color} | | {color:red}-1{color} | {color:red} checkstyle
[jira] [Assigned] (HIVE-21911) Pluggable DaemonStatisticsHandler on Tez side to disable / resize Daemons
[ https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary reassigned HIVE-21911: - Assignee: Peter Vary > Pluggable DaemonStatisticsHandler on Tez side to disable / resize Daemons > - > > Key: HIVE-21911 > URL: https://issues.apache.org/jira/browse/HIVE-21911 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > > We need a way to plug in different handlers that act on the > LlapDaemon statistics. > These handlers should be able to disable / resize the LlapDaemons based on > health data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16873968#comment-16873968 ] Hive QA commented on HIVE-21928: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973043/HIVE-21928.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 114 failed/errored test(s), 16341 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_join_pkfk] (batchId=15) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer9] (batchId=7) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_join_breaktask] (batchId=82) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_12] (batchId=1) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[partitions_filter_default] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_gby_join] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join2] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join3] (batchId=20) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_outer_join4] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_exists] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_unqualcolumnrefs] (batchId=20) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join3] (batchId=36) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_join29] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_smb_mapjoin_14] (batchId=177) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=180) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_9] (batchId=179) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[convert_decimal64_to_decimal] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[filter_join_breaktask] (batchId=181) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join46] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin46] (batchId=176) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_llap] (batchId=173) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[semijoin6] (batchId=184) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[semijoin7] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[skewjoin] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_14] (batchId=177) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in] (batchId=178) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in_having] (batchId=176) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=183) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_2] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1] (batchId=181) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_2] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_left_outer_join] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_nullsafe_join] (batchId=185) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_outer_join0] (batchId=177) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_outer_join1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_outer_join2] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_gby2] (batchId=178) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_gby] (batchId=182) org.apache.hadoop.hive.cli.TestM
[jira] [Updated] (HIVE-21846) Create a thread in TezAM which periodically fetches LlapDaemon metrics
[ https://issues.apache.org/jira/browse/HIVE-21846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21846: -- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~asinkovits] for the patch! > Create a thread in TezAM which periodically fetches LlapDaemon metrics > -- > > Key: HIVE-21846 > URL: https://issues.apache.org/jira/browse/HIVE-21846 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Antal Sinkovits >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21846.01.patch, HIVE-21846.02.patch, > HIVE-21846.03.patch > > Time Spent: 1h 50m > Remaining Estimate: 0h > > LlapTaskSchedulerService should start a thread which periodically fetches the > LlapDaemon metrics and stores them in the NodeInfo object. > This should be just the first implementation - later we should find a way > to avoid NxM requests between N TezAMs and M LlapDaemons. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
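A minimal sketch of the periodic fetch loop described in HIVE-21846, built on java.util.concurrent scheduling. The MetricsClient and NodeInfo types below are stand-ins declared only to keep the example self-contained; the real Tez-side classes and method names may differ.
{code:java}
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of periodically fetching LlapDaemon metrics from the AM side.
// MetricsClient and NodeInfo are assumed stand-ins, not the real Hive types.
public class DaemonMetricsCollector {

  // Assumed collaborators, declared only for the sake of the sketch.
  public interface MetricsClient { Object fetchMetrics(String host); }
  public interface NodeInfo { void setMetrics(Object metrics); }

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final Map<String, NodeInfo> nodes;   // per-daemon bookkeeping in the AM
  private final MetricsClient client;          // fetches metrics from one daemon

  public DaemonMetricsCollector(Map<String, NodeInfo> nodes, MetricsClient client) {
    this.nodes = nodes;
    this.client = client;
  }

  public void start(long periodMs) {
    scheduler.scheduleAtFixedRate(this::collectOnce, periodMs, periodMs, TimeUnit.MILLISECONDS);
  }

  private void collectOnce() {
    // One request per daemon per AM: this is the NxM fan-out the description
    // flags as something to improve in a later iteration.
    nodes.forEach((host, info) -> info.setMetrics(client.fetchMetrics(host)));
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
{code}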
[jira] [Updated] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity
[ https://issues.apache.org/jira/browse/HIVE-21907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21907: -- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks for the review [~odraese] and [~szita]! > Add a new LlapDaemon Management API method to set the daemon capacity > - > > Key: HIVE-21907 > URL: https://issues.apache.org/jira/browse/HIVE-21907 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21907.2.patch, HIVE-21907.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Add a new method to the LlapManagementProtocol API that can disable an Llap node. > It would be even better if we could dynamically set the number of executors > and the size of the wait queue. This way we can disable the node by setting them > to 0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
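The issue text does not quote the final signature, so the sketch below only illustrates the intent of HIVE-21907: a capacity setter on the management API, with hypothetical names.
{code:java}
// Hypothetical sketch of the kind of management call the issue asks for;
// the interface and method names are assumptions, not the actual
// LlapManagementProtocol signature.
public interface DaemonCapacityManagement {

  // Dynamically resize a daemon. Setting both values to 0 effectively
  // disables the node, as the description suggests.
  void setCapacity(int numExecutors, int waitQueueSize);
}
{code}
With such a method, a caller could take a node out of rotation with {{setCapacity(0, 0)}} and re-enable it later by restoring the original values.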
[jira] [Commented] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16873949#comment-16873949 ] Peter Vary commented on HIVE-21910: --- [~odraese], [~asinkovits], [~szita]: Could you please review? Thanks, Peter > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 10m > Remaining Estimate: 0h > > We need to generate multiple target locations in > HostAffinitySplitLocationProvider, so that we have deterministic fallback > nodes in case the target node is disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
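One way to get deterministic fallbacks is to derive a starting index from the split and then take the next few hosts in ring order. The sketch below illustrates that idea under assumed names; it is not necessarily the approach taken in the attached patch.
{code:java}
import java.util.ArrayList;
import java.util.List;

// Sketch of producing several deterministic locations per split instead of a
// single one, so a disabled primary host has well-defined fallbacks. The
// scheme (hash once, then walk the host ring) is an assumption for
// illustration, not necessarily what HostAffinitySplitLocationProvider does.
public final class MultiLocationPicker {

  private MultiLocationPicker() {
  }

  public static List<String> pickLocations(String splitKey, List<String> hosts, int count) {
    List<String> result = new ArrayList<>();
    if (hosts.isEmpty()) {
      return result;
    }
    // Deterministic starting point derived from the split identity.
    int start = Math.floorMod(splitKey.hashCode(), hosts.size());
    for (int i = 0; i < count && i < hosts.size(); i++) {
      // First entry is the primary location, the rest are ordered fallbacks.
      result.add(hosts.get((start + i) % hosts.size()));
    }
    return result;
  }
}
{code}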
[jira] [Updated] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider
[ https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21910: -- Status: Patch Available (was: Open) > Multiple target location generation in HostAffinitySplitLocationProvider > > > Key: HIVE-21910 > URL: https://issues.apache.org/jira/browse/HIVE-21910 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21910.patch > > Time Spent: 10m > Remaining Estimate: 0h > > We need to generate multiple target locations in > HostAffinitySplitLocationProvider, so that we have deterministic fallback > nodes in case the target node is disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)