[jira] [Updated] (HIVE-17824) msck repair table should drop the missing partitions from metastore
[ https://issues.apache.org/jira/browse/HIVE-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-17824: --- Attachment: HIVE-17824.01-branch-2.patch > msck repair table should drop the missing partitions from metastore > --- > > Key: HIVE-17824 > URL: https://issues.apache.org/jira/browse/HIVE-17824 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-17824-branch-2.01.patch, > HIVE-17824.01-branch-2.patch, HIVE-17824.1.patch, HIVE-17824.2.patch, > HIVE-17824.3.patch, HIVE-17824.4.patch > > > {{msck repair table }} is often used in environments where new > partitions are loaded as directories on HDFS or S3 and users want to create > the missing partitions in bulk. However, it currently supports only the addition > of missing partitions. If there are any partitions which are present in the > metastore but not on the FileSystem, it should also delete them so that it > truly repairs the table metadata. > We should be careful not to break backward compatibility, so we should > introduce a new config or keyword to support deleting unnecessary > partitions from the metastore. This way users who want the old behavior can > easily turn the new behavior off. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19228) Remove commons-httpclient 3.x usage
[ https://issues.apache.org/jira/browse/HIVE-19228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460099#comment-16460099 ] Aihua Xu commented on HIVE-19228: - +1 for patch-4 pending on the test. > Remove commons-httpclient 3.x usage > --- > > Key: HIVE-19228 > URL: https://issues.apache.org/jira/browse/HIVE-19228 > Project: Hive > Issue Type: Improvement >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19228.1.patch, HIVE-19228.2.patch, > HIVE-19228.3.patch, HIVE-19228.4.patch > > > Commons-httpclient is not supported well anymore. Remove dependency and move > to Apache HTTP client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)
[ https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460093#comment-16460093 ] Sergey Shelukhin commented on HIVE-19258: - Same for HiveQA for now. > add originals support to MM tables (and make the conversion a metadata only > operation) > -- > > Key: HIVE-19258 > URL: https://issues.apache.org/jira/browse/HIVE-19258 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19258.01.patch, HIVE-19258.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)
[ https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19258: Attachment: HIVE-19258.01.patch > add originals support to MM tables (and make the conversion a metadata only > operation) > -- > > Key: HIVE-19258 > URL: https://issues.apache.org/jira/browse/HIVE-19258 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19258.01.patch, HIVE-19258.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19362) enable LLAP cache affinity by default
[ https://issues.apache.org/jira/browse/HIVE-19362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19362: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed. Thanks for the review! > enable LLAP cache affinity by default > - > > Key: HIVE-19362 > URL: https://issues.apache.org/jira/browse/HIVE-19362 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19362.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19363) remove cryptic metrics from LLAP IO output
[ https://issues.apache.org/jira/browse/HIVE-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19363: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed. Thanks for the review! > remove cryptic metrics from LLAP IO output > -- > > Key: HIVE-19363 > URL: https://issues.apache.org/jira/browse/HIVE-19363 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19363.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19311) Partition and bucketing support for “load data” statement
[ https://issues.apache.org/jira/browse/HIVE-19311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460073#comment-16460073 ] Prasanth Jayachandran commented on HIVE-19311: -- One minor comment in RB. Looks good otherwise. > Partition and bucketing support for “load data” statement > - > > Key: HIVE-19311 > URL: https://issues.apache.org/jira/browse/HIVE-19311 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19311.1.patch, HIVE-19311.2.patch, > HIVE-19311.3.patch, HIVE-19311.4.patch, HIVE-19311.5.patch, > HIVE-19311.6.patch, HIVE-19311.7.patch, HIVE-19311.8.patch, HIVE-19311.9.patch > > > Currently, the "load data" statement is very limited. It errors out if any of the > required information is missing, such as partitioning info when the table is partitioned or > appropriate names when the table is bucketed. > It should instead be able to launch an insert job to load the data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19311) Partition and bucketing support for “load data” statement
[ https://issues.apache.org/jira/browse/HIVE-19311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-19311: -- Attachment: HIVE-19311.9.patch > Partition and bucketing support for “load data” statement > - > > Key: HIVE-19311 > URL: https://issues.apache.org/jira/browse/HIVE-19311 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19311.1.patch, HIVE-19311.2.patch, > HIVE-19311.3.patch, HIVE-19311.4.patch, HIVE-19311.5.patch, > HIVE-19311.6.patch, HIVE-19311.7.patch, HIVE-19311.8.patch, HIVE-19311.9.patch > > > Currently, the "load data" statement is very limited. It errors out if any of the > required information is missing, such as partitioning info when the table is partitioned or > appropriate names when the table is bucketed. > It should instead be able to launch an insert job to load the data. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19365) Index on COMPLETED_TXN_COMPONENTS in Metastore RDBMS has different names in different scripts
[ https://issues.apache.org/jira/browse/HIVE-19365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460063#comment-16460063 ] Alan Gates commented on HIVE-19365: --- Patch committed to master. > Index on COMPLETED_TXN_COMPONENTS in Metastore RDBMS has different names in > different scripts > - > > Key: HIVE-19365 > URL: https://issues.apache.org/jira/browse/HIVE-19365 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19365.patch > > > In mysql and mssql install scripts the index is called > COMPLETED_TXN_COMPONENTS_IDX2 Everywhere else it is called > COMPLETED_TXN_COMPONENTS_IDX, which is breaking upgrade scripts for 3.0 to > 3.1 since they don't know which index to update. One name should be chosen > and used everywhere. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19378) "hive.lock.numretries" Is Misleading
[ https://issues.apache.org/jira/browse/HIVE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460059#comment-16460059 ] BELUGA BEHR commented on HIVE-19378: Same for 'hive.unlock.numretries' > "hive.lock.numretries" Is Misleading > > > Key: HIVE-19378 > URL: https://issues.apache.org/jira/browse/HIVE-19378 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Priority: Minor > > Configuration 'hive.lock.numretries' is confusing. It is not actually a > 'retry' count; it is the total number of attempts to make: > > {code:java|title=ZooKeeperHiveLockManager.java} > do { > lastException = null; > tryNum++; > try { > if (tryNum > 1) { > Thread.sleep(sleepTime); > prepareRetry(); > } > ret = lockPrimitive(key, mode, keepAlive, parentCreated, > conflictingLocks); > ... > } while (tryNum < numRetriesForLock); > {code} > From this code you can see that on the first pass through the loop, {{tryNum}} is set to > 1, so if the num*retries* configuration is set to 1, there will > be one attempt in total. With a *retry* value of 1, I would expect one initial > attempt and one additional retry. Please change to: > {code} > while (tryNum <= numRetriesForLock); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
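The off-by-one described in the report above is easy to demonstrate outside Hive. The following standalone sketch is illustrative only (it is not ZooKeeperHiveLockManager's actual code); it counts how many attempts the do/while structure performs under the current exclusive condition versus the proposed inclusive one:

```java
// Sketch of the attempt-counting semantics discussed in HIVE-19378.
// Names and structure are illustrative, not Hive's actual fields.
public class RetryCountDemo {
    // Mirrors the do/while shape: tryNum is incremented at the top of the
    // loop, so the very first pass already has tryNum == 1.
    static int attempts(int numRetriesForLock, boolean inclusive) {
        int tryNum = 0;
        int attempts = 0;
        do {
            tryNum++;
            attempts++; // stand-in for the lockPrimitive(...) call
        } while (inclusive ? tryNum <= numRetriesForLock
                           : tryNum < numRetriesForLock);
        return attempts;
    }

    public static void main(String[] args) {
        // Current behavior ("tryNum < numRetries"): a setting of 1 yields a
        // single attempt, i.e. zero actual retries.
        System.out.println(attempts(1, false)); // prints 1
        // Proposed behavior ("tryNum <= numRetries"): one initial attempt
        // plus one retry, matching the intuitive reading of "numretries".
        System.out.println(attempts(1, true));  // prints 2
    }
}
```

This makes concrete why the config name is misleading: with the exclusive condition, the value behaves as a total attempt count rather than a retry count.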
[jira] [Updated] (HIVE-19354) from_utc_timestamp returns incorrect results for datetime values with timezone
[ https://issues.apache.org/jira/browse/HIVE-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-19354: Status: Patch Available (was: Open) > from_utc_timestamp returns incorrect results for datetime values with timezone > -- > > Key: HIVE-19354 > URL: https://issues.apache.org/jira/browse/HIVE-19354 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Bruce Robbins >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19354.01.patch > > > On the master branch, from_utc_timestamp returns incorrect results for > datetime strings that contain a timezone: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > 2000-10-09 10:00:00 > Time taken: 0.294 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.121 seconds, Fetched: 1 row(s) > hive> > {noformat} > Both inputs are 2000-10-10 00:00:00 in UTC time, but I got two different > results. > In version 2.3.3, from_utc_timestamp doesn't accept timezones in its input > strings, so it does not have this bug: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > NULL > Time taken: 5.152 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.069 seconds, Fetched: 1 row(s) > hive> > {noformat} > Since the function is expecting a UTC datetime value, it probably should > continue to reject input that contains a timezone component. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19335) Disable runtime filtering (semijoin reduction opt with bloomfilter) for external tables
[ https://issues.apache.org/jira/browse/HIVE-19335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460049#comment-16460049 ] Hive QA commented on HIVE-19335: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 58s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | 
{color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10608/dev-support/hive-personality.sh | | git revision | master / c8f0513 | | Default Java | 1.8.0_111 | | modules | C: common itests ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10608/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Disable runtime filtering (semijoin reduction opt with bloomfilter) for > external tables > --- > > Key: HIVE-19335 > URL: https://issues.apache.org/jira/browse/HIVE-19335 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19335.1.patch > > > Even with good stats runtime filtering can cause issues, if they are out of > date things are even worse. Disable by default for external tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19354) from_utc_timestamp returns incorrect results for datetime values with timezone
[ https://issues.apache.org/jira/browse/HIVE-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460046#comment-16460046 ] Bharathkrishna Guruvayoor Murali commented on HIVE-19354: - These are the outputs expected after the changes in this patch: {code:java} 0: jdbc:hive2://localhost:1/default> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); [..] ++ | _c0 | ++ | 2000-10-09 17:00:00.0 | ++ 1 row selected (0.25 seconds) 0: jdbc:hive2://localhost:1/default> select from_utc_timestamp('2000-10-10 00:00:00+00:00', 'America/Los_Angeles'); [..] ++ | _c0 | ++ | 2000-10-09 17:00:00.0 | ++ 1 row selected (0.106 seconds) 0: jdbc:hive2://localhost:1/default> select from_utc_timestamp('2000-10-10 00:00:00+03:00', 'America/Los_Angeles'); [..] ++ | _c0 | ++ | 2000-10-09 14:00:00.0 | ++ 1 row selected (0.11 seconds) {code} Observe that the outputs of the 1st and 2nd queries are the same. For the 3rd query, the input is interpreted as follows: 2000-10-10 00:00:00+03:00 in UTC is 2000-10-09 21:00 (because our input is expected to be in UTC, hence +03:00 means UTC plus 3 hours).
Hence, when we convert it to the America/Los_Angeles timezone (which is UTC-7 on that date), it will be 2000-10-09 14:00:00.0 > from_utc_timestamp returns incorrect results for datetime values with timezone > -- > > Key: HIVE-19354 > URL: https://issues.apache.org/jira/browse/HIVE-19354 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Bruce Robbins >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19354.01.patch > > > On the master branch, from_utc_timestamp returns incorrect results for > datetime strings that contain a timezone: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > 2000-10-09 10:00:00 > Time taken: 0.294 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.121 seconds, Fetched: 1 row(s) > hive> > {noformat} > Both inputs are 2000-10-10 00:00:00 in UTC time, but I got two different > results. > In version 2.3.3, from_utc_timestamp doesn't accept timezones in its input > strings, so it does not have this bug: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > NULL > Time taken: 5.152 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.069 seconds, Fetched: 1 row(s) > hive> > {noformat} > Since the function is expecting a UTC datetime value, it probably should > continue to reject input that contains a timezone component. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
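The arithmetic in the comment above can be sanity-checked with standard java.time. This is an independent sketch under the patch's stated interpretation (honor the input's offset, then convert the instant to the target zone); it is not Hive's actual from_utc_timestamp implementation:

```java
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class UtcOffsetDemo {
    // Parses a "yyyy-MM-dd HH:mm:ss+HH:MM" string, honoring its offset,
    // and renders the same instant in the target timezone.
    static String toZone(String input, String zone) {
        DateTimeFormatter in = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ssxxx");
        OffsetDateTime odt = OffsetDateTime.parse(input, in);
        ZonedDateTime target = odt.atZoneSameInstant(ZoneId.of(zone));
        return target.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    public static void main(String[] args) {
        // 00:00+03:00 is 2000-10-09 21:00 UTC; Los Angeles observes PDT
        // (UTC-7) on that date, giving 14:00 -- matching the comment above.
        System.out.println(toZone("2000-10-10 00:00:00+03:00", "America/Los_Angeles")); // 2000-10-09 14:00:00
        // A +00:00 offset is a no-op, so this matches the offset-free case.
        System.out.println(toZone("2000-10-10 00:00:00+00:00", "America/Los_Angeles")); // 2000-10-09 17:00:00
    }
}
```

The same check confirms why the pre-patch master output of 10:00 for the +00:00 input was wrong: no combination of UTC input and a Los Angeles target zone produces that value for this instant.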
[jira] [Assigned] (HIVE-19336) Disable SMB/Bucketmap join for external tables
[ https://issues.apache.org/jira/browse/HIVE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere reassigned HIVE-19336: - Assignee: Jason Dere > Disable SMB/Bucketmap join for external tables > -- > > Key: HIVE-19336 > URL: https://issues.apache.org/jira/browse/HIVE-19336 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19336.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19336) Disable SMB/Bucketmap join for external tables
[ https://issues.apache.org/jira/browse/HIVE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19336: -- Attachment: HIVE-19336.1.patch > Disable SMB/Bucketmap join for external tables > -- > > Key: HIVE-19336 > URL: https://issues.apache.org/jira/browse/HIVE-19336 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19336.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19336) Disable SMB/Bucketmap join for external tables
[ https://issues.apache.org/jira/browse/HIVE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460043#comment-16460043 ] Jason Dere commented on HIVE-19336: --- RB at https://reviews.apache.org/r/66887/ > Disable SMB/Bucketmap join for external tables > -- > > Key: HIVE-19336 > URL: https://issues.apache.org/jira/browse/HIVE-19336 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Priority: Major > Attachments: HIVE-19336.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460039#comment-16460039 ] Alan Gates commented on HIVE-19135: --- Patch 4 committed to master. [~vgarg] I would like to commit this to branch-3 as it is intended for use as part of Hive 2->3 upgrades. > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, > HIVE-19135.4.patch, HIVE19135.patch > > > As part of upgrading to Hive 3 admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19354) from_utc_timestamp returns incorrect results for datetime values with timezone
[ https://issues.apache.org/jira/browse/HIVE-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460026#comment-16460026 ] Bruce Robbins commented on HIVE-19354: -- [~bharos92] Also, I bet to_utc_timestamp has a similar issue on the master branch (compared to a released version), although I have not tested it out. > from_utc_timestamp returns incorrect results for datetime values with timezone > -- > > Key: HIVE-19354 > URL: https://issues.apache.org/jira/browse/HIVE-19354 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Bruce Robbins >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19354.01.patch > > > On the master branch, from_utc_timestamp returns incorrect results for > datetime strings that contain a timezone: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > 2000-10-09 10:00:00 > Time taken: 0.294 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.121 seconds, Fetched: 1 row(s) > hive> > {noformat} > Both inputs are 2000-10-10 00:00:00 in UTC time, but I got two different > results. > In version 2.3.3, from_utc_timestamp doesn't accept timezones in its input > strings, so it does not have this bug: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > NULL > Time taken: 5.152 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.069 seconds, Fetched: 1 row(s) > hive> > {noformat} > Since the function is expecting a UTC datetime value, it probably should > continue to reject input that contains a timezone component. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19304) Update templates.py based on config changes in YARN-7142 and YARN-8122
[ https://issues.apache.org/jira/browse/HIVE-19304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19304: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Committed to master. Thanks for the patch! > Update templates.py based on config changes in YARN-7142 and YARN-8122 > -- > > Key: HIVE-19304 > URL: https://issues.apache.org/jira/browse/HIVE-19304 > Project: Hive > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19304.001.patch, HIVE-19304.01.patch > > > Now that YARN-7142 is committed and YARN-8122 will be committed soon, we need > to update templates.py based on config changes for placement policy and > health threshold monitor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables
[ https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460018#comment-16460018 ] Prasanth Jayachandran commented on HIVE-19327: -- looks good to me too. > qroupby_rollup_empty.q fails for insert-only transactional tables > - > > Key: HIVE-19327 > URL: https://issues.apache.org/jira/browse/HIVE-19327 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19327.01.patch, HIVE-19327.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables
[ https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460008#comment-16460008 ] Steve Yeom commented on HIVE-19327: --- Wow. Thanks, [~sershe], for your review! > qroupby_rollup_empty.q fails for insert-only transactional tables > - > > Key: HIVE-19327 > URL: https://issues.apache.org/jira/browse/HIVE-19327 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19327.01.patch, HIVE-19327.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19120) catalog not properly set for some tables in SQL upgrade scripts
[ https://issues.apache.org/jira/browse/HIVE-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460007#comment-16460007 ] Hive QA commented on HIVE-19120: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921331/HIVE-19120.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 34 failed/errored test(s), 14270 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10605/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10605/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10605/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 34 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12921331 - PreCommit-HIVE-Build > catalog not properly set for some tables in SQL upgrade scripts > --- > > Key: HIVE-19120 > URL: https://issues.apache.org/jira/browse/HIVE-19120 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19120.patch > > > A catalog column
[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables
[ https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460005#comment-16460005 ] Sergey Shelukhin commented on HIVE-19327: - Sorry missed the previous update. +1 [~prasanth_j] do you have any other comments? > qroupby_rollup_empty.q fails for insert-only transactional tables > - > > Key: HIVE-19327 > URL: https://issues.apache.org/jira/browse/HIVE-19327 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19327.01.patch, HIVE-19327.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19371) Add table ownerType to HMS thrift API
[ https://issues.apache.org/jira/browse/HIVE-19371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-19371: --- Fix Version/s: 3.1.0 Status: Patch Available (was: In Progress) > Add table ownerType to HMS thrift API > - > > Key: HIVE-19371 > URL: https://issues.apache.org/jira/browse/HIVE-19371 > Project: Hive > Issue Type: Sub-task > Components: Hive, Metastore >Affects Versions: 3.0.0 >Reporter: Sergio Peña >Assignee: Sergio Peña >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19371.1.patch > > > Subtask that adds the ownerType field to the Table object of the HMS Thrift > API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19371) Add table ownerType to HMS thrift API
[ https://issues.apache.org/jira/browse/HIVE-19371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-19371: --- Attachment: HIVE-19371.1.patch > Add table ownerType to HMS thrift API > - > > Key: HIVE-19371 > URL: https://issues.apache.org/jira/browse/HIVE-19371 > Project: Hive > Issue Type: Sub-task > Components: Hive, Metastore >Affects Versions: 3.0.0 >Reporter: Sergio Peña >Assignee: Sergio Peña >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19371.1.patch > > > Subtasks that adds the ownerType field to the Table object of the HMS Thrift > API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables
[ https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1645#comment-1645 ] Steve Yeom commented on HIVE-19327: --- Hi [~prasanth_j] the 3 failed tests of age 1 are clear in my environment. Could you look at the patch 2? Thanks, Steve. > qroupby_rollup_empty.q fails for insert-only transactional tables > - > > Key: HIVE-19327 > URL: https://issues.apache.org/jira/browse/HIVE-19327 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19327.01.patch, HIVE-19327.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19354) from_utc_timestamp returns incorrect results for datetime values with timezone
[ https://issues.apache.org/jira/browse/HIVE-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459987#comment-16459987 ] Bruce Robbins commented on HIVE-19354: -- [~bharos92] Would it be better to reject the input, since from_utc_timestamp already has a fixed timezone for its input (UTC)? Specifying another timezone in the input might not make sense. I'm no expert, just weighing in. > from_utc_timestamp returns incorrect results for datetime values with timezone > -- > > Key: HIVE-19354 > URL: https://issues.apache.org/jira/browse/HIVE-19354 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Bruce Robbins >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19354.01.patch > > > On the master branch, from_utc_timestamp returns incorrect results for > datetime strings that contain a timezone: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > 2000-10-09 10:00:00 > Time taken: 0.294 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.121 seconds, Fetched: 1 row(s) > hive> > {noformat} > Both inputs are 2000-10-10 00:00:00 in UTC time, but I got two different > results. > In version 2.3.3, from_utc_timestamp doesn't accept timezones in its input > strings, so it does not have this bug: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > NULL > Time taken: 5.152 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.069 seconds, Fetched: 1 row(s) > hive> > {noformat} > Since the function is expecting a UTC datetime value, it probably should > continue to reject input that contains a timezone component. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19334) Use actual file size rather than stats for fetch task optimization with external tables
[ https://issues.apache.org/jira/browse/HIVE-19334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459982#comment-16459982 ] Jason Dere commented on HIVE-19334: --- uploading same patch to get precommit tests to run. > Use actual file size rather than stats for fetch task optimization with > external tables > --- > > Key: HIVE-19334 > URL: https://issues.apache.org/jira/browse/HIVE-19334 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19334.1.patch, HIVE-19334.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19334) Use actual file size rather than stats for fetch task optimization with external tables
[ https://issues.apache.org/jira/browse/HIVE-19334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19334: -- Attachment: HIVE-19334.2.patch > Use actual file size rather than stats for fetch task optimization with > external tables > --- > > Key: HIVE-19334 > URL: https://issues.apache.org/jira/browse/HIVE-19334 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19334.1.patch, HIVE-19334.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19354) from_utc_timestamp returns incorrect results for datetime values with timezone
[ https://issues.apache.org/jira/browse/HIVE-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459981#comment-16459981 ] Bharathkrishna Guruvayoor Murali commented on HIVE-19354: - I made a fix for this issue and uploading the patch to run precommit tests. The issue here is that when we pass a UTC string with timezone, in the line [https://github.com/apache/hive/blob/41de95318d80df282fbed17ede6b3a05f649cce9/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java#L1264] the Timestamp.from() method ignores the timezone as timestamp does not really have a concept of timezone. Hence, the change in my patch is to change the method to accept a ZoneId, and if ZoneId is present, convert the Instant to a LocalDateTime which will represent the date and time according to the ZoneId. Now, when we convert it to TimeStamp, it will be in accordance with the timezone we need. > from_utc_timestamp returns incorrect results for datetime values with timezone > -- > > Key: HIVE-19354 > URL: https://issues.apache.org/jira/browse/HIVE-19354 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Bruce Robbins >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19354.01.patch > > > On the master branch, from_utc_timestamp returns incorrect results for > datetime strings that contain a timezone: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > 2000-10-09 10:00:00 > Time taken: 0.294 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.121 seconds, Fetched: 1 row(s) > hive> > {noformat} > Both inputs are 2000-10-10 00:00:00 in UTC time, but I got two different > results. 
> In version 2.3.3, from_utc_timestamp doesn't accept timezones in its input > strings, so it does not have this bug: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > NULL > Time taken: 5.152 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.069 seconds, Fetched: 1 row(s) > hive> > {noformat} > Since the function is expecting a UTC datetime value, it probably should > continue to reject input that contains a timezone component. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
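As a side note for readers of this thread, the fix described in the comment above (interpret the parsed instant in its originating zone before building the zone-less java.sql.Timestamp) can be sketched with plain java.time types. `toTimestamp` below is a hypothetical stand-in for illustration only, not the actual PrimitiveObjectInspectorUtils code:

```java
import java.sql.Timestamp;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class UtcTimestampSketch {
    // Hypothetical helper mirroring the described fix: when the parsed input
    // carried a zone or offset, render the instant in that zone first and
    // only then build the zone-less java.sql.Timestamp, instead of letting
    // the conversion silently drop the offset.
    public static Timestamp toTimestamp(Instant instant, ZoneId zone) {
        LocalDateTime local = LocalDateTime.ofInstant(instant, zone);
        return Timestamp.valueOf(local);
    }

    public static void main(String[] args) {
        // '2000-10-10 00:00:00+00:00' parsed as an Instant
        Instant parsed = Instant.parse("2000-10-10T00:00:00Z");
        // interpreting it at its own offset preserves the UTC wall-clock value
        System.out.println(toTimestamp(parsed, ZoneOffset.UTC)); // 2000-10-10 00:00:00.0
    }
}
```

With this shape, the wall-clock value printed matches the UTC input, which is the behavior the patch aims for.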
[jira] [Updated] (HIVE-19354) from_utc_timestamp returns incorrect results for datetime values with timezone
[ https://issues.apache.org/jira/browse/HIVE-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-19354: Attachment: HIVE-19354.01.patch > from_utc_timestamp returns incorrect results for datetime values with timezone > -- > > Key: HIVE-19354 > URL: https://issues.apache.org/jira/browse/HIVE-19354 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Bruce Robbins >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19354.01.patch > > > On the master branch, from_utc_timestamp returns incorrect results for > datetime strings that contain a timezone: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > 2000-10-09 10:00:00 > Time taken: 0.294 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.121 seconds, Fetched: 1 row(s) > hive> > {noformat} > Both inputs are 2000-10-10 00:00:00 in UTC time, but I got two different > results. > In version 2.3.3, from_utc_timestamp doesn't accept timezones in its input > strings, so it does not have this bug: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > NULL > Time taken: 5.152 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.069 seconds, Fetched: 1 row(s) > hive> > {noformat} > Since the function is expecting a UTC datetime value, it probably should > continue to reject input that contains a timezone component. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19354) from_utc_timestamp returns incorrect results for datetime values with timezone
[ https://issues.apache.org/jira/browse/HIVE-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali reassigned HIVE-19354: --- Assignee: Bharathkrishna Guruvayoor Murali > from_utc_timestamp returns incorrect results for datetime values with timezone > -- > > Key: HIVE-19354 > URL: https://issues.apache.org/jira/browse/HIVE-19354 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Bruce Robbins >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > > On the master branch, from_utc_timestamp returns incorrect results for > datetime strings that contain a timezone: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > 2000-10-09 10:00:00 > Time taken: 0.294 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.121 seconds, Fetched: 1 row(s) > hive> > {noformat} > Both inputs are 2000-10-10 00:00:00 in UTC time, but I got two different > results. > In version 2.3.3, from_utc_timestamp doesn't accept timezones in its input > strings, so it does not have this bug: > {noformat} > hive> select from_utc_timestamp('2000-10-10 00:00:00+00:00', > 'America/Los_Angeles'); > OK > NULL > Time taken: 5.152 seconds, Fetched: 1 row(s) > hive> select from_utc_timestamp('2000-10-10 00:00:00', 'America/Los_Angeles'); > OK > 2000-10-09 17:00:00 > Time taken: 0.069 seconds, Fetched: 1 row(s) > hive> > {noformat} > Since the function is expecting a UTC datetime value, it probably should > continue to reject input that contains a timezone component. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19376) Statistics: switch to 10bit HLL by default for Hive
[ https://issues.apache.org/jira/browse/HIVE-19376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459955#comment-16459955 ] Prasanth Jayachandran commented on HIVE-19376: -- +1 > Statistics: switch to 10bit HLL by default for Hive > --- > > Key: HIVE-19376 > URL: https://issues.apache.org/jira/browse/HIVE-19376 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Gopal V >Assignee: Gopal V >Priority: Major > Attachments: HIVE-19376.1.patch > > > This reduces the memory usage for the metastore cache and the size of > bit-vectors in the DB by 16x. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19369) Locks: Add new lock implementations for always zero-wait readers
[ https://issues.apache.org/jira/browse/HIVE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459938#comment-16459938 ] Gopal V commented on HIVE-19369: That means the 2nd part of that is a codepath which already exists (i.e drop_excl -> fail readers), which further reduces the code that is necessary for this ticket. > Locks: Add new lock implementations for always zero-wait readers > > > Key: HIVE-19369 > URL: https://issues.apache.org/jira/browse/HIVE-19369 > Project: Hive > Issue Type: Improvement >Reporter: Gopal V >Priority: Major > > Hive Locking with Micro-managed and full-ACID tables needs a better locking > implementation which allows for no-wait readers always. > EXCL_DROP > EXCL_WRITE > SHARED_WRITE > SHARED_READ > Short write-up > EXCL_DROP is a "drop partition" or "drop table" and waits for all others to > exit > EXCL_WRITE excludes all writes and will wait for all existing SHARED_WRITE to > exit. > SHARED_WRITE allows all SHARED_WRITES to go through, but will wait for an > EXCL_WRITE & EXCL_DROP (waiting so that you can do drop + insert in different > threads). > SHARED_READ does not wait for any lock - it fails fast for a pending > EXCL_DROP, because even if there is an EXCL_WRITE or SHARED_WRITE pending, > there's no semantic reason to wait for them to succeed before going ahead > with a SHARED_WRITE. > a select * => SHARED_READ > an insert into => SHARED_WRITE > an insert overwrite or MERGE => EXCL_WRITE > a drop table => EXCL_DROP > TODO: > The fate of the compactor needs to be added to this before it is a complete > description. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-6980) Drop table by using direct sql
[ https://issues.apache.org/jira/browse/HIVE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459928#comment-16459928 ] Alexander Kolbasov commented on HIVE-6980: -- Do you know which part consumes so much time before the fix? > Drop table by using direct sql > -- > > Key: HIVE-6980 > URL: https://issues.apache.org/jira/browse/HIVE-6980 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 0.12.0 >Reporter: Selina Zhang >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-6980.2.patch, HIVE-6980.patch > > > Dropping a table which has lots of partitions is slow. Even after applying the > patch of HIVE-6265, the drop table still takes hours (100K+ partitions). > The fix comes in two parts: > 1. use directSQL to query the partitions' protect mode; > the current implementation needs to transfer the Partition object to the client > and check the protect mode for each partition. I'd like to move this part of the > logic to the metastore. The check will be done by direct sql (if direct sql is > disabled, execute the same logic in the ObjectStore); > 2. use directSQL to drop partitions for the table; > there may be two solutions here: > 1. add "DELETE CASCADE" in the schema. In this way we only need to delete > entries from the partitions table using direct sql. May need to change > datanucleus.deletionPolicy = DataNucleus. > 2. clean up the dependent tables by issuing DELETE statements. This also needs > datanucleus.query.sql.allowAll to be turned on. > Both of the above solutions should be able to fix the problem. The DELETE CASCADE > approach has to change schemas and prepare upgrade scripts. The second solution adds > maintenance cost if new tables are added in future releases. > Please advise. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19369) Locks: Add new lock implementations for always zero-wait readers
[ https://issues.apache.org/jira/browse/HIVE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459921#comment-16459921 ] Eugene Koifman commented on HIVE-19369: --- if you want readers to die rather than wait you can just set the lock acquisition retry count to 0 > Locks: Add new lock implementations for always zero-wait readers > > > Key: HIVE-19369 > URL: https://issues.apache.org/jira/browse/HIVE-19369 > Project: Hive > Issue Type: Improvement >Reporter: Gopal V >Priority: Major > > Hive Locking with Micro-managed and full-ACID tables needs a better locking > implementation which allows for no-wait readers always. > EXCL_DROP > EXCL_WRITE > SHARED_WRITE > SHARED_READ > Short write-up > EXCL_DROP is a "drop partition" or "drop table" and waits for all others to > exit > EXCL_WRITE excludes all writes and will wait for all existing SHARED_WRITE to > exit. > SHARED_WRITE allows all SHARED_WRITES to go through, but will wait for an > EXCL_WRITE & EXCL_DROP (waiting so that you can do drop + insert in different > threads). > SHARED_READ does not wait for any lock - it fails fast for a pending > EXCL_DROP, because even if there is an EXCL_WRITE or SHARED_WRITE pending, > there's no semantic reason to wait for them to succeed before going ahead > with a SHARED_WRITE. > a select * => SHARED_READ > an insert into => SHARED_WRITE > an insert overwrite or MERGE => EXCL_WRITE > a drop table => EXCL_DROP > TODO: > The fate of the compactor needs to be added to this before it is a complete > description. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
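The compatibility rules in the ticket's short write-up can be encoded as a small decision table. The sketch below is a toy paraphrase of that prose for illustration — the "acquire"/"wait"/"fail" outcomes follow the write-up's wording, not Hive's actual lock-manager code, and the EXCL_WRITE-vs-reader case is an assumption where the write-up is silent:

```java
// Toy encoding of the proposed zero-wait-reader lock semantics.
public class LockSketch {
    public enum LockType { SHARED_READ, SHARED_WRITE, EXCL_WRITE, EXCL_DROP }

    // What a newly requested lock does when a lock of type `held` is present.
    public static String action(LockType requested, LockType held) {
        switch (requested) {
            case SHARED_READ:
                // readers never wait: fail fast only on a pending EXCL_DROP
                return held == LockType.EXCL_DROP ? "fail" : "acquire";
            case SHARED_WRITE:
                // compatible with readers and other shared writers,
                // waits for EXCL_WRITE and EXCL_DROP
                return (held == LockType.SHARED_READ || held == LockType.SHARED_WRITE)
                        ? "acquire" : "wait";
            case EXCL_WRITE:
                // excludes all other writes; waits for existing SHARED_WRITEs
                // (assumed compatible with plain readers)
                return held == LockType.SHARED_READ ? "acquire" : "wait";
            default:
                // EXCL_DROP waits for everyone else to exit
                return "wait";
        }
    }

    public static void main(String[] args) {
        System.out.println(action(LockType.SHARED_READ, LockType.EXCL_WRITE));    // acquire
        System.out.println(action(LockType.SHARED_READ, LockType.EXCL_DROP));     // fail
        System.out.println(action(LockType.SHARED_WRITE, LockType.SHARED_WRITE)); // acquire
    }
}
```

The first line shows the key behavioral change under discussion: a SHARED_READ proceeds even past a pending EXCL_WRITE, rather than queueing behind it.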
[jira] [Commented] (HIVE-18570) ACID IOW implemented using base may delete too much data
[ https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459916#comment-16459916 ] Eugene Koifman commented on HIVE-18570: --- patch 2 fixes TestDbTxnManager2.testShowTablesLock which was not caused by these changes but was surfaced by them (the drop table command at start of test was wrong). > ACID IOW implemented using base may delete too much data > > > Key: HIVE-18570 > URL: https://issues.apache.org/jira/browse/HIVE-18570 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-18570.01-branch-3.patch, HIVE-18570.01.patch, > HIVE-18570.02-branch-3.patch > > > Suppose we have a table with delta_0 insert data. > Txn 1 starts an insert into delta_1. > Txn 2 starts an IOW into base_2. > Txn 2 commits. > Txn 1 commits after txn 2 but its results would be invisible. > Txn 2 deletes rows committed by txn 1 that according to standard ACID > semantics it could have never observed and affected; this sequence of events > is only possible under read-uncommitted isolation level (so, 2 deletes rows > written by 1 before 1 commits them). > This is if we look at IOW as transactional delete+insert. Otherwise we are > just saying IOW performs "semi"-transactional delete. > If 1 ran an update on rows instead of an insert, and 2 still ran an > IOW/delete, row lock conflict (or equivalent) should cause one of them to > fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19337) Partition whitelist regex doesn't work (and never did)
[ https://issues.apache.org/jira/browse/HIVE-19337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459917#comment-16459917 ] Alexander Kolbasov commented on HIVE-19337: --- Attaching the same patch - for some reason it failed to apply the first time. > Partition whitelist regex doesn't work (and never did) > -- > > Key: HIVE-19337 > URL: https://issues.apache.org/jira/browse/HIVE-19337 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.3 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-19337.01.branch-2.patch, > HIVE-19337.02.branch-2.patch > > > {{ObjectStore.setConf()}} has the following code: > {code:java} > String partitionValidationRegex = > > hiveConf.get(HiveConf.ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN.name()); > {code} > Note that it uses name() method which returns enum name > (METASTORE_PARTITION_NAME_WHITELIST_PATTERN) rather then .varname > As a result the regex will always be null. > The code was introduced as part of > HIVE-7223 Support generic PartitionSpecs in Metastore partition-functions > So looks like this was broken since the original code drop. This is fixed in > Hive3 - probably when [~alangates] reworked access to configuration > (HIVE-17733) so it isn't a bug in Hive-3. > [~stakiar_impala_496e] FYI. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19337) Partition whitelist regex doesn't work (and never did)
[ https://issues.apache.org/jira/browse/HIVE-19337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-19337: -- Attachment: HIVE-19337.02.branch-2.patch > Partition whitelist regex doesn't work (and never did) > -- > > Key: HIVE-19337 > URL: https://issues.apache.org/jira/browse/HIVE-19337 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.3 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-19337.01.branch-2.patch, > HIVE-19337.02.branch-2.patch > > > {{ObjectStore.setConf()}} has the following code: > {code:java} > String partitionValidationRegex = > > hiveConf.get(HiveConf.ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN.name()); > {code} > Note that it uses name() method which returns enum name > (METASTORE_PARTITION_NAME_WHITELIST_PATTERN) rather then .varname > As a result the regex will always be null. > The code was introduced as part of > HIVE-7223 Support generic PartitionSpecs in Metastore partition-functions > So looks like this was broken since the original code drop. This is fixed in > Hive3 - probably when [~alangates] reworked access to configuration > (HIVE-17733) so it isn't a bug in Hive-3. > [~stakiar_impala_496e] FYI. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
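The `.name()`-vs-`.varname` confusion in the bug above is easy to reproduce outside Hive: `Enum.name()` is inherited from `java.lang.Enum` and returns the constant's own identifier, not the configuration key the enum wraps, so a lookup keyed on it always misses. A minimal standalone illustration (the property string below follows the Hive naming pattern but is illustrative, not necessarily the exact varname):

```java
// Why HiveConf.ConfVars.X.name() is the wrong key for a config lookup.
public class ConfVarSketch {
    public enum ConfVars {
        METASTORE_PARTITION_NAME_WHITELIST_PATTERN("hive.metastore.partition.name.whitelist.pattern");

        public final String varname;  // the real configuration key
        ConfVars(String varname) { this.varname = varname; }
    }

    public static void main(String[] args) {
        ConfVars v = ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN;
        // the buggy key: java.lang.Enum's name() gives the constant identifier
        System.out.println(v.name());   // METASTORE_PARTITION_NAME_WHITELIST_PATTERN
        // the intended key: the wrapped property string
        System.out.println(v.varname);  // hive.metastore.partition.name.whitelist.pattern
    }
}
```

Since no property is ever stored under the enum-constant string, `conf.get(v.name())` returns null, which is exactly why the whitelist regex was silently never applied.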
[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data
[ https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18570: -- Attachment: HIVE-18570.02-branch-3.patch > ACID IOW implemented using base may delete too much data > > > Key: HIVE-18570 > URL: https://issues.apache.org/jira/browse/HIVE-18570 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-18570.01-branch-3.patch, HIVE-18570.01.patch, > HIVE-18570.02-branch-3.patch > > > Suppose we have a table with delta_0 insert data. > Txn 1 starts an insert into delta_1. > Txn 2 starts an IOW into base_2. > Txn 2 commits. > Txn 1 commits after txn 2 but its results would be invisible. > Txn 2 deletes rows committed by txn 1 that according to standard ACID > semantics it could have never observed and affected; this sequence of events > is only possible under read-uncommitted isolation level (so, 2 deletes rows > written by 1 before 1 commits them). > This is if we look at IOW as transactional delete+insert. Otherwise we are > just saying IOW performs "semi"-transactional delete. > If 1 ran an update on rows instead of an insert, and 2 still ran an > IOW/delete, row lock conflict (or equivalent) should cause one of them to > fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19317) Handle schema evolution from int like types to decimal
[ https://issues.apache.org/jira/browse/HIVE-19317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-19317: --- Attachment: HIVE-19317.4.patch > Handle schema evolution from int like types to decimal > -- > > Key: HIVE-19317 > URL: https://issues.apache.org/jira/browse/HIVE-19317 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19317.1.patch, HIVE-19317.2.patch, > HIVE-19317.3.patch, HIVE-19317.4.patch > > > If int like type is changed to decimal on parquet data, select results in > errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19120) catalog not properly set for some tables in SQL upgrade scripts
[ https://issues.apache.org/jira/browse/HIVE-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459905#comment-16459905 ] Hive QA commented on HIVE-19120: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10605/dev-support/hive-personality.sh | | git revision | master / 41de953 | | Default Java | 1.8.0_111 | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10605/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > catalog not properly set for some tables in SQL upgrade scripts > --- > > Key: HIVE-19120 > URL: https://issues.apache.org/jira/browse/HIVE-19120 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19120.patch > > > A catalog column is added to the PARTITION_EVENTS and NOTIFICATION_LOG but > the upgrade scripts do not include an UPDATE statement to set this to the > default value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19376) Statistics: switch to 10bit HLL by default for Hive
[ https://issues.apache.org/jira/browse/HIVE-19376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V reassigned HIVE-19376: -- Assignee: Gopal V > Statistics: switch to 10bit HLL by default for Hive > --- > > Key: HIVE-19376 > URL: https://issues.apache.org/jira/browse/HIVE-19376 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Gopal V >Assignee: Gopal V >Priority: Major > Attachments: HIVE-19376.1.patch > > > This reduces the memory usage for the metastore cache and the size of > bit-vectors in the DB by 16x. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19376) Statistics: switch to 10bit HLL by default for Hive
[ https://issues.apache.org/jira/browse/HIVE-19376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-19376: --- Status: Patch Available (was: Open) > Statistics: switch to 10bit HLL by default for Hive > --- > > Key: HIVE-19376 > URL: https://issues.apache.org/jira/browse/HIVE-19376 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Gopal V >Priority: Major > Attachments: HIVE-19376.1.patch > > > This reduces the memory usage for the metastore cache and the size of > bit-vectors in the DB by 16x. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19376) Statistics: switch to 10bit HLL by default for Hive
[ https://issues.apache.org/jira/browse/HIVE-19376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-19376: --- Attachment: HIVE-19376.1.patch > Statistics: switch to 10bit HLL by default for Hive > --- > > Key: HIVE-19376 > URL: https://issues.apache.org/jira/browse/HIVE-19376 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Gopal V >Priority: Major > Attachments: HIVE-19376.1.patch > > > This reduces the memory usage for the metastore cache and the size of > bit-vectors in the DB by 16x. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
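The "16x" figure in the description follows from how HyperLogLog sizes its sketch: an HLL with precision p keeps 2^p registers, so the serialized bit-vector shrinks geometrically as p drops. Assuming the previous default was 14 bits (consistent with the stated reduction), the arithmetic is:

```java
// Back-of-envelope check of the 16x bit-vector reduction from 14-bit to
// 10-bit HLL: register count (and thus serialized size) scales as 2^p.
public class HllSizeSketch {
    public static int shrinkFactor(int oldBits, int newBits) {
        return (1 << oldBits) / (1 << newBits);
    }

    public static void main(String[] args) {
        System.out.println(1 << 14);               // 16384 registers before
        System.out.println(1 << 10);               // 1024 registers after
        System.out.println(shrinkFactor(14, 10));  // 16
    }
}
```

The trade-off is accuracy: HLL's relative NDV error grows as roughly 1.04/sqrt(2^p), so fewer register bits means coarser distinct-count estimates in exchange for the smaller metastore cache footprint.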
[jira] [Commented] (HIVE-19231) Beeline generates garbled output when using UnsupportedTerminal
[ https://issues.apache.org/jira/browse/HIVE-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459891#comment-16459891 ] Aihua Xu commented on HIVE-19231: - Sounds good. +1. > Beeline generates garbled output when using UnsupportedTerminal > --- > > Key: HIVE-19231 > URL: https://issues.apache.org/jira/browse/HIVE-19231 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 2.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Major > Attachments: HIVE-19231.patch > > > We had a customer that was using some sort of front end that would invoke > beeline commands with some query files on a node that that remote to the HS2 > node. > So beeline runs locally on this edge but connects to a remote HS2. Since the > fix made in HIVE-14342, the beeline started producing garbled line in the > output. Something like > {code:java} > ^Mnull ^Mnull^Mnull > ^Mnull00- All Occupations > 135185230 42270 > 11- Management occupations 6152650 100310{code} > > I havent been able to reproduce the issue locally as I do not have their > system, but with some additional instrumentation I have been able to get some > info regarding the beeline process. > Essentially, such invocation causes beeline process to run with > {{-Djline.terminal=jline.UnsupportedTerminal}} all the time and thus causes > the issue. They can run the same beeline command directly in the shell on the > same host and it does not cause this issue. > PID S TTY TIME COMMAND > 44107 S S ? 00:00:00 bash beeline -u ... > PID S TTY TIME COMMAND > 48453 S+ S pts/4 00:00:00 bash beeline -u ... > Somehow that process wasnt attached to any local terminals. So the check made > for /dev/stdin wouldnt work. > > Instead an additional check to check the TTY session of the process before > using the UnsupportedTerminal (which really should only be used for > backgrounded beeline sessions) seems to resolve the issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19231) Beeline generates garbled output when using UnsupportedTerminal
[ https://issues.apache.org/jira/browse/HIVE-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459875#comment-16459875 ] Naveen Gangam commented on HIVE-19231: -- [~aihuaxu] Running a beeline command using a cron job seems to be resulting in similar output that we were seeing that caused this issue. {code:java} PID S TTY TIME COMMAND 22756 ? S ? 00:00:00 bash /usr/lib/hive/bin/beeline -u jdbc:hive2://localhost:1 -n hive -p hive -e select * from sample_08 where salary > 5 Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0 Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0 scan complete in 2ms Connecting to jdbc:hive2://localhost:1 Connected to: Apache Hive (version 1.1.0-cdh5.14.3-SNAPSHOT) Driver: Hive JDBC (version 1.1.0-cdh5.14.3-SNAPSHOT) Transaction isolation: TRANSACTION_REPEATABLE_READ INFO : Compiling command(queryId=hive_20180501100505_a212817e-e909-4e95-a449-581229c9dbc0): select * from sample_08 where salary > 5 INFO : Semantic Analysis Completed INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:sample_08.code, type:string, comment:null), FieldSchema(name:sample_08.description, type:string, comment:null), FieldSchema(name:samp le_08.total_emp, type:int, comment:null), FieldSchema(name:sample_08.salary, type:int, comment:null)], properties:null) INFO : Completed compiling command(queryId=hive_20180501100505_a212817e-e909-4e95-a449-581229c9dbc0); Time taken: 0.123 seconds INFO : Executing command(queryId=hive_20180501100505_a212817e-e909-4e95-a449-581229c9dbc0): select * from sample_08 where salary > 5 INFO : Query ID = hive_20180501100505_a212817e-e909-4e95-a449-581229c9dbc0 INFO : Total jobs = 1 INFO : Launching Job 1 out of 1 INFO : Starting task [Stage-1:MAPRED] in serial mode 
INFO : Number of reduce tasks is set to 0 since there's no reduce operator INFO : number of splits:1 INFO : Submitting tokens for job: job_1525165852892_0012 INFO : The url to track the job: http://:8088/proxy/application_1525165852892_0012/ INFO : Starting Job = job_1525165852892_0012, Tracking URL = http://:8088/proxy/application_1525165852892_0012/ INFO : Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1525165852892_0012 INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0 INFO : 2018-05-01 10:05:10,966 Stage-1 map = 0%, reduce = 0% INFO : 2018-05-01 10:05:18,294 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.89 sec INFO : MapReduce Total cumulative CPU time: 3 seconds 890 msec INFO : Ended Job = job_1525165852892_0012 INFO : MapReduce Jobs Launched: INFO : Stage-Stage-1: Map: 1 Cumulative CPU: 3.89 sec HDFS Read: 50608 HDFS Write: 16567 SUCCESS INFO : Total MapReduce CPU Time Spent: 3 seconds 890 msec INFO : Completed executing command(queryId=hive_20180501100505_a212817e-e909-4e95-a449-581229c9dbc0); Time taken: 13.981 seconds INFO : OK {code} However, running as a cron job with or without the fix does not seem to reproduce the issue whether running the script runs beeline in background or foreground (I suspect the way cron is run, it can never be run as a foreground process). So it appears we are good for this usecase as well. Do you have additional concerns? Thanks > Beeline generates garbled output when using UnsupportedTerminal > --- > > Key: HIVE-19231 > URL: https://issues.apache.org/jira/browse/HIVE-19231 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 2.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Major > Attachments: HIVE-19231.patch > > > We had a customer that was using some sort of front end that would invoke > beeline commands with some query files on a node that that remote to the HS2 > node. 
> So beeline runs locally on this edge but connects to a remote HS2. Since the > fix made in HIVE-14342, the beeline started producing garbled line in the > output. Something like > {code:java} > ^Mnull ^Mnull^Mnull > ^Mnull00- All Occupations > 135185230 42270 > 11- Management occupations 6152650 100310{code} > > I havent been able to reproduce the issue locally as I do not have their > system, but with some additional instrumentation I have been able to get some > info regarding the beeline process. > Essentially, such invocation causes beeline process to run with >
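The mitigation discussed above can be sketched as a tiny helper: when a CLI runs without a controlling terminal (cron, background job, piped output), `System.console()` returns null and cursor-control escape sequences would show up as garbage like `^M` in the captured output, so the client should fall back to jline's dumb terminal (the `jline.terminal=jline.UnsupportedTerminal` system property). The helper and class below are illustrative, not Beeline's actual code.

```java
// Hypothetical sketch of the "no TTY => dumb terminal" decision. The
// jline.UnixTerminal / jline.UnsupportedTerminal class names are jline2's;
// pickTerminal() itself is an invented helper for illustration only.
public class TerminalMode {
    /** Pick a jline terminal class name based on whether a real console is attached. */
    static String pickTerminal(boolean hasConsole) {
        return hasConsole ? "jline.UnixTerminal" : "jline.UnsupportedTerminal";
    }

    public static void main(String[] args) {
        // Under cron there is no controlling TTY, so System.console() is null.
        boolean interactive = System.console() != null;
        System.out.println("terminal=" + pickTerminal(interactive));
    }
}
```

Running the same check at startup (or passing `-Djline.terminal=jline.UnsupportedTerminal` explicitly in the cron script) avoids emitting terminal control codes into redirected output.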
[jira] [Assigned] (HIVE-19375) "'transactional'='false' is no longer a valid property and will be ignored:
[ https://issues.apache.org/jira/browse/HIVE-19375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-19375: - > "'transactional'='false' is no longer a valid property and will be ignored: > > > Key: HIVE-19375 > URL: https://issues.apache.org/jira/browse/HIVE-19375 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > > from {{TransactionalValidationListener.handleCreateTableTransactionalProp()}} > {noformat} > if ("false".equalsIgnoreCase(transactional)) { > // just drop transactional=false. For backward compatibility in case > someone has scripts > // with transactional=false > LOG.info("'transactional'='false' is no longer a valid property and > will be ignored: " + > Warehouse.getQualifiedName(newTable)); > return; > } > {noformat} > this msg is misleading since with metastore.create.as.acid=true, setting > transactional=false is valid to make a flat table -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19304) Update templates.py based on config changes in YARN-7142 and YARN-8122
[ https://issues.apache.org/jira/browse/HIVE-19304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459871#comment-16459871 ] Hive QA commented on HIVE-19304: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921329/HIVE-19304.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 34 failed/errored test(s), 14270 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.metastore.TestStats.partitionedTableInHiveCatalog (batchId=211) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) 
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=242) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10604/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10604/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10604/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing 
org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 34 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12921329 - PreCommit-HIVE-Build > Update templates.py based on config changes in YARN-7142 and YARN-8122 > -- > > Key: HIVE-19304 > URL: https://issues.apache.org/jira/browse/HIVE-19304 > Project: Hive > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha >Priority: Major > Attachments: HIVE-19304.001.patch, HIVE-19304.01.patch > > > Now that YARN-7142 is committed and YARN-8122 will be committed soon, we need > to update templates.py based on
[jira] [Commented] (HIVE-18570) ACID IOW implemented using base may delete too much data
[ https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459868#comment-16459868 ] Eugene Koifman commented on HIVE-18570: --- [~gopalv] longer term we should support snapshot isolation with optimistic concurrency (as much as possible) and read committed with pessimistic lock-based concurrency. Optimistic conflict detection currently only tracks update/delete conflicts, and making it track inserts as well is a bigger change than I want to do for 3.0. > ACID IOW implemented using base may delete too much data > > > Key: HIVE-18570 > URL: https://issues.apache.org/jira/browse/HIVE-18570 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-18570.01-branch-3.patch, HIVE-18570.01.patch > > > Suppose we have a table with delta_0 insert data. > Txn 1 starts an insert into delta_1. > Txn 2 starts an IOW into base_2. > Txn 2 commits. > Txn 1 commits after txn 2 but its results would be invisible. > Txn 2 deletes rows committed by txn 1 that according to standard ACID > semantics it could have never observed and affected; this sequence of events > is only possible under read-uncommitted isolation level (so, 2 deletes rows > written by 1 before 1 commits them). > This is if we look at IOW as transactional delete+insert. Otherwise we are > just saying IOW performs "semi"-transactional delete. > If 1 ran an update on rows instead of an insert, and 2 still ran an > IOW/delete, row lock conflict (or equivalent) should cause one of them to > fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
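The anomaly described in the issue falls out of how readers choose directories. A minimal model (illustrative, not Hive's actual AcidUtils logic, and using simplified single-id directory names rather than the real delta_min_max form): readers take the highest committed base and then only apply deltas with an id greater than that base, so a committed delta_1 is masked by base_2 even though txn 1 never conflicted with txn 2 under snapshot semantics.

```java
import java.util.*;

// Toy model of ACID directory visibility: why an insert-overwrite written
// as base_2 hides a concurrently committed insert written as delta_1.
public class IowVisibility {
    // Returns the directory names a reader would consume, given all
    // committed directories ("base_N" or "delta_N").
    static List<String> visibleDirs(List<String> committed) {
        int bestBase = -1;
        for (String d : committed)
            if (d.startsWith("base_"))
                bestBase = Math.max(bestBase, Integer.parseInt(d.substring(5)));
        List<String> visible = new ArrayList<>();
        if (bestBase >= 0) visible.add("base_" + bestBase);
        for (String d : committed)
            // Deltas at or below the base id are assumed folded into the base.
            if (d.startsWith("delta_") && Integer.parseInt(d.substring(6)) > bestBase)
                visible.add(d);
        return visible;
    }

    public static void main(String[] args) {
        // delta_0: pre-existing data; txn 2 (IOW) wrote base_2 and committed;
        // txn 1 wrote delta_1 and committed AFTER txn 2.
        List<String> committed = Arrays.asList("delta_0", "base_2", "delta_1");
        System.out.println(visibleDirs(committed)); // delta_1 is gone: [base_2]
    }
}
```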
[jira] [Commented] (HIVE-19369) Locks: Add new lock implementations for always zero-wait readers
[ https://issues.apache.org/jira/browse/HIVE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459865#comment-16459865 ] Gopal V commented on HIVE-19369: Yes, despite the big description above, that new lock is 90% of what this JIRA adds; the rest is just documentation. Here's the current LockType: SHARED_READ(1), SHARED_WRITE(2), EXCLUSIVE(3); We already have 3 of the above asks; we just need to add a new one (the one you mentioned): SHARED_READ SHARED_WRITE EXCLUSIVE | EXCL_DROP + EXCL_WRITE The only other behaviour change when EXCLUSIVE gets more granularity is that EXCL_DROP is no-wait for the readers, while the current EXCLUSIVE waits for the readers as well. > Locks: Add new lock implementations for always zero-wait readers > > > Key: HIVE-19369 > URL: https://issues.apache.org/jira/browse/HIVE-19369 > Project: Hive > Issue Type: Improvement >Reporter: Gopal V >Priority: Major > > Hive Locking with Micro-managed and full-ACID tables needs a better locking > implementation which allows for no-wait readers always. > EXCL_DROP > EXCL_WRITE > SHARED_WRITE > SHARED_READ > Short write-up > EXCL_DROP is a "drop partition" or "drop table" and waits for all others to > exit > EXCL_WRITE excludes all writes and will wait for all existing SHARED_WRITE to > exit. > SHARED_WRITE allows all SHARED_WRITES to go through, but will wait for an > EXCL_WRITE & EXCL_DROP (waiting so that you can do drop + insert in different > threads). > SHARED_READ does not wait for any lock - it fails fast for a pending > EXCL_DROP, because even if there is an EXCL_WRITE or SHARED_WRITE pending, > there's no semantic reason to wait for them to succeed before going ahead > with a SHARED_READ. > a select * => SHARED_READ > an insert into => SHARED_WRITE > an insert overwrite or MERGE => EXCL_WRITE > a drop table => EXCL_DROP > TODO: > The fate of the compactor needs to be added to this before it is a complete > description. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
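The write-up above pins down a compatibility matrix, which can be sketched as an enum (my reading of the proposal, not Hive's shipped implementation). Two twists versus a plain shared/exclusive matrix: SHARED_READ never waits -- it either proceeds or fails fast -- and it only conflicts with EXCL_DROP.

```java
// Sketch of the proposed lock compatibility rules for always zero-wait
// readers. The class and method names are illustrative.
public class AcidLocks {
    enum LockType { SHARED_READ, SHARED_WRITE, EXCL_WRITE, EXCL_DROP }

    /** True if `requested` may be granted while `held` is held. */
    static boolean compatible(LockType requested, LockType held) {
        switch (requested) {
            case SHARED_READ:  return held != LockType.EXCL_DROP;   // reads only blocked by drop
            case SHARED_WRITE: return held == LockType.SHARED_READ
                                   || held == LockType.SHARED_WRITE; // waits for EXCL_WRITE/EXCL_DROP
            case EXCL_WRITE:   return held == LockType.SHARED_READ;  // excludes all other writes
            case EXCL_DROP:    return false;                         // drop waits for everyone
        }
        throw new AssertionError("unreachable");
    }

    /** A reader never queues: on conflict it fails immediately. */
    static boolean readerFailsFast(LockType held) {
        return !compatible(LockType.SHARED_READ, held);
    }

    public static void main(String[] args) {
        for (LockType held : LockType.values())
            System.out.printf("SHARED_READ vs %s -> %s%n", held,
                compatible(LockType.SHARED_READ, held) ? "proceed" : "fail fast");
    }
}
```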
[jira] [Commented] (HIVE-6980) Drop table by using direct sql
[ https://issues.apache.org/jira/browse/HIVE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459857#comment-16459857 ] Alexander Kolbasov commented on HIVE-6980: -- [~pvary] What is the master commit ID that your patch is based on? > Drop table by using direct sql > -- > > Key: HIVE-6980 > URL: https://issues.apache.org/jira/browse/HIVE-6980 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 0.12.0 >Reporter: Selina Zhang >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-6980.2.patch, HIVE-6980.patch > > > Dropping table which has lots of partitions is slow. Even after applying the > patch of HIVE-6265, the drop table still takes hours (100K+ partitions). > The fixes come with two parts: > 1. use directSQL to query the partitions protect mode; > the current implementation needs to transfer the Partition object to client > and check the protect mode for each partition. I'd like to move this part of > logic to metastore. The check will be done by direct sql (if direct sql is > disabled, execute the same logic in the ObjectStore); > 2. use directSQL to drop partitions for table; > there maybe two solutions here: > 1. add "DELETE CASCADE" in the schema. In this way we only need to delete > entries from partitions table use direct sql. May need to change > datanucleus.deletionPolicy = DataNucleus. > 2. clean up the dependent tables by issue DELETE statement. This also needs > to turn on datanucleus.query.sql.allowAll > Both of above solutions should be able to fix the problem. The DELETE CASCADE > has to change schemas and prepare upgrade scripts. The second solutions added > maintenance cost if new tables added in the future releases. > Please advice. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
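Solution 2 above (explicit DELETEs instead of DELETE CASCADE) amounts to issuing child-table deletes before the parent delete, batched by partition id. The sketch below generates that statement sequence; the PARTITION_PARAMS / PARTITION_KEY_VALS / PARTITIONS names follow the stock HMS schema keyed by PART_ID, but the list is deliberately incomplete (the real ObjectStore direct-SQL path also has to clean up SDS/SERDES and related rows, and to batch the IN lists).

```java
import java.util.*;
import java.util.stream.Collectors;

// Illustrative generator for the explicit cascading-delete statements;
// not Hive's actual MetaStoreDirectSql code.
public class DirectSqlDrop {
    static List<String> dropPartitionSql(List<Long> partIds) {
        String ids = partIds.stream().map(String::valueOf)
                            .collect(Collectors.joining(","));
        // Children first, parent last, all in one metastore transaction.
        return Arrays.asList(
            "DELETE FROM PARTITION_PARAMS WHERE PART_ID IN (" + ids + ")",
            "DELETE FROM PARTITION_KEY_VALS WHERE PART_ID IN (" + ids + ")",
            "DELETE FROM PARTITIONS WHERE PART_ID IN (" + ids + ")");
    }

    public static void main(String[] args) {
        dropPartitionSql(Arrays.asList(101L, 102L)).forEach(System.out::println);
    }
}
```

This is also where the maintenance cost mentioned in the description shows up: each new dependent table added in a future release means another DELETE in this list, whereas DELETE CASCADE pushes that burden into the schema and its upgrade scripts.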
[jira] [Commented] (HIVE-6980) Drop table by using direct sql
[ https://issues.apache.org/jira/browse/HIVE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459855#comment-16459855 ] Alexander Kolbasov commented on HIVE-6980: -- [~pvary] Will review this week. > Drop table by using direct sql > -- > > Key: HIVE-6980 > URL: https://issues.apache.org/jira/browse/HIVE-6980 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 0.12.0 >Reporter: Selina Zhang >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-6980.2.patch, HIVE-6980.patch > > > Dropping table which has lots of partitions is slow. Even after applying the > patch of HIVE-6265, the drop table still takes hours (100K+ partitions). > The fixes come with two parts: > 1. use directSQL to query the partitions protect mode; > the current implementation needs to transfer the Partition object to client > and check the protect mode for each partition. I'd like to move this part of > logic to metastore. The check will be done by direct sql (if direct sql is > disabled, execute the same logic in the ObjectStore); > 2. use directSQL to drop partitions for table; > there maybe two solutions here: > 1. add "DELETE CASCADE" in the schema. In this way we only need to delete > entries from partitions table use direct sql. May need to change > datanucleus.deletionPolicy = DataNucleus. > 2. clean up the dependent tables by issue DELETE statement. This also needs > to turn on datanucleus.query.sql.allowAll > Both of above solutions should be able to fix the problem. The DELETE CASCADE > has to change schemas and prepare upgrade scripts. The second solutions added > maintenance cost if new tables added in the future releases. > Please advice. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19371) Add table ownerType to HMS thrift API
[ https://issues.apache.org/jira/browse/HIVE-19371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña reassigned HIVE-19371: -- Assignee: Sergio Peña > Add table ownerType to HMS thrift API > - > > Key: HIVE-19371 > URL: https://issues.apache.org/jira/browse/HIVE-19371 > Project: Hive > Issue Type: Sub-task > Components: Hive, Metastore >Affects Versions: 3.0.0 >Reporter: Sergio Peña >Assignee: Sergio Peña >Priority: Major > > Subtasks that adds the ownerType field to the Table object of the HMS Thrift > API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-19371) Add table ownerType to HMS thrift API
[ https://issues.apache.org/jira/browse/HIVE-19371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-19371 started by Sergio Peña. -- > Add table ownerType to HMS thrift API > - > > Key: HIVE-19371 > URL: https://issues.apache.org/jira/browse/HIVE-19371 > Project: Hive > Issue Type: Sub-task > Components: Hive, Metastore >Affects Versions: 3.0.0 >Reporter: Sergio Peña >Assignee: Sergio Peña >Priority: Major > > Subtasks that adds the ownerType field to the Table object of the HMS Thrift > API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18762) Support ALTER TABLE SET OWNER command
[ https://issues.apache.org/jira/browse/HIVE-18762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-18762: --- Description: Currently only a user can be a owner of hive table. It should be extended so that either user/role can be set a owner of a table. With this support ownership of a table can be transferred to either user or role. Should be able to run below commands and change the ownership {noformat} alter table tb1 set owner user user1; alter table tb1 set owner role role1;{noformat} was: Currently only a user can be a owner of hive table. It should be extended so that either user/role can be set a owner of a table. With this support ownership of a table can be transferred to either user or role I think, this is already available for hive databases. > Support ALTER TABLE SET OWNER command > - > > Key: HIVE-18762 > URL: https://issues.apache.org/jira/browse/HIVE-18762 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: kalyan kumar kalvagadda >Assignee: Sergio Peña >Priority: Major > > Currently only a user can be a owner of hive table. It should be extended so > that either user/role can be set a owner of a table. > With this support ownership of a table can be transferred to either user or > role. > Should be able to run below commands and change the ownership > {noformat} > alter table tb1 set owner user user1; > alter table tb1 set owner role role1;{noformat} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18762) Support ALTER TABLE SET OWNER command
[ https://issues.apache.org/jira/browse/HIVE-18762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-18762: --- Summary: Support ALTER TABLE SET OWNER command (was: Extend the current ownership support for tables) > Support ALTER TABLE SET OWNER command > - > > Key: HIVE-18762 > URL: https://issues.apache.org/jira/browse/HIVE-18762 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: kalyan kumar kalvagadda >Assignee: Sergio Peña >Priority: Major > > Currently only a user can be a owner of hive table. It should be extended so > that either user/role can be set a owner of a table. > With this support ownership of a table can be transferred to either user or > role > I think, this is already available for hive databases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18762) Extend the current ownership support for tables
[ https://issues.apache.org/jira/browse/HIVE-18762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña reassigned HIVE-18762: -- Assignee: Sergio Peña > Extend the current ownership support for tables > --- > > Key: HIVE-18762 > URL: https://issues.apache.org/jira/browse/HIVE-18762 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: kalyan kumar kalvagadda >Assignee: Sergio Peña >Priority: Major > > Currently only a user can be a owner of hive table. It should be extended so > that either user/role can be set a owner of a table. > With this support ownership of a table can be transferred to either user or > role > I think, this is already available for hive databases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19304) Update templates.py based on config changes in YARN-7142 and YARN-8122
[ https://issues.apache.org/jira/browse/HIVE-19304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459798#comment-16459798 ] Hive QA commented on HIVE-19304: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 10m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10604/dev-support/hive-personality.sh | | git revision | master / 41de953 | | Default Java | 1.8.0_111 | | modules | C: llap-server U: llap-server | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10604/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Update templates.py based on config changes in YARN-7142 and YARN-8122 > -- > > Key: HIVE-19304 > URL: https://issues.apache.org/jira/browse/HIVE-19304 > Project: Hive > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha >Priority: Major > Attachments: HIVE-19304.001.patch, HIVE-19304.01.patch > > > Now that YARN-7142 is committed and YARN-8122 will be committed soon, we need > to update templates.py based on config changes for placement policy and > health threshold monitor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18958) Fix Spark config warnings
[ https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459775#comment-16459775 ] Bharathkrishna Guruvayoor Murali commented on HIVE-18958: - Thanks [~stakiar] for the review. > Fix Spark config warnings > - > > Key: HIVE-18958 > URL: https://issues.apache.org/jira/browse/HIVE-18958 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, > HIVE-18958.03.patch, HIVE-18958.testDiff.patch > > > Getting a few configuration warnings in the logs that we should fix: > {code} > 2018-03-14T10:06:19,164 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has > been deprecated as of Spark 2.3 and may be removed in the future. Please use > the new key 'spark.driver.memoryOverhead' instead. > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not > supported any more because Spark doesn't use Akka since 2.0 > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' > has been deprecated as of Spark 2.3 and may be removed in the future. Please > use the new key 'spark.executor.memoryOverhead' instead. 
> 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.server.connect.timeout=9 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.threads=8 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.connect.timeout=3 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.secret.bits=256 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.max.size=52428800 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
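The first group of warnings asks for a straightforward key remapping before the SparkConf is built. The old-to-new key pairs below come straight from the log output above; the helper itself is a hypothetical sketch, not Hive-on-Spark code.

```java
import java.util.*;

// Translate Spark keys deprecated in 2.3 to their new names, and drop keys
// Spark no longer understands at all.
public class SparkConfFixup {
    static final Map<String, String> RENAMED = Map.of(
        "spark.yarn.driver.memoryOverhead",   "spark.driver.memoryOverhead",
        "spark.yarn.executor.memoryOverhead", "spark.executor.memoryOverhead");
    static final Set<String> REMOVED = Set.of("spark.akka.logLifecycleEvents");

    static Map<String, String> normalize(Map<String, String> conf) {
        Map<String, String> out = new LinkedHashMap<>();
        conf.forEach((k, v) -> {
            if (REMOVED.contains(k)) return;        // Akka gone since Spark 2.0
            out.put(RENAMED.getOrDefault(k, k), v); // rename deprecated keys
        });
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("spark.yarn.driver.memoryOverhead", "512");
        conf.put("spark.akka.logLifecycleEvents", "true");
        System.out.println(normalize(conf)); // {spark.driver.memoryOverhead=512}
    }
}
```

The second group ("Ignoring non-spark config property: hive.spark.client.*") is a different case: those keys are intentionally passed through to the remote driver, so the fix there is about when they are handed to spark-submit, not about renaming them.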
[jira] [Commented] (HIVE-19211) New streaming ingest API and support for dynamic partitioning
[ https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459768#comment-16459768 ] Hive QA commented on HIVE-19211: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921330/HIVE-19211.10.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 50 failed/errored test(s), 14300 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_13] (batchId=253) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_1] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_2] (batchId=59) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin_negative3] (batchId=29) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_1_23] (batchId=81) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_skew_1_23] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_13] (batchId=32) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sort_merge_join_desc_7] (batchId=27) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=183) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketmapjoin6] (batchId=183) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_1] (batchId=137) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_2] (batchId=133) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketmapjoin_negative3] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby_sort_1_23] (batchId=143) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby_sort_skew_1_23] (batchId=111) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_13] (batchId=122) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) 
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242)
[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data
[ https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18570: -- Attachment: HIVE-18570.01-branch-3.patch > ACID IOW implemented using base may delete too much data > > > Key: HIVE-18570 > URL: https://issues.apache.org/jira/browse/HIVE-18570 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-18570.01-branch-3.patch, HIVE-18570.01.patch > > > Suppose we have a table with delta_0 insert data. > Txn 1 starts an insert into delta_1. > Txn 2 starts an IOW into base_2. > Txn 2 commits. > Txn 1 commits after txn 2 but its results would be invisible. > Txn 2 deletes rows committed by txn 1 that according to standard ACID > semantics it could have never observed and affected; this sequence of events > is only possible under read-uncommitted isolation level (so, 2 deletes rows > written by 1 before 1 commits them). > This is if we look at IOW as transactional delete+insert. Otherwise we are > just saying IOW performs "semi"-transactional delete. > If 1 ran an update on rows instead of an insert, and 2 still ran an > IOW/delete, row lock conflict (or equivalent) should cause one of them to > fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459735#comment-16459735 ] Alan Gates commented on HIVE-19135: --- {quote}However, I thought that connection/statement close would automatically rollback the txn. {quote} I thought so too, but at least in Derby my tests showed otherwise. I got deadlocks when I didn't explicitly rollback. > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, > HIVE-19135.4.patch, HIVE19135.patch > > > As part of upgrading to Hive 3 admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19211) New streaming ingest API and support for dynamic partitioning
[ https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459730#comment-16459730 ] Hive QA commented on HIVE-19211: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 43s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 36s{color} | {color:red} hive-unit in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s{color} | {color:red} serde: The patch generated 391 new + 0 unchanged - 0 fixed = 391 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} hcatalog/streaming: The patch generated 24 new + 195 unchanged - 29 fixed = 219 total (was 224) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s{color} | {color:red} itests/hive-unit: The patch generated 109 new + 55 unchanged - 27 fixed = 164 total (was 82) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s{color} | {color:red} metastore: The patch generated 37 new + 8 unchanged - 0 fixed = 45 total (was 8) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 47s{color} | {color:red} ql: The patch generated 1 new + 402 unchanged - 0 fixed = 403 total (was 402) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s{color} | {color:red} standalone-metastore: The patch generated 3 new + 41 unchanged - 0 fixed = 44 total (was 41) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} streaming: The patch generated 614 new + 65 unchanged - 349 fixed = 679 total (was 414) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} serde in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} streaming in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} hive-unit in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} metastore in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} ql in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} standalone-metastore in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} streaming generated 0 new + 0 unchanged - 5 fixed = 0 total (was 5) {color} | || || || ||
[jira] [Updated] (HIVE-18958) Fix Spark config warnings
[ https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18958: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Pushed to master, thanks [~bharos92] for the contribution! > Fix Spark config warnings > - > > Key: HIVE-18958 > URL: https://issues.apache.org/jira/browse/HIVE-18958 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, > HIVE-18958.03.patch, HIVE-18958.testDiff.patch > > > Getting a few configuration warnings in the logs that we should fix: > {code} > 2018-03-14T10:06:19,164 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has > been deprecated as of Spark 2.3 and may be removed in the future. Please use > the new key 'spark.driver.memoryOverhead' instead. > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not > supported any more because Spark doesn't use Akka since 2.0 > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' > has been deprecated as of Spark 2.3 and may be removed in the future. Please > use the new key 'spark.executor.memoryOverhead' instead. 
> 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.server.connect.timeout=9 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.threads=8 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.connect.timeout=3 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.secret.bits=256 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.max.size=52428800 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
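[Editor's note: the warnings above pair each deprecated Spark key with its replacement. A spark-defaults-style fragment using the new names might look as follows; the values are placeholders, not recommendations from the patch.]

```properties
# Deprecated as of Spark 2.3           ->  replacement
# spark.yarn.driver.memoryOverhead         spark.driver.memoryOverhead
# spark.yarn.executor.memoryOverhead       spark.executor.memoryOverhead
spark.driver.memoryOverhead    512m
spark.executor.memoryOverhead  1024m
# spark.akka.* keys should simply be dropped: Spark stopped using Akka in 2.0.
```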
[jira] [Commented] (HIVE-19317) Handle schema evolution from int like types to decimal
[ https://issues.apache.org/jira/browse/HIVE-19317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459704#comment-16459704 ] Hive QA commented on HIVE-19317: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921326/HIVE-19317.3.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 39 failed/errored test(s), 14270 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] (batchId=95) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.hcatalog.pig.TestParquetHCatStorer.testDateCharTypes (batchId=196) org.apache.hive.hcatalog.pig.TestParquetHCatStorer.testWriteDecimal (batchId=196) org.apache.hive.hcatalog.pig.TestParquetHCatStorer.testWriteDecimalX (batchId=196) org.apache.hive.hcatalog.pig.TestParquetHCatStorer.testWriteDecimalXY (batchId=196) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) 
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10602/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10602/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10602/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 39 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12921326 - PreCommit-HIVE-Build > Handle schema evolution from int like types to decimal > -- > > Key:
[jira] [Commented] (HIVE-19317) Handle schema evolution from int like types to decimal
[ https://issues.apache.org/jira/browse/HIVE-19317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459669#comment-16459669 ] Hive QA commented on HIVE-19317: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} ql: The patch generated 0 new + 17 unchanged - 2 fixed = 17 total (was 19) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10602/dev-support/hive-personality.sh | | git revision | master / 758b913 | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10602/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Handle schema evolution from int like types to decimal > -- > > Key: HIVE-19317 > URL: https://issues.apache.org/jira/browse/HIVE-19317 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19317.1.patch, HIVE-19317.2.patch, > HIVE-19317.3.patch > > > If int like type is changed to decimal on parquet data, select results in > errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
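[Editor's note: the failure mode in HIVE-19317 can be reproduced along these lines. This is a hedged sketch; the table and column names are invented for illustration.]

```sql
-- Parquet table initially declared with an int-like column:
CREATE TABLE evo (id INT, amount BIGINT) STORED AS PARQUET;
INSERT INTO evo VALUES (1, 100);

-- Schema evolution: widen the int-like column to decimal. Only the
-- metastore schema changes; the existing Parquet files still store
-- 'amount' as a 64-bit integer.
ALTER TABLE evo CHANGE COLUMN amount amount DECIMAL(10,2);

-- Before the fix, reading old files through the new schema errors out;
-- with the patch the stored integers are converted to DECIMAL on read.
SELECT amount FROM evo;
```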
[jira] [Commented] (HIVE-19365) Index on COMPLETED_TXN_COMPONENTS in Metastore RDBMS has different names in different scripts
[ https://issues.apache.org/jira/browse/HIVE-19365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459659#comment-16459659 ] Hive QA commented on HIVE-19365: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921328/HIVE-19365.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 14270 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_subq_exists] (batchId=80) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMmConversionLocks (batchId=300) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10601/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10601/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10601/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 35 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12921328 - PreCommit-HIVE-Build > Index on COMPLETED_TXN_COMPONENTS in Metastore RDBMS has different names in > different scripts > - > > Key: HIVE-19365 > URL: https://issues.apache.org/jira/browse/HIVE-19365 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority:
[jira] [Commented] (HIVE-19365) Index on COMPLETED_TXN_COMPONENTS in Metastore RDBMS has different names in different scripts
[ https://issues.apache.org/jira/browse/HIVE-19365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459626#comment-16459626 ] Hive QA commented on HIVE-19365: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10601/dev-support/hive-personality.sh | | git revision | master / 758b913 | | Default Java | 1.8.0_111 | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10601/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Index on COMPLETED_TXN_COMPONENTS in Metastore RDBMS has different names in > different scripts > - > > Key: HIVE-19365 > URL: https://issues.apache.org/jira/browse/HIVE-19365 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19365.patch > > > In mysql and mssql install scripts the index is called > COMPLETED_TXN_COMPONENTS_IDX2. Everywhere else it is called > COMPLETED_TXN_COMPONENTS_IDX, which is breaking upgrade scripts for 3.0 to > 3.1 since they don't know which index to update. One name should be chosen > and used everywhere. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
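[Editor's note: the naming discrepancy in HIVE-19365 can be sketched as follows. The column list is taken from the metastore's COMPLETED_TXN_COMPONENTS layout but should be treated as illustrative, not as the exact DDL in the install scripts.]

```sql
-- mysql and mssql install scripts create:
CREATE INDEX COMPLETED_TXN_COMPONENTS_IDX2
    ON COMPLETED_TXN_COMPONENTS (CTC_DATABASE, CTC_TABLE, CTC_PARTITION);

-- every other database's install script creates:
CREATE INDEX COMPLETED_TXN_COMPONENTS_IDX
    ON COMPLETED_TXN_COMPONENTS (CTC_DATABASE, CTC_TABLE, CTC_PARTITION);

-- A 3.0-to-3.1 upgrade script that issues
--   DROP INDEX COMPLETED_TXN_COMPONENTS_IDX ...
-- fails on MySQL/MSSQL, so a single name has to be standardized first.
```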
[jira] [Commented] (HIVE-19360) CBO: Add an "optimizedSQL" to QueryPlan object
[ https://issues.apache.org/jira/browse/HIVE-19360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459614#comment-16459614 ] Hive QA commented on HIVE-19360: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921325/HIVE-19360.3.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 219 failed/errored test(s), 14270 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_directory] (batchId=256) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[write_final_output_blobstore] (batchId=256) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_coltype] (batchId=27) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ambiguitycheck] (batchId=75) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[analyze_table_null_partition] (batchId=84) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_1] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_2] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_reordering_values] (batchId=6) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket1] (batchId=43) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket2] (batchId=52) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket3] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark1] (batchId=70) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark2] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark3] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark4] (batchId=1) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[combine2] (batchId=6) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[comments] (batchId=38) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constantPropagateForSubQuery] (batchId=65) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynamic_partition_skip_default] (batchId=83) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_date] (batchId=21) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_full] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_partial] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_aggr] (batchId=85) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_join_breaktask] (batchId=77) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_union] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fouter_join_ppr] (batchId=33) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_1_23] (batchId=81) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_6] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_skew_1_23] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input23] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input42] (batchId=77) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input4] (batchId=84) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_part1] (batchId=8) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_part9] (batchId=26) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join17] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join26] (batchId=20) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join32] (batchId=19) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join33] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join34] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join35] (batchId=67) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join9] (batchId=50) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_filters_overlap] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_11] (batchId=19) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_12] (batchId=52) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_13]
[jira] [Commented] (HIVE-19305) Arrow format for LlapOutputFormatService (umbrella)
[ https://issues.apache.org/jira/browse/HIVE-19305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459592#comment-16459592 ] Uwe L. Korn commented on HIVE-19305: Nice to see that this is happening. Do you already have a first use case where you want to utilise Arrow to connect clients? > Arrow format for LlapOutputFormatService (umbrella) > --- > > Key: HIVE-19305 > URL: https://issues.apache.org/jira/browse/HIVE-19305 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Eric Wohlstadter >Assignee: Eric Wohlstadter >Priority: Major > > Allows external clients to consume output from LLAP daemons in Arrow stream > format. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19360) CBO: Add an "optimizedSQL" to QueryPlan object
[ https://issues.apache.org/jira/browse/HIVE-19360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459573#comment-16459573 ] Hive QA commented on HIVE-19360: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 50s{color} | {color:red} ql: The patch generated 4 new + 469 unchanged - 0 fixed = 473 total (was 469) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 8 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10599/dev-support/hive-personality.sh | | git revision | master / 758b913 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10599/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-10599/yetus/whitespace-eol.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10599/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > CBO: Add an "optimizedSQL" to QueryPlan object > --- > > Key: HIVE-19360 > URL: https://issues.apache.org/jira/browse/HIVE-19360 > Project: Hive > Issue Type: Improvement > Components: CBO, Diagnosability >Affects Versions: 3.1.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Major > Attachments: HIVE-19360.1.patch, HIVE-19360.2.patch, > HIVE-19360.3.patch > > > Calcite RelNodes can be converted back into SQL (as the new JDBC storage > handler does), which allows Hive to print out the post CBO plan as a SQL > query instead of having to guess the join orders from the subsequent Tez plan. 
> The query generated might not always be valid SQL at this point, but it is a
> world ahead of DAG plans in readability.
> E.g., the tpc-ds Query4 CTEs get expanded to
> {code}
> SELECT t16.$f3 customer_preferred_cust_flag
> FROM
>   (SELECT t0.c_customer_id $f0,
>           SUM((t2.ws_ext_list_price - t2.ws_ext_wholesale_cost -
>                t2.ws_ext_discount_amt + t2.ws_ext_sales_price) /
>               CAST(2 AS DECIMAL(10, 0))) $f8
>    FROM
>      (SELECT c_customer_sk,
>              c_customer_id,
>              c_first_name,
>              c_last_name,
>              c_preferred_cust_flag,
>              c_birth_country,
>              c_login,
>              c_email_address
>       FROM default.customer
>       WHERE c_customer_sk IS NOT NULL
>         AND c_customer_id IS NOT NULL) t0
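For intuition, the conversion this issue relies on is a tree-to-text unparse: the optimized RelNode tree is walked and equivalent SQL is emitted at each node. The toy below is NOT Calcite's API (Calcite's rel2sql package does the real work); every class and method name here is made up purely to illustrate the idea of unparsing a Scan/Filter/Project tree back into a SQL string.

```java
// Toy illustration of "unparsing" an optimized relational-algebra tree back
// into SQL text. NOT Calcite's API; all names are invented for illustration.
abstract class Rel {
    abstract String toSql();
}

class Scan extends Rel {
    final String table;
    Scan(String table) { this.table = table; }
    String toSql() { return "SELECT * FROM " + table; }
}

class Filter extends Rel {
    final Rel input;
    final String condition;
    Filter(Rel input, String condition) { this.input = input; this.condition = condition; }
    // Wrap the input as a subquery and attach the predicate as a WHERE clause.
    String toSql() { return "SELECT * FROM (" + input.toSql() + ") t WHERE " + condition; }
}

class Project extends Rel {
    final Rel input;
    final String columns;
    Project(Rel input, String columns) { this.input = input; this.columns = columns; }
    String toSql() { return "SELECT " + columns + " FROM (" + input.toSql() + ") t"; }
}

public class RelUnparse {
    public static void main(String[] args) {
        // Mirrors the shape of the expanded query above: scan -> filter -> project.
        Rel plan = new Project(
            new Filter(new Scan("default.customer"),
                       "c_customer_sk IS NOT NULL AND c_customer_id IS NOT NULL"),
            "c_customer_id, c_preferred_cust_flag");
        System.out.println(plan.toSql());
    }
}
```

The generated text is verbose (nested subqueries rather than the original CTEs), which matches the caveat above that the output is readable but not necessarily the SQL a human would write.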
[jira] [Commented] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459569#comment-16459569 ] Mass Dosage commented on HIVE-18767: [~pvary] I have run all the unit tests locally after doing a "mvn install" and they pass (with the exception of "TestDanglingQOuts", which I had to disable). The above tests don't appear to fail on my machine and also don't appear to be related to the changes I made. Do we need to kick the build off again? Is something going wrong on the build machine? > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore > Affects Versions: 2.3.2 > Reporter: Yuming Wang > Assignee: Mass Dosage > Priority: Major > Fix For: 2.3.3 > > Attachments: HIVE-18767-branch-2.3.patch, HIVE-18767.1.patch, HIVE-18767.2.patch, HIVE-18767.3.patch > > > Error messages:
> {noformat}
> [info] Cause: java.lang.NumberFormatException: null
> [info] at java.lang.Long.parseLong(Long.java:552)
> [info] at java.lang.Long.parseLong(Long.java:631)
> [info] at org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315)
> [info] at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605)
> [info] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837)
> [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [info] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [info] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [info] at java.lang.reflect.Method.invoke(Method.java:498)
> [info] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
> [info] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
> [info] at com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown Source)
> [info] at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527)
> {noformat}
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
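The trace points at {{Long.parseLong}} being fed a partition parameter that is absent: {{Long.parseLong(null)}} throws exactly this {{NumberFormatException: null}}. The sketch below is a hypothetical simplification of a fast-stats comparison with a null guard, not the actual {{MetaStoreUtils.isFastStatsSame}} code; the method and class names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class FastStatsGuard {
    // Hypothetical simplification: treat a missing stat as "not the same"
    // instead of letting Long.parseLong(null) throw.
    static boolean statEquals(Map<String, String> oldParams,
                              Map<String, String> newParams, String key) {
        String oldVal = oldParams.get(key);
        String newVal = newParams.get(key);
        if (oldVal == null || newVal == null) {
            // Long.parseLong(null) throws NumberFormatException, so guard first.
            return false;
        }
        return Long.parseLong(oldVal) == Long.parseLong(newVal);
    }

    public static void main(String[] args) {
        Map<String, String> withStats = new HashMap<>();
        withStats.put("totalSize", "1024");
        Map<String, String> withoutStats = new HashMap<>(); // stats never collected

        System.out.println(statEquals(withStats, withStats, "totalSize"));    // true
        System.out.println(statEquals(withStats, withoutStats, "totalSize")); // false, no exception

        try {
            Long.parseLong(withoutStats.get("totalSize")); // reproduces the reported failure
        } catch (NumberFormatException e) {
            System.out.println("caught NumberFormatException");
        }
    }
}
```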
[jira] [Commented] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459557#comment-16459557 ] Hive QA commented on HIVE-19135: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921322/HIVE-19135.4.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 14282 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=183) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] (batchId=95) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning (batchId=309) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveBackKill (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=241) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10597/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10597/console Test logs: 
http://104.198.109.242/logs/PreCommit-HIVE-Build-10597/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 35 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12921322 - PreCommit-HIVE-Build > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >
[jira] [Commented] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459529#comment-16459529 ] Hive QA commented on HIVE-19135: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 50s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 
0m 13s{color} | {color:red} beeline: The patch generated 7 new + 65 unchanged - 1 fixed = 72 total (was 66) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10597/dev-support/hive-personality.sh | | git revision | master / 758b913 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10597/yetus/diff-checkstyle-beeline.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10597/yetus/diff-checkstyle-itests_hive-unit.txt | | modules | C: beeline itests/hive-unit standalone-metastore U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10597/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, > HIVE-19135.4.patch, HIVE19135.patch > > > As part of upgrading to Hive 3, admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19314) Fix failures caused by HIVE-19137
[ https://issues.apache.org/jira/browse/HIVE-19314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459522#comment-16459522 ] Igor Kryvenko commented on HIVE-19314: -- [~kgyrtkirk] Hi. I can't reproduce the failing tests on current {{master}} and {{branch-3}}. I think they were fixed by HIVE-19269. > Fix failures caused by HIVE-19137 > - > > Key: HIVE-19314 > URL: https://issues.apache.org/jira/browse/HIVE-19314 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Igor Kryvenko >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19363) remove cryptic metrics from LLAP IO output
[ https://issues.apache.org/jira/browse/HIVE-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459514#comment-16459514 ] Hive QA commented on HIVE-19363: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921320/HIVE-19363.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 34 failed/errored test(s), 14270 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10596/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10596/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10596/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 34 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12921320 - PreCommit-HIVE-Build > remove cryptic metrics from LLAP IO output > -- > > Key: HIVE-19363 > URL: https://issues.apache.org/jira/browse/HIVE-19363 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19363.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19363) remove cryptic metrics from LLAP IO output
[ https://issues.apache.org/jira/browse/HIVE-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459487#comment-16459487 ] Hive QA commented on HIVE-19363: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10596/dev-support/hive-personality.sh | | git revision | master / 758b913 | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10596/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > remove cryptic metrics from LLAP IO output > -- > > Key: HIVE-19363 > URL: https://issues.apache.org/jira/browse/HIVE-19363 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19363.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19362) enable LLAP cache affinity by default
[ https://issues.apache.org/jira/browse/HIVE-19362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459478#comment-16459478 ] Hive QA commented on HIVE-19362: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12921317/HIVE-19362.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 14270 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) 
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10595/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10595/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10595/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 33 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12921317 - PreCommit-HIVE-Build > enable LLAP cache affinity by default > - > > Key: HIVE-19362 > URL: https://issues.apache.org/jira/browse/HIVE-19362 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19362.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19311) Partition and bucketing support for “load data” statement
[ https://issues.apache.org/jira/browse/HIVE-19311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459475#comment-16459475 ] Prasanth Jayachandran commented on HIVE-19311: -- Left some minor comments in RB. > Partition and bucketing support for "load data" statement > - > > Key: HIVE-19311 > URL: https://issues.apache.org/jira/browse/HIVE-19311 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19311.1.patch, HIVE-19311.2.patch, > HIVE-19311.3.patch, HIVE-19311.4.patch, HIVE-19311.5.patch, > HIVE-19311.6.patch, HIVE-19311.7.patch, HIVE-19311.8.patch > > > Currently, the "load data" statement is very limited. It errors out if any of the > required information is missing, such as partitioning info when the table is > partitioned or appropriately named files when the table is bucketed. > It should be able to launch an insert job to load the data instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
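One way to read "launch an insert job" concretely: a LOAD DATA whose files do not already match the table's partition/bucket layout can be rewritten into an INSERT ... SELECT over a staging external table placed on the source files, so the normal insert path takes care of partitioning and bucketing. The helper below is an illustrative sketch, not Hive's actual implementation; the staging-table name and the exact DDL shape are assumptions made for the example.

```java
public class LoadDataRewrite {
    // Illustrative rewrite of LOAD DATA into CREATE EXTERNAL TABLE + INSERT.
    // "tmp_load_src" is an assumed staging-table name, not anything Hive uses.
    static String rewrite(String srcPath, String targetTable, String partitionSpec) {
        String staging = "tmp_load_src";
        // Stage the source files as an external table with the target's schema.
        String createStaging = "CREATE EXTERNAL TABLE " + staging
            + " LIKE " + targetTable + " LOCATION '" + srcPath + "'";
        // Let the regular insert path distribute rows into partitions/buckets.
        String insert = "INSERT INTO TABLE " + targetTable
            + (partitionSpec == null ? "" : " PARTITION (" + partitionSpec + ")")
            + " SELECT * FROM " + staging;
        return createStaging + ";\n" + insert + ";";
    }

    public static void main(String[] args) {
        System.out.println(rewrite("/warehouse/staging/sales/ds=2018-05-01",
                                   "sales", "ds='2018-05-01'"));
    }
}
```

The point of the sketch is the shape of the rewrite, not the literal SQL: instead of rejecting a LOAD DATA with missing partition or bucket details, the statement becomes an insert job that computes them.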