[jira] [Commented] (HIVE-20025) Clean-up of event files created by HiveProtoLoggingHook.
[ https://issues.apache.org/jira/browse/HIVE-20025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533319#comment-16533319 ] Hive QA commented on HIVE-20025:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 14s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-12396/patches/PreCommit-HIVE-Build-12396.patch does not apply to master. Rebase required? Wrong branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} |

|| Subsystem || Report/Notes ||
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12396/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Clean-up of event files created by HiveProtoLoggingHook.
>
> Key: HIVE-20025
> URL: https://issues.apache.org/jira/browse/HIVE-20025
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Affects Versions: 3.0.0
> Reporter: Sankar Hariappan
> Assignee: Sankar Hariappan
> Priority: Major
> Labels: Hive, hooks, pull-request-available
> Fix For: 4.0.0
> Attachments: HIVE-20025.01-branch-3.patch, HIVE-20025.01.patch, HIVE-20025.02.patch, HIVE-20025.03.patch, HIVE-20025.04.patch
>
> Currently, HiveProtoLoggingHook writes event data to HDFS, and the number of files can grow very large.
> Since the files are created under a folder with the date as part of the path, Hive should have a way to clean up data older than a certain configured time/date. This can be a job that runs as infrequently as once a day.
> The retention time should default to 1 week. There should also be a sane upper bound on the number of files, so that when a large cluster generates a lot of files during a spike, we don't force the cluster to fall over.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
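The retention job described above can be sketched as follows. This is an illustrative sketch, not Hive's actual implementation: the one-subdirectory-per-day layout (`yyyy-MM-dd`), the `EventLogCleaner` class name, and the use of local-filesystem paths (rather than the HDFS FileSystem API) are all assumptions made for the example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class EventLogCleaner {

    /**
     * Deletes per-day event directories under baseDir that are older than
     * retentionDays. Assumes one subdirectory per day named yyyy-MM-dd;
     * returns the number of day-directories removed.
     */
    static int clean(Path baseDir, int retentionDays, LocalDate today) throws IOException {
        LocalDate cutoff = today.minusDays(retentionDays);
        List<Path> entries;
        try (Stream<Path> s = Files.list(baseDir)) {
            entries = s.collect(Collectors.toList());
        }
        int deleted = 0;
        for (Path day : entries) {
            LocalDate date;
            try {
                date = LocalDate.parse(day.getFileName().toString(), DateTimeFormatter.ISO_LOCAL_DATE);
            } catch (Exception e) {
                continue; // not a date-named directory; leave it alone
            }
            if (date.isBefore(cutoff)) {
                try (Stream<Path> files = Files.walk(day)) {
                    // delete children before their parent directories
                    for (Path p : files.sorted(Comparator.reverseOrder()).collect(Collectors.toList())) {
                        Files.delete(p);
                    }
                }
                deleted++;
            }
        }
        return deleted;
    }
}
```

Running such a job once a day, as the issue suggests, keeps the scan cheap: it only lists the top-level day directories and walks the ones past the cutoff.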
[jira] [Commented] (HIVE-20076) Delete on a partitioned table removes more rows than expected
[ https://issues.apache.org/jira/browse/HIVE-20076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533313#comment-16533313 ] Hive QA commented on HIVE-20076:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12930244/HIVE-20076.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12395/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12395/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12395/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12930244/HIVE-20076.2.patch was found in seen patch url's cache and a test was probably run already on it. Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12930244 - PreCommit-HIVE-Build

> Delete on a partitioned table removes more rows than expected
>
> Key: HIVE-20076
> URL: https://issues.apache.org/jira/browse/HIVE-20076
> Project: Hive
> Issue Type: Bug
> Components: Transactions
> Reporter: Teddy Choi
> Assignee: Teddy Choi
> Priority: Major
> Attachments: HIVE-20076.2.patch, HIVE-20076.patch
>
> Delete on a partitioned table removes more rows than expected

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20001) With doas set to true, running select query as hrt_qa user on external table fails due to permission denied to read /warehouse/tablespace/managed directory.
[ https://issues.apache.org/jira/browse/HIVE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533312#comment-16533312 ] Hive QA commented on HIVE-20001:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12930227/HIVE-20001.4.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.
{color:green}SUCCESS:{color} +1 due to 14642 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12394/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12394/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12394/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12930227 - PreCommit-HIVE-Build

> With doas set to true, running select query as hrt_qa user on external table
> fails due to permission denied to read /warehouse/tablespace/managed
> directory.
>
> Key: HIVE-20001
> URL: https://issues.apache.org/jira/browse/HIVE-20001
> Project: Hive
> Issue Type: Bug
> Reporter: Jaume M
> Assignee: Jaume M
> Priority: Major
> Labels: pull-request-available
> Attachments: HIVE-20001.1.patch, HIVE-20001.1.patch, HIVE-20001.2.patch, HIVE-20001.3.patch, HIVE-20001.4.patch
>
> Hive: With doas set to true, running a select query as user hrt_qa on an
> external table fails due to permission denied reading the
> /warehouse/tablespace/managed directory.
> Steps:
> 1. Create an external table.
> 2. Set doas to true.
> 3. Run select count(*) as user hrt_qa.
> Table creation query.
> {code} > beeline -n hrt_qa -p pwd -u > "jdbc:hive2://ctr-e138-1518143905142-375925-01-06.hwx.site:2181,ctr-e138-1518143905142-375925-01-05.hwx.site:2181,ctr-e138-1518143905142-375925-01-07.hwx.site:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_h...@example.com;transportMode=http;httpPath=cliservice;ssl=true;sslTrustStore=/etc/security/serverKeys/hivetruststore.jks;trustStorePassword=changeit" > --outputformat=tsv -e "drop table if exists test_table purge; > create external table test_table(id int, age int) row format delimited fields > terminated by '|' stored as textfile; > load data inpath '/tmp/table1.dat' overwrite into table test_table; > {code} > select count(*) query execution fails > {code} > beeline -n hrt_qa -p pwd -u > "jdbc:hive2://ctr-e138-1518143905142-375925-01-06.hwx.site:2181,ctr-e138-1518143905142-375925-01-05.hwx.site:2181,ctr-e138-1518143905142-375925-01-07.hwx.site:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_h...@example.com;transportMode=http;httpPath=cliservice;ssl=true;sslTrustStore=/etc/security/serverKeys/hivetruststore.jks;trustStorePassword=changeit" > --outputformat=tsv -e "select count(*) from test_table where age>30 and > id<10100;" > 2018-06-22 10:22:29,328|INFO|Thread-126|machine.py:111 - > tee_pipe()||b3a493ec-99be-483e-91fe-4b701ec27ebc|SLF4J: Class path contains > multiple SLF4J bindings. > 2018-06-22 10:22:29,330|INFO|Thread-126|machine.py:111 - > tee_pipe()||b3a493ec-99be-483e-91fe-4b701ec27ebc|SLF4J: See > http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
> 2018-06-22 10:22:29,335|INFO|Thread-126|machine.py:111 - > tee_pipe()||b3a493ec-99be-483e-91fe-4b701ec27ebc|SLF4J: Actual binding is of > type [org.apache.logging.slf4j.Log4jLoggerFactory] > 2018-06-22 10:22:31,408|INFO|Thread-126|machine.py:111 - > tee_pipe()||b3a493ec-99be-483e-91fe-4b701ec27ebc|Format tsv is deprecated, > please use tsv2 > 2018-06-22 10:22:31,529|INFO|Thread-126|machine.py:111 - > tee_pipe()||b3a493ec-99be-483e-91fe-4b701ec27ebc|Connecting to > jdbc:hive2://ctr-e138-1518143905142-375925-01-06.hwx.site:2181,ctr-e138-1518143905142-375925-01-05.hwx.site:2181,ctr-e138-1518143905142-375925-01-07.hwx.site:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_h...@example.com;transportMode=http;httpPath=cliservice;ssl=true;sslTrustStore=/etc/security/serverKeys/hivetruststore.jks;trustStorePassword=changeit > 2018-06-22 10:22:32,031|INFO|Thread-126|machine.py:111 - > tee_pipe()||b3a493ec-99be-483e-91fe-4b701ec27ebc|18/06/22 10:22:32 [main]: > INFO jdbc.HiveConnection: Connected to > ctr-e138-1518143905142-375925-01-04.hwx.site:10001 > 2018-06-22 10:22:34,130|INF
[jira] [Updated] (HIVE-19860) HiveServer2 ObjectInspectorFactory memory leak with cachedUnionStructObjectInspector
[ https://issues.apache.org/jira/browse/HIVE-19860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19860:

Fix Version/s: 3.2.0

pushed to branch-3 as well

> HiveServer2 ObjectInspectorFactory memory leak with
> cachedUnionStructObjectInspector
>
> Key: HIVE-19860
> URL: https://issues.apache.org/jira/browse/HIVE-19860
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Affects Versions: 2.1.0
> Environment: hiveserver2 Interactive with LLAP.
> Reporter: Rajkumar Singh
> Assignee: Rajkumar Singh
> Priority: Major
> Fix For: 4.0.0, 3.2.0
> Attachments: HIVE-19860.01.patch, HIVE-19860.02.patch, HIVE-19860.patch, Screen Shot 2018-06-11 at 2.01.00 PM.png
>
> HiveServer2 starts seeing memory pressure once cachedUnionStructObjectInspector starts growing:
> https://github.com/apache/hive/blob/master/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorFactory.java#L345
> I did not see any eviction policy for cachedUnionStructObjectInspector, so we
> should implement some size- or time-based eviction policy.
> !Screen Shot 2018-06-11 at 2.01.00 PM.png!

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
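A size-bounded eviction policy of the kind proposed can be sketched with `java.util.LinkedHashMap`'s access-order mode, which evicts the least-recently-used entry once a cap is exceeded. This is a generic illustration, not the actual ObjectInspectorFactory fix; the `BoundedCache` name and the generic key/value types are placeholders.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Access-ordered map that evicts its least-recently-used entry once size exceeds maxEntries. */
class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    BoundedCache(int maxEntries) {
        // accessOrder = true makes iteration (and eviction) follow access recency, not insertion
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // returning true tells LinkedHashMap to drop the eldest entry after each put
        return size() > maxEntries;
    }
}
```

A time-based alternative would track a timestamp per entry and prune on access, but the size bound alone already caps the cache's memory footprint, which is the symptom reported here.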
[jira] [Updated] (HIVE-20066) hive.load.data.owner is compared to full principal
[ https://issues.apache.org/jira/browse/HIVE-20066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-20066:

Fix Version/s: 3.2.0

pushed to branch-3 as well

> hive.load.data.owner is compared to full principal
>
> Key: HIVE-20066
> URL: https://issues.apache.org/jira/browse/HIVE-20066
> Project: Hive
> Issue Type: Bug
> Affects Versions: 3.1.0, 4.0.0
> Reporter: Daniel Voros
> Assignee: Daniel Voros
> Priority: Major
> Fix For: 4.0.0, 3.2.0
> Attachments: HIVE-20066.1.patch
>
> HIVE-19928 compares the user running HS2 to the configured owner
> (hive.load.data.owner) to check whether we're able to move the file with
> LOAD DATA or need to copy it.
> This check compares the full username (which may contain the full Kerberos
> principal) to hive.load.data.owner. We should compare to the short username
> ({{UGI.getShortUserName()}}) instead. That's used in a similar context
> [here|https://github.com/apache/hive/blob/f519db7eafacb4b4d2d9fe2a9e10e908d8077224/common/src/java/org/apache/hadoop/hive/common/FileUtils.java#L398].
> cc [~djaiswal]

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
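To illustrate why comparing the full principal to a plain username fails, here is a simplified sketch of the full-principal-to-short-name reduction. The `shortUserName` helper is hypothetical: Hadoop's real `UserGroupInformation.getShortUserName()` additionally applies the configurable `auth_to_local` mapping rules, whereas this sketch just takes the first principal component.

```java
public class PrincipalNames {

    /**
     * Reduces a Kerberos principal like "hive/host1@EXAMPLE.COM" to its short
     * name ("hive") by cutting at the first '/' or '@', whichever comes first.
     * A straight equality check against hive.load.data.owner would only succeed
     * on this short form, never on the full principal.
     */
    static String shortUserName(String principal) {
        int end = principal.length();
        int slash = principal.indexOf('/');
        if (slash >= 0) end = Math.min(end, slash);
        int at = principal.indexOf('@');
        if (at >= 0) end = Math.min(end, at);
        return principal.substring(0, end);
    }
}
```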
[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533302#comment-16533302 ] Junjie Chen commented on HIVE-17593:

[~Ferd], I haven't fully run the unit tests locally, so let me delete the patch first, since it will trigger a Hive build test. As for HIVE-17261, it depends on this issue.

> DataWritableWriter strip spaces for CHAR type before writing, but predicate
> generator doesn't do same thing.
>
> Key: HIVE-17593
> URL: https://issues.apache.org/jira/browse/HIVE-17593
> Project: Hive
> Issue Type: Bug
> Affects Versions: 2.3.0, 3.0.0
> Reporter: Junjie Chen
> Assignee: Junjie Chen
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.1.0
> Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, HIVE-17593.patch
>
> DataWritableWriter strips spaces for CHAR type before writing, while the
> predicate generator does NOT do the same stripping, which can cause missing data.
> In the current version it doesn't cause missing data, since predicates are not
> properly pushed down to Parquet due to HIVE-17261.
> Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING the
> same, which builds a predicate with trailing spaces.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
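The mismatch described above can be shown with a small helper: the predicate literal for a CHAR column needs its trailing pad spaces removed the same way the writer removes them before storing the value, or the pushed-down comparison can never match. This is a sketch only, not the actual ConvertAstToSearchArg change, and the `stripTrailingSpaces` name is hypothetical.

```java
public class CharPadding {

    /**
     * Removes trailing pad spaces from a CHAR literal. If the writer stores
     * "abc" (stripped) but the predicate is built from the padded literal
     * "abc  ", an exact-match filter silently drops every matching row; both
     * sides must go through the same stripping.
     */
    static String stripTrailingSpaces(String charValue) {
        int end = charValue.length();
        while (end > 0 && charValue.charAt(end - 1) == ' ') {
            end--;
        }
        return charValue.substring(0, end);
    }
}
```

Note that only *trailing* spaces are pad characters in CHAR semantics; leading spaces are significant and must be preserved.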
[jira] [Updated] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junjie Chen updated HIVE-17593:

Attachment: (was: HIVE-17593.4.patch)

> DataWritableWriter strip spaces for CHAR type before writing, but predicate
> generator doesn't do same thing.
>
> Key: HIVE-17593
> URL: https://issues.apache.org/jira/browse/HIVE-17593
> Project: Hive
> Issue Type: Bug
> Affects Versions: 2.3.0, 3.0.0
> Reporter: Junjie Chen
> Assignee: Junjie Chen
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.1.0
> Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, HIVE-17593.patch
>
> DataWritableWriter strips spaces for CHAR type before writing, while the
> predicate generator does NOT do the same stripping, which can cause missing data.
> In the current version it doesn't cause missing data, since predicates are not
> properly pushed down to Parquet due to HIVE-17261.
> Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING the
> same, which builds a predicate with trailing spaces.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20001) With doas set to true, running select query as hrt_qa user on external table fails due to permission denied to read /warehouse/tablespace/managed directory.
[ https://issues.apache.org/jira/browse/HIVE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533295#comment-16533295 ] Hive QA commented on HIVE-20001: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 59s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 20s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 7s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12394/dev-support/hive-personality.sh | | git revision | master / 5e2a530 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12394/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > With doas set to true, running select query as hrt_qa user on external table > fails due to permission denied to read /warehouse/tablespace/managed > directory. > > > Key: HIVE-20001 > URL: https://issues.apache.org/jira/browse/HIVE-20001 > Project: Hive > Issue Type: Bug >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20001.1.patch, HIVE-20001.1.patch, > HIVE-20001.2.patch, HIVE-20001.3.patch, HIVE-20001.4.patch > > > Hive: With doas set to true, running select query as hrt_qa user on external > table fails due to permission denied to read /warehouse/tablespace/managed > directory. > Steps: > 1. Create a external table. > 2. Set doas to true. > 3. run select count(*) using user hrt_qa. > Table creation query. > {code} >
[jira] [Updated] (HIVE-20092) Hive create table like comment
[ https://issues.apache.org/jira/browse/HIVE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wxmimperio updated HIVE-20092:

Attachment: image-2018-07-05-14-05-59-644.png

> Hive create table like comment
>
> Key: HIVE-20092
> URL: https://issues.apache.org/jira/browse/HIVE-20092
> Project: Hive
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.1.0
> Reporter: wxmimperio
> Priority: Major
> Attachments: image-2018-07-05-14-05-59-644.png
>
> In Cloudera's documentation:
> *Cloning tables (LIKE clause):*
> *!image-2018-07-05-14-01-56-160.png!*
> [https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_create_table.html]
> However, the following cannot run:
> {code:java}
> hive> CREATE TABLE `test_like` LIKE `test` COMMENT 'test_like' STORED AS ORC
> TBLPROPERTIES('orc.compress'='SNAPPY');
> FAILED: ParseException line 1:38 missing EOF at 'COMMENT' near 'test'
> {code}
> Can I add a COMMENT in the SQL when I create a table with LIKE?

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20092) Hive create table like comment
[ https://issues.apache.org/jira/browse/HIVE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wxmimperio updated HIVE-20092:

Description:
In Cloudera's documentation:
*Cloning tables (LIKE clause):*
!image-2018-07-05-14-05-59-644.png!
[https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_create_table.html]
However, the following cannot run:
{code:java}
hive> CREATE TABLE `test_like` LIKE `test` COMMENT 'test_like' STORED AS ORC TBLPROPERTIES('orc.compress'='SNAPPY');
FAILED: ParseException line 1:38 missing EOF at 'COMMENT' near 'test'
{code}
Can I add a COMMENT in the SQL when I create a table with LIKE?

was:
In Cloudera's documentation:
*Cloning tables (LIKE clause):*
*!image-2018-07-05-14-01-56-160.png!*
[https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_create_table.html]
However, the following cannot run:
{code:java}
hive> CREATE TABLE `test_like` LIKE `test` COMMENT 'test_like' STORED AS ORC TBLPROPERTIES('orc.compress'='SNAPPY');
FAILED: ParseException line 1:38 missing EOF at 'COMMENT' near 'test'
{code}
Can I add a COMMENT in the SQL when I create a table with LIKE?

> Hive create table like comment
>
> Key: HIVE-20092
> URL: https://issues.apache.org/jira/browse/HIVE-20092
> Project: Hive
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.1.0
> Reporter: wxmimperio
> Priority: Major
> Attachments: image-2018-07-05-14-05-59-644.png
>
> In Cloudera's documentation:
> *Cloning tables (LIKE clause):*
> !image-2018-07-05-14-05-59-644.png!
> [https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_create_table.html]
> However, the following cannot run:
> {code:java}
> hive> CREATE TABLE `test_like` LIKE `test` COMMENT 'test_like' STORED AS ORC
> TBLPROPERTIES('orc.compress'='SNAPPY');
> FAILED: ParseException line 1:38 missing EOF at 'COMMENT' near 'test'
> {code}
> Can I add a COMMENT in the SQL when I create a table with LIKE?

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
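While Hive's `CREATE TABLE ... LIKE` grammar rejects a `COMMENT` clause, one workaround sketch is to clone first and attach the comment afterwards. Hive stores a table's comment under the `comment` key in TBLPROPERTIES, so it can be set with `ALTER TABLE`; whether the other clauses from the original statement are accepted alongside `LIKE` depends on the Hive version, so they are applied separately here as an assumption.

```sql
-- CREATE TABLE ... LIKE does not accept a COMMENT clause, so clone first
CREATE TABLE `test_like` LIKE `test`;

-- then attach the comment via table properties (Hive keeps the comment there)
ALTER TABLE `test_like` SET TBLPROPERTIES ('comment' = 'test_like');
```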
[jira] [Updated] (HIVE-19850) Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error
[ https://issues.apache.org/jira/browse/HIVE-19850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-19850: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Ganesha! > Dynamic partition pruning in Tez is leading to 'No work found for tablescan' > error > -- > > Key: HIVE-19850 > URL: https://issues.apache.org/jira/browse/HIVE-19850 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 3.0.0 >Reporter: Ganesha Shreedhara >Assignee: Ganesha Shreedhara >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19850.patch > > > > When multiple views are used along with union all, it is resulting in the > following error when dynamic partition pruning is enabled in tez. > > {code:java} > Exception in thread "main" java.lang.AssertionError: No work found for > tablescan TS[8] > at > org.apache.hadoop.hive.ql.parse.GenTezUtils.processAppMasterEvent(GenTezUtils.java:408) > at > org.apache.hadoop.hive.ql.parse.TezCompiler.generateTaskTree(TezCompiler.java:383) > at > org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:205) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10371) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:479) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:347) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1203) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1257) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1140) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1130) > at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:204) 
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:433) > at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:894) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:825) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:726) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.run(RunJar.java:223) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136){code} > > *Steps to reproduce:* > set hive.execution.engine=tez; > set hive.tez.dynamic.partition.pruning=true; > CREATE TABLE t1(key string, value string, c_int int, c_float float, c_boolean > boolean) partitioned by (dt string); > CREATE TABLE t2(key string, value string, c_int int, c_float float, c_boolean > boolean) partitioned by (dt string); > CREATE TABLE t3(key string, value string, c_int int, c_float float, c_boolean > boolean) partitioned by (dt string); > > insert into table t1 partition(dt='2018') values ('k1','v1',1,1.0,true); > insert into table t2 partition(dt='2018') values ('k2','v2',2,2.0,true); > insert into table t3 partition(dt='2018') values ('k3','v3',3,3.0,true); > > CREATE VIEW `view1` AS select > `t1`.`key`,`t1`.`value`,`t1`.`c_int`,`t1`.`c_float`,`t1`.`c_boolean`,`t1`.`dt` > from `t1` union all select > `t2`.`key`,`t2`.`value`,`t2`.`c_int`,`t2`.`c_float`,`t2`.`c_boolean`,`t2`.`dt` > from `t2`; > CREATE VIEW `view2` AS select > `t2`.`key`,`t2`.`value`,`t2`.`c_int`,`t2`.`c_float`,`t2`.`c_boolean`,`t2`.`dt` > from `t2` union all select > `t3`.`key`,`t3`.`value`,`t3`.`c_int`,`t3`.`c_float`,`t3`.`c_boolean`,`t3`.`dt` > from `t3`; > create table t4 as select key,value,c_int,c_float,c_boolean,dt from t1 union > all select 
v1.key,v1.value,v1.c_int,v1.c_float,v1.c_boolean,v1.dt from view1 > v1 join view2 v2 on v1.dt=v2.dt; > CREATE VIEW `view3` AS select > `t4`.`key`,`t4`.`value`,`t4`.`c_int`,`t4`.`c_float`,`t4`.`c_boolean`,`t4`.`dt` > from `t4` union all select > `t1`.`key`,`t1`.`value`,`t1`.`c_int`,`t1`.`c_float`,`t1`.`c_boolean`,`t1`.`dt` > from `t1`; > > select count(0) from view2 v2 join view3 v3 on v2.dt=v3.dt; // Throws No work > found for tablescan error -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19850) Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error
[ https://issues.apache.org/jira/browse/HIVE-19850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533283#comment-16533283 ] Ashutosh Chauhan commented on HIVE-19850: - +1 > Dynamic partition pruning in Tez is leading to 'No work found for tablescan' > error > -- > > Key: HIVE-19850 > URL: https://issues.apache.org/jira/browse/HIVE-19850 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 3.0.0 >Reporter: Ganesha Shreedhara >Assignee: Ganesha Shreedhara >Priority: Major > Attachments: HIVE-19850.patch > > > > When multiple views are used along with union all, it is resulting in the > following error when dynamic partition pruning is enabled in tez. > > {code:java} > Exception in thread "main" java.lang.AssertionError: No work found for > tablescan TS[8] > at > org.apache.hadoop.hive.ql.parse.GenTezUtils.processAppMasterEvent(GenTezUtils.java:408) > at > org.apache.hadoop.hive.ql.parse.TezCompiler.generateTaskTree(TezCompiler.java:383) > at > org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:205) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10371) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:479) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:347) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1203) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1257) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1140) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1130) > at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:204) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:433) > 
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:894) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:825) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:726) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.run(RunJar.java:223) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136){code} > > *Steps to reproduce:* > set hive.execution.engine=tez; > set hive.tez.dynamic.partition.pruning=true; > CREATE TABLE t1(key string, value string, c_int int, c_float float, c_boolean > boolean) partitioned by (dt string); > CREATE TABLE t2(key string, value string, c_int int, c_float float, c_boolean > boolean) partitioned by (dt string); > CREATE TABLE t3(key string, value string, c_int int, c_float float, c_boolean > boolean) partitioned by (dt string); > > insert into table t1 partition(dt='2018') values ('k1','v1',1,1.0,true); > insert into table t2 partition(dt='2018') values ('k2','v2',2,2.0,true); > insert into table t3 partition(dt='2018') values ('k3','v3',3,3.0,true); > > CREATE VIEW `view1` AS select > `t1`.`key`,`t1`.`value`,`t1`.`c_int`,`t1`.`c_float`,`t1`.`c_boolean`,`t1`.`dt` > from `t1` union all select > `t2`.`key`,`t2`.`value`,`t2`.`c_int`,`t2`.`c_float`,`t2`.`c_boolean`,`t2`.`dt` > from `t2`; > CREATE VIEW `view2` AS select > `t2`.`key`,`t2`.`value`,`t2`.`c_int`,`t2`.`c_float`,`t2`.`c_boolean`,`t2`.`dt` > from `t2` union all select > `t3`.`key`,`t3`.`value`,`t3`.`c_int`,`t3`.`c_float`,`t3`.`c_boolean`,`t3`.`dt` > from `t3`; > create table t4 as select key,value,c_int,c_float,c_boolean,dt from t1 union > all select v1.key,v1.value,v1.c_int,v1.c_float,v1.c_boolean,v1.dt from view1 > v1 join view2 v2 on v1.dt=v2.dt; > CREATE VIEW 
`view3` AS select > `t4`.`key`,`t4`.`value`,`t4`.`c_int`,`t4`.`c_float`,`t4`.`c_boolean`,`t4`.`dt` > from `t4` union all select > `t1`.`key`,`t1`.`value`,`t1`.`c_int`,`t1`.`c_float`,`t1`.`c_boolean`,`t1`.`dt` > from `t1`; > > select count(0) from view2 v2 join view3 v3 on v2.dt=v3.dt; // Throws No work > found for tablescan error -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20035) write booleans as long when serializing to druid
[ https://issues.apache.org/jira/browse/HIVE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533281#comment-16533281 ] Ashutosh Chauhan commented on HIVE-20035:

[~nishantbangarwa] Can you reattach your patch so that it may get a QA run?

> write booleans as long when serializing to druid
>
> Key: HIVE-20035
> URL: https://issues.apache.org/jira/browse/HIVE-20035
> Project: Hive
> Issue Type: Bug
> Reporter: Nishant Bangarwa
> Assignee: Nishant Bangarwa
> Priority: Major
> Attachments: HIVE-20035.1.patch, HIVE-20035.patch
>
> Druid expressions do not support booleans yet.
> In Druid expressions, booleans are treated as and parsed from longs; however,
> when we store booleans from Hive they are serialized as the string values
> 'true' and 'false'.
> We need to make serialization consistent with deserialization and write long
> values when sending data to Druid.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
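The consistency fix described above amounts to writing booleans as `1L`/`0L` and reading them back from longs, so the two directions round-trip. A minimal sketch; the `DruidBooleans` class and its method names are hypothetical, not the actual Hive/Druid serde code.

```java
public class DruidBooleans {

    /** Writes a Hive boolean in the long form Druid expressions can parse (1L/0L). */
    static long toDruidLong(boolean value) {
        return value ? 1L : 0L;
    }

    /** Reads a boolean back the way Druid expressions treat longs (non-zero = true). */
    static boolean fromDruidLong(long value) {
        return value != 0L;
    }
}
```

The point of the issue is exactly this symmetry: serializing `"true"`/`"false"` strings breaks `fromDruidLong`-style deserialization, while `1L`/`0L` round-trips cleanly.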
[jira] [Commented] (HIVE-18545) Add UDF to parse complex types from json
[ https://issues.apache.org/jira/browse/HIVE-18545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533280#comment-16533280 ] Ashutosh Chauhan commented on HIVE-18545:

+1

> Add UDF to parse complex types from json
>
> Key: HIVE-18545
> URL: https://issues.apache.org/jira/browse/HIVE-18545
> Project: Hive
> Issue Type: Improvement
> Affects Versions: 4.0.0
> Reporter: Zoltan Haindrich
> Assignee: Zoltan Haindrich
> Priority: Major
> Attachments: HIVE-18545.02.patch, HIVE-18545.03.patch, HIVE-18545.04.patch, HIVE-18545.05.patch, HIVE-18545.06.patch, HIVE-18545.06.patch, HIVE-18545.06.patch

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533270#comment-16533270 ] Hive QA commented on HIVE-17751: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930212/HIVE-17751.10.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12391/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12391/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12391/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-07-05 05:26:23.356 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12391/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-07-05 05:26:23.360 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 5e2a530 HIVE-20066 : hive.load.data.owner is compared to full principal (Daniel Voros via Zoltan Haindrich) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 5e2a530 HIVE-20066 : hive.load.data.owner is compared to full principal (Daniel Voros via Zoltan Haindrich) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-07-05 05:26:24.337 + rm -rf ../yetus_PreCommit-HIVE-Build-12391 + mkdir ../yetus_PreCommit-HIVE-Build-12391 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12391 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12391/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch fatal: git apply: bad git-diff - inconsistent old filename on line 2765 error: patch failed: standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java:760 Falling back to three-way merge... Applied patch to 'standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java' with conflicts. Going to apply patch with: git apply -p1 error: patch failed: standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java:760 Falling back to three-way merge... Applied patch to 'standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java' with conflicts. /data/hiveptest/working/scratch/build.patch:6878: new blank line at EOF. 
+ U standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java warning: 1 line adds whitespace errors. + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-12391 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12930212 - PreCommit-HIVE-Build > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, HIVE-17751.02.patch, > HIVE-17751.03.patch, HIVE-17751.04.patch, > HIVE-17751.06-standalone-metastore.patch, HIVE-17751.07.patch, > HIVE-17751.08.patch, HIVE-17751.09.patch, HIVE-17751.10.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We sh
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533271#comment-16533271 ] Hive QA commented on HIVE-17751: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930212/HIVE-17751.10.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12392/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12392/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12392/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12930212/HIVE-17751.10.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12930212 - PreCommit-HIVE-Build > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, HIVE-17751.02.patch, > HIVE-17751.03.patch, HIVE-17751.04.patch, > HIVE-17751.06-standalone-metastore.patch, HIVE-17751.07.patch, > HIVE-17751.08.patch, HIVE-17751.09.patch, HIVE-17751.10.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We should ideally have a thin client library so that cross version > support for external applications is easier. 
We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20039) Bucket pruning: Left Outer Join on bucketed table gives wrong result
[ https://issues.apache.org/jira/browse/HIVE-20039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533269#comment-16533269 ] Hive QA commented on HIVE-20039: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930204/HIVE-20039.01-branch-3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12390/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12390/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12390/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12930204/HIVE-20039.01-branch-3.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12930204 - PreCommit-HIVE-Build > Bucket pruning: Left Outer Join on bucketed table gives wrong result > - > > Key: HIVE-20039 > URL: https://issues.apache.org/jira/browse/HIVE-20039 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0, 2.3.2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-20039.01-branch-3.patch, HIVE-20039.1.patch, > HIVE-20039.2.patch, HIVE-20039.3.patch, HIVE-20039.4.patch > > > Left outer join on bucketed table on certain cases gives wrong results. > Depending on the order in which the table-scans are walked through, the > FilterPruner might end up using the wrong table scan's table properties on > the other table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18852) Misleading error message in alter table validation
[ https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533268#comment-16533268 ] Hive QA commented on HIVE-18852: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930203/HIVE-18852.1.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14639 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12389/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12389/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12389/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12930203 - PreCommit-HIVE-Build > Misleading error message in alter table validation > -- > > Key: HIVE-18852 > URL: https://issues.apache.org/jira/browse/HIVE-18852 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.4.0 >Reporter: Dan Burkert >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18852.1.patch > > > The metastore's validation error message when attempting to rename a table to > a non-existent database is wrong. For instance, attempting to alter table > 'db.table' to 'non_existent_database.table' results in the Thrift error: > {{TException - service has thrown: InvalidOperationException(message=Unable > to change partition or table. 
Database db does not exist Check metastore logs > for detailed stack.non_existent_database)}} > I believe the offending line of code is > [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333], > notice that {{dbname}} is used in the message, not {{newDbName}}. I don't > know if switching that would cause the non-existing {{dbname}} case > to regress, though. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
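The suggested fix is small: build the message from the rename target rather than the source database. A hypothetical sketch (the method and parameter names are invented for illustration; the real code lives in HiveAlterHandler):

```java
// Hypothetical sketch of the fix suggested in the report above: when a rename
// targets a database that does not exist, the error should name the *target*
// database (newDbName), not the source one (dbname).
public class AlterTableMessage {

    static String missingDbMessage(String dbname, String newDbName) {
        // Per the report, the bug is that dbname was used in this message
        // instead of newDbName.
        return "Unable to change partition or table. Database "
                + newDbName + " does not exist";
    }

    public static void main(String[] args) {
        System.out.println(missingDbMessage("db", "non_existent_database"));
    }
}
```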
[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533267#comment-16533267 ] Ferdinand Xu commented on HIVE-17593: - Any unit test to cover the change for the new patch? And do we need to resolve HIVE-17261 first so that searching argument can finally be used? > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, > HIVE-17593.4.patch, HIVE-17593.patch > > > DataWritableWriter strip spaces for CHAR type before writing. While when > generating predicate, it does NOT do same striping which should cause data > missing! > In current version, it doesn't cause data missing since predicate is not well > push down to parquet due to HIVE-17261. > Please see ConvertAstTosearchArg.java, getTypes treats CHAR and STRING as > same which will build a predicate with tail spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junjie Chen updated HIVE-17593: --- Attachment: HIVE-17593.4.patch > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, > HIVE-17593.4.patch, HIVE-17593.patch > > > DataWritableWriter strip spaces for CHAR type before writing. While when > generating predicate, it does NOT do same striping which should cause data > missing! > In current version, it doesn't cause data missing since predicate is not well > push down to parquet due to HIVE-17261. > Please see ConvertAstTosearchArg.java, getTypes treats CHAR and STRING as > same which will build a predicate with tail spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533260#comment-16533260 ] Junjie Chen commented on HIVE-17593: [~Ferd], I may understand the definition in wrong way. As I listed definition in above, length, comparison, and hashcode should be ignored for HiveChar, so we should not change LENGTH(column) all to 5 in qtest result. Furthermore, I checked HiveChar conversion in other places, such as PrimitiveObjectInspectorConverter.java and PrimitiveObjectInspectorOrUtils.java in hive serder2 package, they use stripped value explicitly. So I think the easy way is to change ConvertAstToSeachArgs.java to use stripped value for HiveChar as well. > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, HIVE-17593.patch > > > DataWritableWriter strip spaces for CHAR type before writing. While when > generating predicate, it does NOT do same striping which should cause data > missing! > In current version, it doesn't cause data missing since predicate is not well > push down to parquet due to HIVE-17261. > Please see ConvertAstTosearchArg.java, getTypes treats CHAR and STRING as > same which will build a predicate with tail spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
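The proposal above, stripping the CHAR value before it is used in the search argument, can be sketched as follows. getStrippedValue here is a stand-in modeled on HiveChar's trailing-space semantics, not the actual serde2 code:

```java
// Hypothetical sketch: strip trailing spaces from a CHAR literal before it is
// used in a predicate, so the comparison matches the stripped values that
// DataWritableWriter writes to Parquet.
public class CharPredicate {

    // Modeled on HiveChar: trailing padding spaces are not significant.
    static String getStrippedValue(String padded) {
        int end = padded.length();
        while (end > 0 && padded.charAt(end - 1) == ' ') {
            end--;
        }
        return padded.substring(0, end);
    }

    public static void main(String[] args) {
        // A CHAR(5) column stores "abc", but the literal arrives padded to "abc  ".
        String literal = "abc  ";
        System.out.println("[" + getStrippedValue(literal) + "]");  // [abc]
    }
}
```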
[jira] [Commented] (HIVE-18852) Misleading error message in alter table validation
[ https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533258#comment-16533258 ] Hive QA commented on HIVE-18852: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 21s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 19s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12389/dev-support/hive-personality.sh | | git revision | master / 5e2a530 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12389/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Misleading error message in alter table validation > -- > > Key: HIVE-18852 > URL: https://issues.apache.org/jira/browse/HIVE-18852 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.4.0 >Reporter: Dan Burkert >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18852.1.patch > > > The metastore's validation error message when attempting to rename a table to > a non-existent database is wrong. For instance, attempting to alter table > 'db.table' to 'non_existent_database.table' results in the Thrift error: > {{TException - service has thrown: InvalidOperationException(message=Unable > to change partition or table. Database db does not exist Check metastore logs > for detailed stack.non_existent_database)}} > I believe the offending line of code is > [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333], > notice that {{dbname}} is used in the message, not {{newDbName}}. I don't > know if switching that
[jira] [Commented] (HIVE-20079) Populate more accurate rawDataSize for parquet format
[ https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533250#comment-16533250 ] Hive QA commented on HIVE-20079: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930200/HIVE-20079.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 66 failed/errored test(s), 14638 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nested_column_pruning] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_analyze] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_complex_types_vectorization] (batchId=75) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_join] (batchId=20) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_map_type_vectorization] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_no_row_serde] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_struct_type_vectorization] (batchId=27) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_types_non_dictionary_encoding_vectorization] (batchId=89) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_types_vectorization] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_10] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_11] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_12] (batchId=24) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_13] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_14] (batchId=40) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_15] (batchId=90) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_16] (batchId=85) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_17] (batchId=30) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_1] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_2] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_3] (batchId=80) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_4] (batchId=45) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_5] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_6] (batchId=43) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_7] (batchId=88) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_8] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_9] (batchId=31) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_decimal_date] (batchId=31) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_div0] (batchId=80) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_limit] (batchId=25) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_offset_limit] (batchId=34) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_part_project] (batchId=37) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_pushdown] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_numeric_overflows] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_parquet_projection] (batchId=45) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_parquet_types] (batchId=69) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_partitioned_date_time] (batchId=175) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning] (batchId=184) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_join] (batchId=116) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_0] (batchId=114) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_10] (batchId=117) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_11] (batchId=124) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_12] (batchId=118) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_13] (batchId=131) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_14] (batchId=124) org.apache.hadoop
[jira] [Commented] (HIVE-20079) Populate more accurate rawDataSize for parquet format
[ https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533242#comment-16533242 ] Hive QA commented on HIVE-20079: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 29s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 3 new + 8 unchanged - 3 fixed = 11 total (was 11) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12388/dev-support/hive-personality.sh | | git revision | master / 5e2a530 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12388/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12388/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Populate more accurate rawDataSize for parquet format > - > > Key: HIVE-20079 > URL: https://issues.apache.org/jira/browse/HIVE-20079 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Attachments: HIVE-20079.1.patch > > > Run the following queries and you will see the raw data for the table is 4 > (that is the number of fields) incorrectly. We need to populate correct data > size so data can be split properly. 
> {noformat} > SET hive.stats.autogather=true; > CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET; > INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1'); > DESC FORMATTED parquet_stats; > {noformat} > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > numFiles 1 > numRows 2 > rawDataSize 4 > totalSize 373 > transient_lastDdlTime 1530660523 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
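The rawDataSize of 4 above is just the field count (2 rows x 2 columns), which says nothing about how many bytes the rows actually hold. A hypothetical sketch contrasting that with a per-value byte estimate (the sizing rule here is illustrative, not Hive's actual Parquet stats code):

```java
// Hypothetical sketch: field-count "size" (the reported behavior) vs. summing
// an uncompressed byte estimate per value, which is what a useful rawDataSize
// would look like for splitting data.
public class RawDataSize {

    // Reported behavior: one unit per field, regardless of value size.
    static long fieldCountSize(int numRows, int numCols) {
        return (long) numRows * numCols;
    }

    // More accurate: sum an estimated byte size for each value.
    static long summedSize(Object[][] rows) {
        long total = 0;
        for (Object[] row : rows) {
            for (Object v : row) {
                if (v instanceof Integer) {
                    total += 4;                      // int: 4 bytes
                } else if (v instanceof String) {
                    total += ((String) v).length();  // string: its length
                }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Object[][] rows = {{0, "this is string 0"}, {1, "string 1"}};
        System.out.println(fieldCountSize(2, 2));  // 4, as in the DESC output
        System.out.println(summedSize(rows));      // 32 = (4+16) + (4+8)
    }
}
```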
[jira] [Commented] (HIVE-19937) Intern JobConf objects in Spark tasks
[ https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533216#comment-16533216 ] Hive QA commented on HIVE-19937: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12928254/HIVE-19937.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14637 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] (batchId=261) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12387/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12387/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12387/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12928254 - PreCommit-HIVE-Build > Intern JobConf objects in Spark tasks > - > > Key: HIVE-19937 > URL: https://issues.apache.org/jira/browse/HIVE-19937 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19937.1.patch, report.html > > > When fixing HIVE-16395, we decided that each new Spark task should clone the > {{JobConf}} object to prevent any {{ConcurrentModificationException}} from > being thrown. However, setting this variable comes at a cost of storing a > duplicate {{JobConf}} object for each Spark task. 
These objects can take up a > significant amount of memory; we should intern them so that Spark tasks > running in the same JVM don't store duplicate copies. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
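The interning idea described above can be sketched with a canonicalizing cache: equal payloads resolve to one shared instance instead of each task holding its own copy. This is an illustration only, not Hive's actual implementation (which interns strings inside the cloned JobConf):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of interning: tasks that would each hold an identical
// serialized configuration instead share one cached canonical copy.
public class ConfInterner {

    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    // Returns the canonical shared instance for any equal payload.
    static String intern(String serializedConf) {
        return CACHE.computeIfAbsent(serializedConf, k -> k);
    }

    public static void main(String[] args) {
        // Two distinct but equal instances, as two tasks in one JVM would see.
        String a = intern(new String("hive.exec.dynamic.partition=true"));
        String b = intern(new String("hive.exec.dynamic.partition=true"));
        System.out.println(a == b);  // true: both tasks share one object
    }
}
```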
[jira] [Commented] (HIVE-19937) Intern JobConf objects in Spark tasks
[ https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533175#comment-16533175 ] Hive QA commented on HIVE-19937: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 58s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12387/dev-support/hive-personality.sh | | git revision | master / 5e2a530 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12387/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Intern JobConf objects in Spark tasks > - > > Key: HIVE-19937 > URL: https://issues.apache.org/jira/browse/HIVE-19937 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19937.1.patch, report.html > > > When fixing HIVE-16395, we decided that each new Spark task should clone the > {{JobConf}} object to prevent any {{ConcurrentModificationException}} from > being thrown. However, setting this variable comes at a cost of storing a > duplicate {{JobConf}} object for each Spark task. These objects can take up a > significant amount of memory, we should intern them so that Spark tasks > running in the same JVM don't store duplicate copies. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
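The description above also recalls why HIVE-16395 introduced the per-task clone in the first place: iterating a live shared configuration while another task mutates it throws ConcurrentModificationException. A minimal pure-JDK sketch of that failure mode (the class name CmeDemo is hypothetical; a HashMap stands in for the JobConf's property map):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

// Demonstrates the fail-fast behavior that motivated cloning the JobConf:
// structurally modifying a map while iterating it throws CME. Cloning avoids
// this, at the duplicate-copy memory cost HIVE-19937 describes.
final class CmeDemo {
    static boolean iterationFailsWhenMutated() {
        Map<String, String> shared = new HashMap<>();
        shared.put("a", "1");
        shared.put("b", "2");
        try {
            for (String key : shared.keySet()) {
                shared.put("c", "3"); // simulates a concurrent writer
            }
            return false; // not reached: next() detects the modification
        } catch (ConcurrentModificationException expected) {
            return true;
        }
    }
}
```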
[jira] [Commented] (HIVE-20060) Refactor HiveSchemaTool and MetastoreSchemaTool
[ https://issues.apache.org/jira/browse/HIVE-20060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533164#comment-16533164 ] Hive QA commented on HIVE-20060: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930192/HIVE-20060.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14626 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12386/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12386/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12386/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12930192 - PreCommit-HIVE-Build > Refactor HiveSchemaTool and MetastoreSchemaTool > --- > > Key: HIVE-20060 > URL: https://issues.apache.org/jira/browse/HIVE-20060 > Project: Hive > Issue Type: Task > Components: Beeline, Metastore >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-20060.patch > > > These two classes are 95% the same. Now that HIVE-19711 has split > HiveSchemaTool into multiple components it will be much easier to refactor > these so that there is only one version of the code that each shares. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20060) Refactor HiveSchemaTool and MetastoreSchemaTool
[ https://issues.apache.org/jira/browse/HIVE-20060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533156#comment-16533156 ] Hive QA commented on HIVE-20060: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 55s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} beeline in master has 56 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 35s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s{color} | {color:red} standalone-metastore: The patch generated 58 new + 67 unchanged - 126 fixed = 125 total (was 193) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} beeline: The patch generated 2 new + 7 unchanged - 18 fixed = 9 total (was 25) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 86 unchanged - 81 fixed = 88 total (was 167) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 3s{color} | {color:red} standalone-metastore generated 6 new + 213 unchanged - 15 fixed = 219 total (was 228) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 38s{color} | {color:red} beeline generated 2 new + 51 unchanged - 5 fixed = 53 total (was 56) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:standalone-metastore | | | Found reliance on default encoding in org.apache.hadoop.hive.metastore.tools.MetastoreSchemaTool.execSql(String):in org.apache.hadoop.hive.metastore.tools.MetastoreSchemaTool.execSql(String): new java.io.PrintStream(OutputStream) At MetastoreSchemaTool.java:[line 310] | | | org.apache.hadoop.hive.metastore.tools.SchemaToolTaskAlterCatalog.execute() passes a nonconstant String to an execute or addBatch method on an SQL statement At SchemaToolTaskAlterCatalog.java:to an execute or addBatch method on an SQL statement At SchemaToolTaskAlterCatalog.java:[line 70] | | | Found reliance on default encoding in org.apache.hadoop.hive.metastore.tools.SchemaToolTaskCreateUser.oracleCreateUserHack(File):in org.apache.hadoop.hive.metastore.tools.SchemaToolTaskCreateUser.oracleCreateUserHack(File): new java.io.FileReader(File) At SchemaToolTaskCreateUser.java:[line 89] | | | Found reliance on default encoding in org.apache.hadoop.hive.metast
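The "reliance on default encoding" FindBugs warnings listed above are conventionally resolved by naming the charset explicitly rather than letting new PrintStream(OutputStream) or new FileReader(File) pick the platform default. A minimal sketch of the fixed pattern (illustrative only, not the actual MetastoreSchemaTool code; ExplicitCharsetDemo is a hypothetical class):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

final class ExplicitCharsetDemo {
    // Writes a line through a PrintStream with an explicit charset, so the
    // bytes produced are identical regardless of the JVM's file.encoding.
    static byte[] writeUtf8(String line) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            // FindBugs-clean: the charset is named, not the platform default.
            PrintStream ps = new PrintStream(out, true, StandardCharsets.UTF_8.name());
            ps.println(line);
            ps.close();
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
        return out.toByteArray();
    }
}
```

The FileReader case flagged in oracleCreateUserHack would be handled the same way, e.g. with new InputStreamReader(new FileInputStream(f), StandardCharsets.UTF_8) in place of new FileReader(f).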
[jira] [Commented] (HIVE-20085) Druid-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional
[ https://issues.apache.org/jira/browse/HIVE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533140#comment-16533140 ] Ashutosh Chauhan commented on HIVE-20085: - [~nishantbangarwa] Can you please create RB for this? > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > --- > > Key: HIVE-20085 > URL: https://issues.apache.org/jira/browse/HIVE-20085 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-20085.1.patch, HIVE-20085.2.patch, HIVE-20085.patch > > > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > {code} > drop table if exists calcs; > create table calcs > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ( > "druid.segment.granularity" = "MONTH", > "druid.query.granularity" = "DAY") > AS SELECT > cast(datetime0 as timestamp with local time zone) `__time`, > key, > str0, str1, str2, str3, > date0, date1, date2, date3, > time0, time1, > datetime0, datetime1, > zzz, > cast(bool0 as string) bool0, > cast(bool1 as string) bool1, > cast(bool2 as string) bool2, > cast(bool3 as string) bool3, > int0, int1, int2, int3, > num0, num1, num2, num3, num4 > from tableau_orc.calcs; > 2018-07-03 04:57:31,911|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Status: Running > (Executing on YARN cluster with App id application_1530592209763_0009) > ... > ... 
> 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_BYTES_TO_MEM: > 0 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_PHASE_TIME: > 330 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SPILLED_RECORDS: 17 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : OUTPUT_RECORDS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > org.apache.hadoop.hive.llap.counters.LlapWmCounters: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : GUARANTEED_QUEUED_NS: > 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > GUARANTEED_RUNNING_NS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_QUEUED_NS: 2162643606 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_RUNNING_NS: 12151664909 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-2:DEPENDENCY_COLLECTION] in serial mode > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-0:MOVE] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Moving data to > directory > 
hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/calcs > from > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/.hive-staging_hive_2018-07-03_04-57-27_351_7124633902209008283-3/-ext-10002 > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-4:DDL] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|ERROR : FAILED: Execution > Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. > MetaException(message:Table druid_tableau.calcs failed strict managed table > checks due to the following reason: Table is marked as a managed table but is > not transactional.) > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Completed executing > command(queryId=hive_20180703045727_c39c40
[jira] [Commented] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars
[ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533139#comment-16533139 ] Hive QA commented on HIVE-20077: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930186/HIVE-20077.0.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 14621 tests executed *Failed tests:* {noformat} TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestAutoPurgeTables - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestClearDanglingScratchDir - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestReplicationScenariosAcidTables - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestSemanticAnalyzerHookLoading - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestSparkStatistics - did not produce a TEST-*.xml file (likely timed out) (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12385/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12385/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12385/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12930186 - PreCommit-HIVE-Build > hcat command should follow same pattern as hive cli for getting HBase jars > -- > > Key: HIVE-20077 > URL: https://issues.apache.org/jira/browse/HIVE-20077 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 0.14.0, 2.3.2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: HIVE-20077.0.patch > > > Currently the {{hcat}} command adds HBase jars to the classpath by using find > to walk the directories under {{$HBASE_HOME/lib}}. > {code} > # Look for HBase in a BigTop-compatible way. Avoid thrift version > # conflict with modern versions of HBase. > HBASE_HOME=${HBASE_HOME:-"/usr/lib/hbase"} > HBASE_CONF_DIR=${HBASE_CONF_DIR:-"${HBASE_HOME}/conf"} > if [ -d ${HBASE_HOME} ] ; then >for jar in $(find $HBASE_HOME -name '*.jar' -not -name '*thrift*'); do > HBASE_CLASSPATH=$HBASE_CLASSPATH:${jar} >done >export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${HBASE_CLASSPATH}" > fi > if [ -d $HBASE_CONF_DIR ] ; then > HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${HBASE_CONF_DIR}" > fi > {code} > This is incorrect as that path contains jars for a mixture of purposes; hbase > client jars, hbase server jars, and hbase shell specific jars. The inclusion > of unneeded jars is mostly innocuous until the upcoming HBase 2.1.0 release. > That release will have HBASE-20615 and HBASE-19735, which will mean most > client facing installations will have a number of shaded client artifacts > present. > With those changes in place, the current implementation will include in the > hcat runtime a mix of shaded and non-shaded hbase artifacts that include some > Hadoop classes rewritten to use a shaded version of protobuf. 
When these mix > with other Hadoop classes in the classpath that have not been rewritten hcat > will fail with errors that look like this: > {code} > Exception in thread "main" java.lang.ClassCastException: > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetFileInfoRequestProto > cannot be cast to org.apache.hadoop.hbase.shaded.com.google.protobuf.Message > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:225) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(Retr
[jira] [Updated] (HIVE-20091) Tez: Add security credentials for FileSinkOperator output
[ https://issues.apache.org/jira/browse/HIVE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20091: Status: Patch Available (was: Open) > Tez: Add security credentials for FileSinkOperator output > - > > Key: HIVE-20091 > URL: https://issues.apache.org/jira/browse/HIVE-20091 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20091.01.patch > > > DagUtils needs to add security credentials for the output for the > FileSinkOperator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20091) Tez: Add security credentials for FileSinkOperator output
[ https://issues.apache.org/jira/browse/HIVE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20091: Attachment: HIVE-20091.01.patch > Tez: Add security credentials for FileSinkOperator output > - > > Key: HIVE-20091 > URL: https://issues.apache.org/jira/browse/HIVE-20091 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20091.01.patch > > > DagUtils needs to add security credentials for the output for the > FileSinkOperator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20091) Tez: Add security credentials for FileSinkOperator output
[ https://issues.apache.org/jira/browse/HIVE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-20091: --- > Tez: Add security credentials for FileSinkOperator output > - > > Key: HIVE-20091 > URL: https://issues.apache.org/jira/browse/HIVE-20091 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > > DagUtils needs to add security credentials for the output for the > FileSinkOperator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars
[ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533108#comment-16533108 ] Hive QA commented on HIVE-20077: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 57s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12385/dev-support/hive-personality.sh | | git revision | master / 5e2a530 | | modules | C: hcatalog U: hcatalog | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12385/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> hcat command should follow same pattern as hive cli for getting HBase jars > -- > > Key: HIVE-20077 > URL: https://issues.apache.org/jira/browse/HIVE-20077 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 0.14.0, 2.3.2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: HIVE-20077.0.patch > > > Currently the {{hcat}} command adds HBase jars to the classpath by using find > to walk the directories under {{$HBASE_HOME/lib}}. > {code} > # Look for HBase in a BigTop-compatible way. Avoid thrift version > # conflict with modern versions of HBase. > HBASE_HOME=${HBASE_HOME:-"/usr/lib/hbase"} > HBASE_CONF_DIR=${HBASE_CONF_DIR:-"${HBASE_HOME}/conf"} > if [ -d ${HBASE_HOME} ] ; then >for jar in $(find $HBASE_HOME -name '*.jar' -not -name '*thrift*'); do > HBASE_CLASSPATH=$HBASE_CLASSPATH:${jar} >done >export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${HBASE_CLASSPATH}" > fi > if [ -d $HBASE_CONF_DIR ] ; then > HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${HBASE_CONF_DIR}" > fi > {code} > This is incorrect as that path contains jars for a mixture of purposes; hbase > client jars, hbase server jars, and hbase shell specific jars. The inclusion > of unneeded jars is mostly innocuous until the upcoming HBase 2.1.0 release. > That release will have HBASE-20615 and HBASE-19735, which will mean most > client facing installations will have a number of shaded client artifacts > present. > With those changes in place, the current implementation will include in the > hcat runtime a mix of shaded and non-shaded hbase artifacts that include some > Hadoop classes rewritten to use a shaded version of protobuf. 
When these mix > with other Hadoop classes in the classpath that have not been rewritten hcat > will fail with errors that look like this: > {code} > Exception in thread "main" java.lang.ClassCastException: > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetFileInfoRequestProto > cannot be cast to org.apache.hadoop.hbase.shaded.com.google.protobuf.Message > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:225) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) > at > org.apache.hadoop.io.retry.RetryInvocat
[jira] [Commented] (HIVE-18952) Tez session disconnect and reconnect on HS2 HA failover
[ https://issues.apache.org/jira/browse/HIVE-18952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533096#comment-16533096 ] Hive QA commented on HIVE-18952: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930174/HIVE-18952.03.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12383/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12383/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12383/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-07-04 23:08:50.148 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12383/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-07-04 23:08:50.151 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 5e2a530 HIVE-20066 : hive.load.data.owner is compared to full principal (Daniel Voros via Zoltan Haindrich) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 5e2a530 HIVE-20066 : hive.load.data.owner is compared to full principal (Daniel Voros via Zoltan Haindrich) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-07-04 23:08:50.671 + rm -rf ../yetus_PreCommit-HIVE-Build-12383 + mkdir ../yetus_PreCommit-HIVE-Build-12383 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12383 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12383/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch Going to apply patch with: git apply -p0 /data/hiveptest/working/scratch/build.patch:411: trailing whitespace. if (isRecovery && initialSize <= pool.size()) { /data/hiveptest/working/scratch/build.patch:939: trailing whitespace. warning: 2 lines add whitespace errors. 
+ [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven protoc-jar: executing: [/tmp/protoc4724369164759830589.exe, --version] libprotoc 2.5.0 protoc-jar: executing: [/tmp/protoc4724369164759830589.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g log4j:WARN No appenders could be found for logger (DataNucleus.Persistence). log4j:WARN Please initialize the log4j system properly. DataNucleus Enhancer (version 4.1.17) for API "JDO" DataNucleus Enhancer completed with success for 41 classes. ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveLexer.java does not exist: must build /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g org/apache
[jira] [Commented] (HIVE-20039) Bucket pruning: Left Outer Join on bucketed table gives wrong result
[ https://issues.apache.org/jira/browse/HIVE-20039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533097#comment-16533097 ] Hive QA commented on HIVE-20039: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930204/HIVE-20039.01-branch-3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12384/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12384/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12384/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12930204/HIVE-20039.01-branch-3.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12930204 - PreCommit-HIVE-Build > Bucket pruning: Left Outer Join on bucketed table gives wrong result > - > > Key: HIVE-20039 > URL: https://issues.apache.org/jira/browse/HIVE-20039 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0, 2.3.2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-20039.01-branch-3.patch, HIVE-20039.1.patch, > HIVE-20039.2.patch, HIVE-20039.3.patch, HIVE-20039.4.patch > > > Left outer join on a bucketed table in certain cases gives wrong results. > Depending on the order in which the table scans are walked through, the > FilterPruner might end up applying the wrong table scan's table properties to > the other table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
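The failure mode described in HIVE-20039 can be sketched with the bucket-routing arithmetic Hive uses: a row lands in bucket hash(key) % numBuckets, so a pruner that borrows the other table's bucket count reads the wrong file. The helper below is a hypothetical illustration, not Hive's actual FilterPruner code, and the hash values and bucket counts are made up for the example.

```python
# Illustrative sketch of bucket pruning, not Hive's implementation.
# Hive routes a row to bucket hash(key) % numBuckets; a point predicate on the
# bucketing column lets the planner read only that one bucket file.

def target_bucket(key_hash: int, num_buckets: int) -> int:
    """Bucket a row with the given key hash lands in."""
    return key_hash % num_buckets

# Table A is bucketed 4 ways, table B 8 ways; a key hashing to 7 lives in
# bucket 3 of A but bucket 7 of B.
bucket_in_a = target_bucket(7, 4)
bucket_in_b = target_bucket(7, 8)

# If the pruner mistakenly applies B's bucket count while pruning A's scan,
# it looks for A's bucket 7, which does not exist in a 4-bucket table, and
# the matching rows silently drop out of the left outer join result.
assert bucket_in_a != bucket_in_b
```

Under this sketch, the dependence on scan-walk order follows directly: whichever table scan's properties happen to be in scope when the pruner runs determines which (possibly wrong) bucket count is used.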
[jira] [Commented] (HIVE-20076) Delete on a partitioned table removes more rows than expected
[ https://issues.apache.org/jira/browse/HIVE-20076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533095#comment-16533095 ] Hive QA commented on HIVE-20076: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930244/HIVE-20076.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12382/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12382/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12382/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-07-04 23:07:42.568 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12382/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-07-04 23:07:42.572 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive bb35d83..d7128cf branch-3 -> origin/branch-3 + git reset --hard HEAD HEAD is now at 5e2a530 HIVE-20066 : hive.load.data.owner is compared to full principal (Daniel Voros via Zoltan Haindrich) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 5e2a530 HIVE-20066 : hive.load.data.owner is compared to full principal (Daniel Voros via Zoltan Haindrich) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-07-04 23:07:44.249 + rm -rf ../yetus_PreCommit-HIVE-Build-12382 + mkdir ../yetus_PreCommit-HIVE-Build-12382 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12382 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12382/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: cannot apply binary patch to 'data/files/student/ds=20110924/00_0' without full index line Falling back to three-way merge... error: cannot apply binary patch to 'data/files/student/ds=20110924/00_0' without full index line error: data/files/student/ds=20110924/00_0: patch does not apply error: cannot apply binary patch to 'data/files/student/ds=20110925/00_0' without full index line Falling back to three-way merge... 
error: cannot apply binary patch to 'data/files/student/ds=20110925/00_0' without full index line error: data/files/student/ds=20110925/00_0: patch does not apply error: cannot apply binary patch to 'data/files/student/ds=20110926/00_0' without full index line Falling back to three-way merge... error: cannot apply binary patch to 'data/files/student/ds=20110926/00_0' without full index line error: data/files/student/ds=20110926/00_0: patch does not apply error: cannot apply binary patch to 'files/student/ds=20110924/00_0' without full index line Falling back to three-way merge... error: cannot apply binary patch to 'files/student/ds=20110924/00_0' without full index line error: files/student/ds=20110924/00_0: patch does not apply error: cannot apply binary patch to 'files/student/ds=20110925/00_0' without full index line Falling back to three-way merge... error: cannot apply binary patch to 'files/student/ds=20110925/00_0' without full index line error: files/student/ds=20110925/00_0: patch does not apply error: cannot apply binary patch to 'files/student/ds=20110926/00_0' without full index line Falling back to three-way merge... error: cannot apply binary patch to 'files/student/ds=20110926/00_0' without full index line error: files/student/ds=20110926/00_0: patch does not apply error: src/test/resources/testconfiguration.properties: does not exist in index err
[jira] [Commented] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file
[ https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533094#comment-16533094 ] Hive QA commented on HIVE-18118: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930164/HIVE-18118.15.patch {color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14640 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12381/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12381/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12381/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12930164 - PreCommit-HIVE-Build > Explain Extended should indicate if a file being read is an EC file > --- > > Key: HIVE-18118 > URL: https://issues.apache.org/jira/browse/HIVE-18118 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, > HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, > HIVE-18118.12.patch, HIVE-18118.14.patch, HIVE-18118.15.patch, > HIVE-18118.2.patch, HIVE-18118.3.patch, HIVE-18118.4.patch, > HIVE-18118.5.patch, HIVE-18118.6.patch, HIVE-18118.7.patch, > HIVE-18118.8.patch, HIVE-18118.9.patch > > > We already print out the files Hive will read in the explain extended > command, we just have to modify it to say whether or not its an EC file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file
[ https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533084#comment-16533084 ] Hive QA commented on HIVE-18118: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 54s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 58s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 43s{color} | {color:blue} itests/util in master has 52 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 33s{color} | {color:red} hive-unit in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} common: The patch generated 0 new + 96 unchanged - 2 fixed = 96 total (was 98) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} standalone-metastore: The patch generated 0 new + 455 unchanged - 12 fixed = 455 total (was 467) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} ql: The patch generated 0 new + 499 unchanged - 35 fixed = 499 total (was 534) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} The patch hive-unit passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} The patch util passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s{color} | {color:green} standalone-metastore generated 0 new + 226 unchanged - 2 fixed = 226 total (was 228) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 58s{color} | {color:green} ql in the patch passed. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} hive-unit in the patch passed. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} util in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 0s{color} | {color:blac
[jira] [Commented] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533077#comment-16533077 ] Hive QA commented on HIVE-19990: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930165/HIVE-19990.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 14560 tests executed *Failed tests:* {noformat} TestBeeLineExceptionHandling - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestBeeLineHistory - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestBeelineArgParsing - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestClientCommandHookFactory - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestHiveCli - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestHiveSchemaTool - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestIncrementalRows - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestShutdownHook - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestTableOutputFormat - did not produce a TEST-*.xml file (likely timed out) (batchId=197) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12380/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12380/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12380/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12930165 - PreCommit-HIVE-Build > Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19990.1.patch, HIVE-19990.2.patch, > HIVE-19990.3.patch > > > *Reproducer* > {code:sql} > > create table date_dim_d1( > d_week_seq int, > d_date string); > > SELECT >d1.d_week_seq > FROM >date_dim_d1 d1 >JOIN date_dim_d1 d3 > WHERE >Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) > at 
org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > at org.apac
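The predicate that trips the semantic analyzer above has straightforward intended semantics: compare one date against another date shifted forward by a 5-day interval. A minimal sketch of that semantics using Python's datetime types (purely illustrative, not Hive code; the function name is made up):

```python
from datetime import date, timedelta

# Intended meaning of the failing join predicate
#   Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day
# expressed with Python date arithmetic.

def predicate(d3_date: date, d1_date: date) -> bool:
    return d3_date > d1_date + timedelta(days=5)

# Jan 10 > Jan 1 + 5 days (= Jan 6): the predicate holds.
print(predicate(date(2018, 1, 10), date(2018, 1, 1)))
# Jan 5 is not after Jan 6: the predicate does not hold.
print(predicate(date(2018, 1, 5), date(2018, 1, 1)))
```

The bug is not in this arithmetic but in parseJoinCondPopulateAlias, which encounters the folded interval constant ('5 00:00:00.0') as a leaf node it does not expect while walking the join condition.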
[jira] [Updated] (HIVE-20073) Additional tests for to_utc_timestamp function based on HIVE-20068
[ https://issues.apache.org/jira/browse/HIVE-20073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20073: --- Attachment: HIVE-20073.01.patch > Additional tests for to_utc_timestamp function based on HIVE-20068 > -- > > Key: HIVE-20073 > URL: https://issues.apache.org/jira/browse/HIVE-20073 > Project: Hive > Issue Type: Bug > Environment: MapR running on Linux I believe. Client is DBeaver on > Windows 7. >Reporter: JAMES J STEINBUGL >Assignee: Jesus Camacho Rodriguez >Priority: Minor > Attachments: HIVE-20073.01.patch, HIVE-20073.patch, > image-2018-07-03-08-50-42-390.png > > > I have the following script and I'm at a loss to explain the behavior. > Possibly it's an older bug as we are using the 2.1.1 drivers (?). We noticed > this issue when converting from US/Eastern into UTC and then back to > US/Eastern. Everything that was in Status Date / Status Hour on 3/11/17 > 21:00:00 shifted 6 hours ahead into UTC ... then shifted back to 3/11/17 > 22:00:00 back in US/Eastern. The behavior appears to be the same using the > constant EST5EDT. EDT was effective on 3/12 2 am, so the issue appears only > at this boundary condition when we "spring ahead", but at least on the > surface it seems incorrect. 
> -- > -- Potential Issue with to_utc_timestamp > --- > SELECT '2017-03-11 18:00:00', to_utc_timestamp(timestamp '2017-03-11 > 18:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 19:00:00', to_utc_timestamp(timestamp '2017-03-11 > 19:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 20:00:00', to_utc_timestamp(timestamp '2017-03-11 > 20:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > {color:#FF}SELECT '2017-03-11 21:00:00', to_utc_timestamp(timestamp > '2017-03-11 21:00:00','US/Eastern'); -- Shifts ahead 6 hours (???){color} > {color:#FF}_c0 _c1 > 2017-03-11 21:00:00 2017-03-12 03:00:00{color} > SELECT '2017-03-11 22:00:00', to_utc_timestamp(timestamp '2017-03-11 > 22:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 23:00:00', to_utc_timestamp(timestamp '2017-03-11 > 23:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 00:00:00', to_utc_timestamp(timestamp '2017-03-12 > 00:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 01:00:00', to_utc_timestamp(timestamp '2017-03-12 > 01:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 02:00:00', to_utc_timestamp(timestamp '2017-03-12 > 02:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 03:00:00', to_utc_timestamp(timestamp '2017-03-12 > 03:00:00','US/Eastern'); -- Shifts ahead 4 hours as expected > SELECT '2017-03-12 04:00:00', to_utc_timestamp(timestamp '2017-03-12 > 04:00:00','US/Eastern'); -- Shifts ahead 4 hours as expected > SELECT '2017-03-12 05:00:00', to_utc_timestamp(timestamp '2017-03-12 > 05:00:00','US/Eastern'); -- Shifts ahead 4 hours as expected > !image-2018-07-03-08-50-42-390.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
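The expected behavior at this spring-forward boundary can be checked against a reference implementation. The sketch below uses Python's zoneinfo (an assumption: Python 3.9+ with IANA tz data available); it is a reference for what to_utc_timestamp should return, not Hive code. Since DST only begins at 02:00 local time on 2017-03-12, 21:00 on 2017-03-11 is still EST (UTC-5), so the correct conversion is a 5-hour shift to 2017-03-12 02:00:00 UTC, not the 6-hour shift observed above.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; requires IANA tz data

def to_utc(ts: str, tz: str) -> str:
    """Reference conversion mirroring what to_utc_timestamp should return."""
    local = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=ZoneInfo(tz))
    return local.astimezone(ZoneInfo("UTC")).strftime("%Y-%m-%d %H:%M:%S")

# Still EST (UTC-5): 5-hour shift, matching the "expected" rows in the script.
print(to_utc("2017-03-11 21:00:00", "US/Eastern"))  # 2017-03-12 02:00:00
# After the 02:00 spring-forward, EDT (UTC-4): 4-hour shift.
print(to_utc("2017-03-12 03:00:00", "US/Eastern"))  # 2017-03-12 07:00:00
```

A conversion that treats 2017-03-11 21:00 as already being in EDT would produce exactly the 6-hour shift reported, which points at the pre-transition offset being resolved incorrectly.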
[jira] [Assigned] (HIVE-20073) Additional tests for to_utc_timestamp function based on HIVE-20068
[ https://issues.apache.org/jira/browse/HIVE-20073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-20073: -- Assignee: Jesus Camacho Rodriguez (was: JAMES J STEINBUGL) > Additional tests for to_utc_timestamp function based on HIVE-20068 > -- > > Key: HIVE-20073 > URL: https://issues.apache.org/jira/browse/HIVE-20073 > Project: Hive > Issue Type: Bug > Environment: MapR running on Linux I believe. Client is DBeaver on > Windows 7. >Reporter: JAMES J STEINBUGL >Assignee: Jesus Camacho Rodriguez >Priority: Minor > Attachments: HIVE-20073.patch, image-2018-07-03-08-50-42-390.png > > > I have the following script and I'm at a loss to explain the behavior. > Possibly it's an older bug as we are using the 2.1.1 drivers (?). We noticed > this issue when converting from US/Eastern into UTC and then back to > US/Eastern. Everything that was in Status Date / Status Hour on 3/11/17 > 21:00:00 shifted 6 hours ahead into UTC ... then shifted back to 3/11/17 > 22:00:00 back in US/Eastern. The behavior appears to be the same using the > constant EST5EDT. EDT was effective on 3/12 2 am, so the issue appears only > at this boundary condition when we "spring ahead", but at least on the > surface it seems incorrect. 
> -- > -- Potential Issue with to_utc_timestamp > --- > SELECT '2017-03-11 18:00:00', to_utc_timestamp(timestamp '2017-03-11 > 18:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 19:00:00', to_utc_timestamp(timestamp '2017-03-11 > 19:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 20:00:00', to_utc_timestamp(timestamp '2017-03-11 > 20:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > {color:#FF}SELECT '2017-03-11 21:00:00', to_utc_timestamp(timestamp > '2017-03-11 21:00:00','US/Eastern'); -- Shifts ahead 6 hours (???){color} > {color:#FF}_c0 _c1 > 2017-03-11 21:00:00 2017-03-12 03:00:00{color} > SELECT '2017-03-11 22:00:00', to_utc_timestamp(timestamp '2017-03-11 > 22:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 23:00:00', to_utc_timestamp(timestamp '2017-03-11 > 23:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 00:00:00', to_utc_timestamp(timestamp '2017-03-12 > 00:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 01:00:00', to_utc_timestamp(timestamp '2017-03-12 > 01:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 02:00:00', to_utc_timestamp(timestamp '2017-03-12 > 02:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 03:00:00', to_utc_timestamp(timestamp '2017-03-12 > 03:00:00','US/Eastern'); -- Shifts ahead 4 hours as expected > SELECT '2017-03-12 04:00:00', to_utc_timestamp(timestamp '2017-03-12 > 04:00:00','US/Eastern'); -- Shifts ahead 4 hours as expected > SELECT '2017-03-12 05:00:00', to_utc_timestamp(timestamp '2017-03-12 > 05:00:00','US/Eastern'); -- Shifts ahead 4 hours as expected > !image-2018-07-03-08-50-42-390.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities
[ https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20090: --- Attachment: (was: HIVE-20090.patch) > Extend creation of semijoin reduction filters to be able to discover new > opportunities > -- > > Key: HIVE-20090 > URL: https://issues.apache.org/jira/browse/HIVE-20090 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20090.01.patch > > > Assume the following plan: > {noformat} > TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9] > TS[2] - RS[3] - JOIN[4] > TS[6] - RS[7] - JOIN[8] > {noformat} > Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, > i.e., input to join between both subplans. > However, it may be useful to consider other possibilities too, e.g., reduced > by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important > when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would > create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not. > This patch comprises two parts. First, it creates additional predicates when > possible. Secondly, it removes duplicate semijoin reduction > branches/predicates, e.g., if another semijoin that consumes the output of > the same expression already reduces a certain table scan operator (heuristic, > since this may not result in most efficient plan in all cases). Ultimately, > the decision on whether to use one or another should be cost-driven > (follow-up). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities
[ https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20090: --- Attachment: HIVE-20090.01.patch > Extend creation of semijoin reduction filters to be able to discover new > opportunities > -- > > Key: HIVE-20090 > URL: https://issues.apache.org/jira/browse/HIVE-20090 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20090.01.patch > > > Assume the following plan: > {noformat} > TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9] > TS[2] - RS[3] - JOIN[4] > TS[6] - RS[7] - JOIN[8] > {noformat} > Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, > i.e., input to join between both subplans. > However, it may be useful to consider other possibilities too, e.g., reduced > by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important > when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would > create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not. > This patch comprises two parts. First, it creates additional predicates when > possible. Secondly, it removes duplicate semijoin reduction > branches/predicates, e.g., if another semijoin that consumes the output of > the same expression already reduces a certain table scan operator (heuristic, > since this may not result in most efficient plan in all cases). Ultimately, > the decision on whether to use one or another should be cost-driven > (follow-up). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
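The cycle constraint described in the HIVE-20090 plan above reduces to a reachability check on the operator DAG: a semijoin reduction edge src -> dst is unsafe exactly when src is already reachable from dst through data flow. The sketch below is a hypothetical illustration of that check (not Hive's actual code; operator names follow the {{TS}}/{{RS}}/{{JOIN}} labels from the description).

```python
from collections import defaultdict, deque

# Hypothetical sketch of the cycle test behind choosing semijoin reduction
# edges: adding src -> dst closes a loop iff src is reachable from dst.

def creates_cycle(edges, src, dst):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    seen, queue = set(), deque([dst])
    while queue:
        node = queue.popleft()
        if node == src:
            return True  # dst already reaches src, so src -> dst is a cycle
        if node in seen:
            continue
        seen.add(node)
        queue.extend(adj[node])
    return False

# The plan from the description: TS[0]-RS[1]-JOIN[4]-RS[5]-JOIN[8]-FS[9], etc.
plan = [("TS0", "RS1"), ("RS1", "JOIN4"), ("TS2", "RS3"), ("RS3", "JOIN4"),
        ("JOIN4", "RS5"), ("RS5", "JOIN8"), ("TS6", "RS7"), ("RS7", "JOIN8"),
        ("JOIN8", "FS9")]

print(creates_cycle(plan, "RS5", "TS0"))  # True: TS[0] already feeds RS[5]
print(creates_cycle(plan, "RS1", "TS6"))  # False: the safe alternative edge
```

This matches the motivating case in the description: reducing TS[0] from RS[5] would create a cycle, while reducing TS[6] from RS[1] would not, which is why discovering the additional candidate edges is useful.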
[jira] [Commented] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533060#comment-16533060 ] Hive QA commented on HIVE-19990: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 38s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 9 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12380/dev-support/hive-personality.sh | | git revision | master / 5e2a530 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-12380/yetus/whitespace-tabs.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12380/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19990.1.patch, HIVE-19990.2.patch, > HIVE-19990.3.patch > > > *Reproducer* > {code:sql} > > create table date_dim_d1( > d_week_seq int, > d_date string); > > SELECT >d1.d_week_seq > FROM >date_dim_d1 d1 >JOIN date_dim_d1 d3 > WHERE >Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:1138
[jira] [Updated] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities
[ https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20090: --- Attachment: HIVE-20090.patch > Extend creation of semijoin reduction filters to be able to discover new > opportunities > -- > > Key: HIVE-20090 > URL: https://issues.apache.org/jira/browse/HIVE-20090 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20090.patch > > > Assume the following plan: > {noformat} > TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9] > TS[2] - RS[3] - JOIN[4] > TS[6] - RS[7] - JOIN[8] > {noformat} > Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, > i.e., input to join between both subplans. > However, it may be useful to consider other possibilities too, e.g., reduced > by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important > when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would > create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not. > This patch comprises two parts. First, it creates additional predicates when > possible. Secondly, it removes duplicate semijoin reduction > branches/predicates, e.g., if another semijoin that consumes the output of > the same expression already reduces a certain table scan operator (heuristic, > since this may not result in most efficient plan in all cases). Ultimately, > the decision on whether to use one or another should be cost-driven > (follow-up). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities
[ https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20090: --- Status: Patch Available (was: In Progress) > Extend creation of semijoin reduction filters to be able to discover new > opportunities > -- > > Key: HIVE-20090 > URL: https://issues.apache.org/jira/browse/HIVE-20090 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20090.patch > > > Assume the following plan: > {noformat} > TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9] > TS[2] - RS[3] - JOIN[4] > TS[6] - RS[7] - JOIN[8] > {noformat} > Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, > i.e., input to join between both subplans. > However, it may be useful to consider other possibilities too, e.g., reduced > by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important > when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would > create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not. > This patch comprises two parts. First, it creates additional predicates when > possible. Secondly, it removes duplicate semijoin reduction > branches/predicates, e.g., if another semijoin that consumes the output of > the same expression already reduces a certain table scan operator (heuristic, > since this may not result in most efficient plan in all cases). Ultimately, > the decision on whether to use one or another should be cost-driven > (follow-up). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities
[ https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-20090: -- > Extend creation of semijoin reduction filters to be able to discover new > opportunities > -- > > Key: HIVE-20090 > URL: https://issues.apache.org/jira/browse/HIVE-20090 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > > Assume the following plan: > {noformat} > TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9] > TS[2] - RS[3] - JOIN[4] > TS[6] - RS[7] - JOIN[8] > {noformat} > Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, > i.e., input to join between both subplans. > However, it may be useful to consider other possibilities too, e.g., reduced > by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important > when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would > create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not. > This patch comprises two parts. First, it creates additional predicates when > possible. Secondly, it removes duplicate semijoin reduction > branches/predicates, e.g., if another semijoin that consumes the output of > the same expression already reduces a certain table scan operator (heuristic, > since this may not result in most efficient plan in all cases). Ultimately, > the decision on whether to use one or another should be cost-driven > (follow-up). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities
[ https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-20090 started by Jesus Camacho Rodriguez. -- > Extend creation of semijoin reduction filters to be able to discover new > opportunities > -- > > Key: HIVE-20090 > URL: https://issues.apache.org/jira/browse/HIVE-20090 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > > Assume the following plan: > {noformat} > TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9] > TS[2] - RS[3] - JOIN[4] > TS[6] - RS[7] - JOIN[8] > {noformat} > Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, > i.e., input to join between both subplans. > However, it may be useful to consider other possibilities too, e.g., reduced > by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important > when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would > create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not. > This patch comprises two parts. First, it creates additional predicates when > possible. Secondly, it removes duplicate semijoin reduction > branches/predicates, e.g., if another semijoin that consumes the output of > the same expression already reduces a certain table scan operator (heuristic, > since this may not result in most efficient plan in all cases). Ultimately, > the decision on whether to use one or another should be cost-driven > (follow-up). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
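For readers unfamiliar with the optimization that HIVE-20090 extends: a semijoin reduction filter summarizes the join keys produced by one side of a join (in Hive, as a Bloom filter plus min/max bounds) and pushes that summary down to the other side's table scan so non-matching rows are skipped early. The sketch below is conceptual only, not Hive's implementation; it uses an exact HashSet in place of a Bloom filter to stay self-contained:

```java
import java.util.*;
import java.util.stream.*;

// Conceptual sketch of semijoin reduction: collect the join keys emitted
// by one side of a join (the "RS" output in the plans above) and use them
// to filter the other side's table scan ("TS") before the join runs.
// Real Hive uses a Bloom filter plus min/max bounds; an exact HashSet is
// used here only to keep the example self-contained.
public class SemijoinSketch {
    static List<int[]> reduceScan(List<int[]> bigSideRows, List<Integer> smallSideKeys) {
        Set<Integer> keyFilter = new HashSet<>(smallSideKeys); // summary of the small side's keys
        // Table scan of the big side with the semijoin filter applied.
        return bigSideRows.stream()
                .filter(row -> keyFilter.contains(row[0]))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<int[]> big = Arrays.asList(new int[]{1, 10}, new int[]{2, 20}, new int[]{3, 30});
        List<Integer> keys = Arrays.asList(1, 3);
        System.out.println(reduceScan(big, keys).size()); // prints 2
    }
}
```

The cycle concern in the issue arises because each such filter adds an edge from the producing reducer to the filtered table scan in the Tez DAG, so the planner must choose which producer to take the keys from.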
[jira] [Commented] (HIVE-19806) Several tests do not properly sort their output
[ https://issues.apache.org/jira/browse/HIVE-19806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533048#comment-16533048 ] Hive QA commented on HIVE-19806: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930168/HIVE-19806.3.patch {color:green}SUCCESS:{color} +1 due to 16 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14637 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_smb_reduce_side] (batchId=54) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12379/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12379/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12379/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12930168 - PreCommit-HIVE-Build > Several tests do not properly sort their output > --- > > Key: HIVE-19806 > URL: https://issues.apache.org/jira/browse/HIVE-19806 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19806.2.patch, HIVE-19806.3.patch, HIVE-19806.patch > > > A number of the tests produce unsorted output that happens to come out the > same on people's laptops and the ptest infrastructure. But when run on a > separate linux box the sort differences show up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
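The usual remedy for the flakiness HIVE-19806 describes is to canonicalize test output by sorting its lines before comparing, so assertions no longer depend on filesystem or hash-iteration order. A minimal sketch of the general technique (not the actual patch's code):

```java
import java.util.Arrays;

// Sketch of making a test's output comparison order-independent by
// sorting the lines before asserting. This shows the general technique
// discussed in HIVE-19806, not the patch's actual code.
public class SortedOutput {
    static String canonicalize(String output) {
        String[] lines = output.split("\n");
        Arrays.sort(lines);
        return String.join("\n", lines);
    }

    public static void main(String[] args) {
        // Two runs that emit the same rows in different orders compare equal.
        String runA = "b\na\nc";
        String runB = "c\nb\na";
        System.out.println(canonicalize(runA).equals(canonicalize(runB))); // prints true
    }
}
```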
[jira] [Commented] (HIVE-19806) Several tests do not properly sort their output
[ https://issues.apache.org/jira/browse/HIVE-19806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533046#comment-16533046 ] Hive QA commented on HIVE-19806: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 8s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 56s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 40s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12379/dev-support/hive-personality.sh | | git revision | master / 5e2a530 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql . itests itests/hive-blobstore itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12379/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Several tests do not properly sort their output > --- > > Key: HIVE-19806 > URL: https://issues.apache.org/jira/browse/HIVE-19806 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19806.2.patch, HIVE-19806.3.patch, HIVE-19806.patch > > > A number of the tests produce unsorted output that happens to come out the > same on people's laptops and the ptest infrastructure. But when run on a > separate Linux box the sort differences show up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19765) Add Parquet specific tests to BlobstoreCliDriver
[ https://issues.apache.org/jira/browse/HIVE-19765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533030#comment-16533030 ] Sahil Takiar commented on HIVE-19765: - [~kuczoram], [~pvary] could you review? > Add Parquet specific tests to BlobstoreCliDriver > > > Key: HIVE-19765 > URL: https://issues.apache.org/jira/browse/HIVE-19765 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19765.1.patch, HIVE-19765.2.patch, > HIVE-19765.3.patch, HIVE-19765.4.patch > > > Similar to what was done for RC and ORC files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20039) Bucket pruning: Left Outer Join on bucketed table gives wrong result
[ https://issues.apache.org/jira/browse/HIVE-20039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533021#comment-16533021 ] Hive QA commented on HIVE-20039: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930204/HIVE-20039.01-branch-3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 14369 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=257) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=257) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=257) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=257) TestReplicationScenariosAcrossInstances - did not produce a TEST-*.xml file (likely timed out) (batchId=234) TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=257) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[convert_decimal64_to_decimal] (batchId=51) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[convert_decimal64_to_decimal] (batchId=169) org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning (batchId=315) org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testManagedPaths (batchId=234) org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.testImpersonation (batchId=243) org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=309) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12378/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12378/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12378/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing 
org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 12 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12930204 - PreCommit-HIVE-Build > Bucket pruning: Left Outer Join on bucketed table gives wrong result > - > > Key: HIVE-20039 > URL: https://issues.apache.org/jira/browse/HIVE-20039 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0, 2.3.2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-20039.01-branch-3.patch, HIVE-20039.1.patch, > HIVE-20039.2.patch, HIVE-20039.3.patch, HIVE-20039.4.patch > > > A left outer join on a bucketed table gives wrong results in certain cases. > Depending on the order in which the table-scans are walked through, the > FilterPruner might end up using the wrong table scan's table properties on > the other table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
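For context on the optimization involved in HIVE-20039: Hive assigns each row of a bucketed table to a bucket file by hashing the bucketing column modulo the bucket count, so an equality predicate on that column lets a scan read a single bucket file. The sketch below only illustrates that mapping; Hive's actual hash is its own ObjectInspector-based function, not `Integer.hashCode`:

```java
// Illustrative sketch of bucket pruning: a row's bucket is chosen by
// hash(bucket column) mod numBuckets, so an equality predicate on the
// bucket column lets the scan read only one bucket file. Hive's real
// hash function differs; this only shows the mechanism.
public class BucketPruningSketch {
    static int bucketFor(int key, int numBuckets) {
        // Mask the sign bit so the result is a valid non-negative file index.
        return (Integer.hashCode(key) & Integer.MAX_VALUE) % numBuckets;
    }

    public static void main(String[] args) {
        int numBuckets = 4;
        // With "WHERE key = 42", only bucket file bucketFor(42, 4) is scanned.
        System.out.println(bucketFor(42, numBuckets)); // prints 2
    }
}
```

The bug arises when the pruner computes this mapping with the bucket count or bucketing columns taken from the wrong table scan, which silently drops matching bucket files.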
[jira] [Updated] (HIVE-20066) hive.load.data.owner is compared to full principal
[ https://issues.apache.org/jira/browse/HIVE-20066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-20066: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Daniel! > hive.load.data.owner is compared to full principal > -- > > Key: HIVE-20066 > URL: https://issues.apache.org/jira/browse/HIVE-20066 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20066.1.patch > > > HIVE-19928 compares the user running HS2 to the configured owner > (hive.load.data.owner) to check if we're able to move the file with LOAD DATA > or need to copy. > This check compares the full username (that may contain the full kerberos > principal) to hive.load.data.owner. We should compare to the short username > ({{UGI.getShortUserName()}}) instead. That's used in similar context > [here|https://github.com/apache/hive/blob/f519db7eafacb4b4d2d9fe2a9e10e908d8077224/common/src/java/org/apache/hadoop/hive/common/FileUtils.java#L398]. > cc [~djaiswal] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
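The fix in HIVE-20066 hinges on comparing the short user name rather than the full Kerberos principal. Hive relies on Hadoop's `UserGroupInformation.getShortUserName()` for this; the standalone sketch below only mirrors the idea for principals of the form `primary/instance@REALM` and is not Hive's actual code:

```java
// Illustrative sketch of short-name extraction from a Kerberos principal.
// Hive uses Hadoop's UserGroupInformation.getShortUserName(); this
// self-contained version only mirrors the idea for principals like
// "hive/host.example.com@EXAMPLE.COM" -> "hive".
public class ShortName {
    static String shortUserName(String principal) {
        // Drop the realm, if present.
        int at = principal.indexOf('@');
        String noRealm = at >= 0 ? principal.substring(0, at) : principal;
        // Drop the instance (host) component, if present.
        int slash = noRealm.indexOf('/');
        return slash >= 0 ? noRealm.substring(0, slash) : noRealm;
    }

    public static void main(String[] args) {
        // Comparing the full principal to a configured owner such as "hive"
        // fails; comparing the short name matches.
        System.out.println(shortUserName("hive/host.example.com@EXAMPLE.COM")); // prints hive
        System.out.println(shortUserName("hive")); // prints hive
    }
}
```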
[jira] [Commented] (HIVE-20039) Bucket pruning: Left Outer Join on bucketed table gives wrong result
[ https://issues.apache.org/jira/browse/HIVE-20039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532987#comment-16532987 ] Hive QA commented on HIVE-20039: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 11s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-12378/patches/PreCommit-HIVE-Build-12378.patch does not apply to master. Rebase required? Wrong Branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12378/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Bucket pruning: Left Outer Join on bucketed table gives wrong result > - > > Key: HIVE-20039 > URL: https://issues.apache.org/jira/browse/HIVE-20039 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0, 2.3.2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-20039.01-branch-3.patch, HIVE-20039.1.patch, > HIVE-20039.2.patch, HIVE-20039.3.patch, HIVE-20039.4.patch > > > A left outer join on a bucketed table gives wrong results in certain cases. > Depending on the order in which the table-scans are walked through, the > FilterPruner might end up using the wrong table scan's table properties on > the other table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20069) Fix reoptimization in case of DPP
[ https://issues.apache.org/jira/browse/HIVE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532984#comment-16532984 ] Hive QA commented on HIVE-20069: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930156/HIVE-20069.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 14622 tests executed *Failed tests:* {noformat} TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestAutoPurgeTables - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestClearDanglingScratchDir - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestReplicationScenariosAcidTables - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestSemanticAnalyzerHookLoading - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestSparkStatistics - did not produce a TEST-*.xml file (likely timed out) (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[reopt_dpp] (batchId=75) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_partition_pruning] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_partition_pruning] (batchId=162) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning] (batchId=184) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_vectorized_dynamic_partition_pruning] (batchId=184) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12377/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12377/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12377/ Messages: {noformat} 
Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 12 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12930156 - PreCommit-HIVE-Build > Fix reoptimization in case of DPP > - > > Key: HIVE-20069 > URL: https://issues.apache.org/jira/browse/HIVE-20069 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20069.01.patch > > > reported by [~t3rmin4t0r] > With dynamic partition pruning, the operator statistics become partial, > reflecting only the actually scanned partitions; but they are then used as > information about the "full" table, which leads to the two joined tables > being exchanged during reoptimization. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20088) Beeline config location path is assembled incorrectly
[ https://issues.apache.org/jira/browse/HIVE-20088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532972#comment-16532972 ] Zoltan Haindrich commented on HIVE-20088: - +1 pending tests > Beeline config location path is assembled incorrectly > - > > Key: HIVE-20088 > URL: https://issues.apache.org/jira/browse/HIVE-20088 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 3.0.0 >Reporter: Denes Bodo >Assignee: Denes Bodo >Priority: Critical > Labels: easyfix, usability > Attachments: HIVE-20088_001.patch > > > Checking the code in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/UserHS2ConnectionFileParser.java] > or in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/BeelineSiteParser.java] > I see {code}locations.add(ETC_HIVE_CONF_LOCATION + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > where a file separator should be used: > {code}locations.add(ETC_HIVE_CONF_LOCATION + File.separator + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > Due to this, BeeLine cannot use the configuration if this location would be > the only way to find it. > In my hadoop-3 setup, the locations list contains the following: > {code} > /home/myuser/.beeline/beeline-site.xml > /etc/hive/confbeeline-site.xml > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
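The HIVE-20088 bug is easy to reproduce in isolation. A minimal sketch follows; the constant names mirror those in Beeline's parser, but the values here are illustrative:

```java
// Minimal reproduction of the path-assembly bug described in HIVE-20088.
// The constant names mirror those in Beeline's UserHS2ConnectionFileParser,
// but the values are illustrative.
public class BeelinePathBug {
    static final String ETC_HIVE_CONF_LOCATION = "/etc/hive/conf";
    static final String DEFAULT_BEELINE_SITE_FILE_NAME = "beeline-site.xml";

    public static void main(String[] args) {
        // Buggy: no separator between the directory and the file name.
        String buggy = ETC_HIVE_CONF_LOCATION + DEFAULT_BEELINE_SITE_FILE_NAME;
        // Fixed: insert the platform file separator.
        String fixed = ETC_HIVE_CONF_LOCATION + java.io.File.separator
                + DEFAULT_BEELINE_SITE_FILE_NAME;
        System.out.println(buggy); // /etc/hive/confbeeline-site.xml
        System.out.println(fixed); // /etc/hive/conf/beeline-site.xml on Unix
    }
}
```

The malformed `/etc/hive/confbeeline-site.xml` entry matches the reporter's observed locations list, which is why the fallback config location can never be found.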
[jira] [Updated] (HIVE-20085) Druid-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional
[ https://issues.apache.org/jira/browse/HIVE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20085: Attachment: HIVE-20085.2.patch > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > --- > > Key: HIVE-20085 > URL: https://issues.apache.org/jira/browse/HIVE-20085 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-20085.1.patch, HIVE-20085.2.patch, HIVE-20085.patch > > > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > {code} > drop table if exists calcs; > create table calcs > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ( > "druid.segment.granularity" = "MONTH", > "druid.query.granularity" = "DAY") > AS SELECT > cast(datetime0 as timestamp with local time zone) `__time`, > key, > str0, str1, str2, str3, > date0, date1, date2, date3, > time0, time1, > datetime0, datetime1, > zzz, > cast(bool0 as string) bool0, > cast(bool1 as string) bool1, > cast(bool2 as string) bool2, > cast(bool3 as string) bool3, > int0, int1, int2, int3, > num0, num1, num2, num3, num4 > from tableau_orc.calcs; > 2018-07-03 04:57:31,911|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Status: Running > (Executing on YARN cluster with App id application_1530592209763_0009) > ... > ... 
> 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_BYTES_TO_MEM: > 0 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_PHASE_TIME: > 330 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SPILLED_RECORDS: 17 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : OUTPUT_RECORDS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > org.apache.hadoop.hive.llap.counters.LlapWmCounters: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : GUARANTEED_QUEUED_NS: > 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > GUARANTEED_RUNNING_NS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_QUEUED_NS: 2162643606 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_RUNNING_NS: 12151664909 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-2:DEPENDENCY_COLLECTION] in serial mode > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-0:MOVE] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Moving data to > directory > 
hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/calcs > from > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/.hive-staging_hive_2018-07-03_04-57-27_351_7124633902209008283-3/-ext-10002 > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-4:DDL] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|ERROR : FAILED: Execution > Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. > MetaException(message:Table druid_tableau.calcs failed strict managed table > checks due to the following reason: Table is marked as a managed table but is > not transactional.) > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Completed executing > command(queryId=hive_20180703045727_c39c40d2-7d4a-46c7-a36d-7925e7c4a788); > Time taken: 6.794 seconds > 2018-07-03 04
[jira] [Commented] (HIVE-20069) Fix reoptimization in case of DPP
[ https://issues.apache.org/jira/browse/HIVE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532955#comment-16532955 ] Hive QA commented on HIVE-20069: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 57s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s{color} | {color:red} ql: The patch generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 13 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12377/dev-support/hive-personality.sh | | git revision | master / 3b6d4e2 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12377/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-12377/yetus/whitespace-eol.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-12377/yetus/whitespace-tabs.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12377/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> Fix reoptimization in case of DPP > - > > Key: HIVE-20069 > URL: https://issues.apache.org/jira/browse/HIVE-20069 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20069.01.patch > > > reported by [~t3rmin4t0r] > In the case of dynamic partition pruning, the operator statistics become partial and > reflect only the actually scanned partitions; but they are then used as > information about the "full" table, which leads to the exchange of the 2 > tables being joined, which is really unfortunate. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19891) inserting into external tables with custom partition directories may cause data loss
[ https://issues.apache.org/jira/browse/HIVE-19891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532932#comment-16532932 ] Hive QA commented on HIVE-19891: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930150/HIVE-19891.04.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14638 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12376/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12376/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12376/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12930150 - PreCommit-HIVE-Build > inserting into external tables with custom partition directories may cause > data loss > > > Key: HIVE-19891 > URL: https://issues.apache.org/jira/browse/HIVE-19891 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19891.01.patch, HIVE-19891.02.patch, > HIVE-19891.03.patch, HIVE-19891.04.patch, HIVE-19891.patch > > > tbl1 is just used as a prop to create data, could be an existing directory > for an external table. > Due to weird behavior of LoadTableDesc (some ancient code for overriding old > partition path), custom partition path is overwritten after the query and the > data in it ceases being a part of the table (can be seen in desc formatted > output with masking commented out in QTestUtil) > This affects branch-1 too, so it's pretty old. 
> {noformat}drop table tbl1; > CREATE TABLE tbl1 (index int, value int ) PARTITIONED BY ( created_date > string ); > insert into tbl1 partition(created_date='2018-02-01') VALUES (2, 2); > CREATE external TABLE tbl2 (index int, value int ) PARTITIONED BY ( > created_date string ); > ALTER TABLE tbl2 ADD PARTITION(created_date='2018-02-01'); > ALTER TABLE tbl2 PARTITION(created_date='2018-02-01') SET LOCATION > 'file:/Users/sergey/git/hivegit/itests/qtest/target/warehouse/tbl1/created_date=2018-02-01'; > select * from tbl2; > describe formatted tbl2 partition(created_date='2018-02-01'); > insert into tbl2 partition(created_date='2018-02-01') VALUES (1, 1); > select * from tbl2; > describe formatted tbl2 partition(created_date='2018-02-01'); > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20085) Druid-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional
[ https://issues.apache.org/jira/browse/HIVE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20085: Attachment: HIVE-20085.1.patch > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > --- > > Key: HIVE-20085 > URL: https://issues.apache.org/jira/browse/HIVE-20085 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-20085.1.patch, HIVE-20085.patch > > > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > {code} > drop table if exists calcs; > create table calcs > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ( > "druid.segment.granularity" = "MONTH", > "druid.query.granularity" = "DAY") > AS SELECT > cast(datetime0 as timestamp with local time zone) `__time`, > key, > str0, str1, str2, str3, > date0, date1, date2, date3, > time0, time1, > datetime0, datetime1, > zzz, > cast(bool0 as string) bool0, > cast(bool1 as string) bool1, > cast(bool2 as string) bool2, > cast(bool3 as string) bool3, > int0, int1, int2, int3, > num0, num1, num2, num3, num4 > from tableau_orc.calcs; > 2018-07-03 04:57:31,911|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Status: Running > (Executing on YARN cluster with App id application_1530592209763_0009) > ... > ... 
> 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_BYTES_TO_MEM: > 0 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_PHASE_TIME: > 330 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SPILLED_RECORDS: 17 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : OUTPUT_RECORDS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > org.apache.hadoop.hive.llap.counters.LlapWmCounters: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : GUARANTEED_QUEUED_NS: > 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > GUARANTEED_RUNNING_NS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_QUEUED_NS: 2162643606 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_RUNNING_NS: 12151664909 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-2:DEPENDENCY_COLLECTION] in serial mode > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-0:MOVE] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Moving data to > directory > 
hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/calcs > from > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/.hive-staging_hive_2018-07-03_04-57-27_351_7124633902209008283-3/-ext-10002 > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-4:DDL] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|ERROR : FAILED: Execution > Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. > MetaException(message:Table druid_tableau.calcs failed strict managed table > checks due to the following reason: Table is marked as a managed table but is > not transactional.) > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Completed executing > command(queryId=hive_20180703045727_c39c40d2-7d4a-46c7-a36d-7925e7c4a788); > Time taken: 6.794 seconds > 2018-07-03 04:57:36,337|INFO|Thre
[jira] [Commented] (HIVE-19891) inserting into external tables with custom partition directories may cause data loss
[ https://issues.apache.org/jira/browse/HIVE-19891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532897#comment-16532897 ] Hive QA commented on HIVE-19891: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 7s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 3 new + 495 unchanged - 0 fixed = 498 total (was 495) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12376/dev-support/hive-personality.sh | | git revision | master / 3b6d4e2 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12376/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-12376/yetus/whitespace-eol.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12376/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> inserting into external tables with custom partition directories may cause > data loss > > > Key: HIVE-19891 > URL: https://issues.apache.org/jira/browse/HIVE-19891 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19891.01.patch, HIVE-19891.02.patch, > HIVE-19891.03.patch, HIVE-19891.04.patch, HIVE-19891.patch > > > tbl1 is just used as a prop to create data, could be an existing directory > for an external table. > Due to weird behavior of LoadTableDesc (some ancient code for overriding old > partition path), custom partition path is overwritten after the query and the > data in it ceases being a part of the table (can be seen in desc formatted > output with masking commented out in QTestUtil) > This affects branch-1 too, so it's pretty old. > {noformat}drop table tbl1; > CREATE TABLE tbl1 (index int, value int ) PARTITIONED BY ( created_date > string ); > insert into tbl1 partition(created_date='2018-02-01') VALUES (2, 2); > CREATE external TABLE tbl2 (index int, value int ) PARTITIONED BY ( > created_date string ); > ALTER TABLE tbl2 ADD PARTITION(created_date='2018-02-01'); > ALT
[jira] [Commented] (HIVE-20073) Additional tests for to_utc_timestamp function based on HIVE-20068
[ https://issues.apache.org/jira/browse/HIVE-20073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532863#comment-16532863 ] Hive QA commented on HIVE-20073: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930149/HIVE-20073.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14638 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join1] (batchId=185) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12375/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12375/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12375/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12930149 - PreCommit-HIVE-Build > Additional tests for to_utc_timestamp function based on HIVE-20068 > -- > > Key: HIVE-20073 > URL: https://issues.apache.org/jira/browse/HIVE-20073 > Project: Hive > Issue Type: Bug > Environment: MapR running on Linux I believe. Client is DBeaver on > Windows 7. >Reporter: JAMES J STEINBUGL >Assignee: JAMES J STEINBUGL >Priority: Minor > Attachments: HIVE-20073.patch, image-2018-07-03-08-50-42-390.png > > > I have the following script and I'm at a loss to explain the behavior. > Possibly it's an older bug as we are using the 2.1.1 drivers (?). We noticed > this issue when converting from US/Eastern into UTC and then back to > US/Eastern. 
Everything that was in Status Date / Status Hour on 3/11/17 > 21:00:00 shifted 6 hours ahead into UTC ... then shifted back to 3/11/17 > 22:00:00 back in US/Eastern. The behavior appears to be the same using the > constant EST5EDT. EDT was effective on 3/12 2 am, so the issue appears only > at this boundary condition when we "spring ahead", but it at least on the > surface seems incorrect. > -- > -- Potential Issue with to_utc_timestamp > --- > SELECT '2017-03-11 18:00:00', to_utc_timestamp(timestamp '2017-03-11 > 18:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 19:00:00', to_utc_timestamp(timestamp '2017-03-11 > 19:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 20:00:00', to_utc_timestamp(timestamp '2017-03-11 > 20:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > {color:#FF}SELECT '2017-03-11 21:00:00', to_utc_timestamp(timestamp > '2017-03-11 21:00:00','US/Eastern'); -- Shifts ahead 6 hours (???){color} > {color:#FF}_c0 _c1 > 2017-03-11 21:00:00 2017-03-12 03:00:00{color} > SELECT '2017-03-11 22:00:00', to_utc_timestamp(timestamp '2017-03-11 > 22:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 23:00:00', to_utc_timestamp(timestamp '2017-03-11 > 23:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 00:00:00', to_utc_timestamp(timestamp '2017-03-12 > 00:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 01:00:00', to_utc_timestamp(timestamp '2017-03-12 > 01:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 02:00:00', to_utc_timestamp(timestamp '2017-03-12 > 02:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 03:00:00', to_utc_timestamp(timestamp '2017-03-12 > 03:00:00','US/Eastern'); -- Shifts ahead 4 hours as expected > SELECT '2017-03-12 04:00:00', to_utc_timestamp(timestamp '2017-03-12 > 04:00:00','US/Eastern'); -- Shifts ahead 4 hours as 
expected > SELECT '2017-03-12 05:00:00', to_utc_timestamp(timestamp '2017-03-12 > 05:00:00','US/Eastern'); -- Shifts ahead 4 hours as expected > !image-2018-07-03-08-50-42-390.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
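[Editor's note] The anomalous 6-hour shift at the 2017-03-11 21:00 boundary can be checked against java.time, which resolves the UTC offset from the instant itself. This is a reference sketch of the expected (correct) conversion, not Hive's to_utc_timestamp implementation: 2017-03-11 21:00 US/Eastern is still EST (UTC-5, since DST began 2017-03-12 02:00), so the correct UTC value is 2017-03-12 02:00, not the 03:00 reported above.

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class UtcShiftCheck {
    public static void main(String[] args) {
        ZoneId eastern = ZoneId.of("US/Eastern");

        // 2017-03-11 21:00 is before the US DST transition (2017-03-12 02:00),
        // so US/Eastern is still EST (UTC-5) at that instant.
        ZonedDateTime beforeDst = LocalDateTime.of(2017, 3, 11, 21, 0).atZone(eastern);
        // prints 2017-03-12T02:00 -- a 5-hour shift, not the 6-hour shift in the report
        System.out.println(beforeDst.withZoneSameInstant(ZoneOffset.UTC).toLocalDateTime());

        // After the transition, US/Eastern is EDT (UTC-4): 03:00 local -> 07:00 UTC.
        ZonedDateTime afterDst = LocalDateTime.of(2017, 3, 12, 3, 0).atZone(eastern);
        // prints 2017-03-12T07:00
        System.out.println(afterDst.withZoneSameInstant(ZoneOffset.UTC).toLocalDateTime());
    }
}
```

The 6-hour result is consistent with an implementation that applies the offset in effect after the "spring ahead" to an instant that precedes it; the reference output above is what a DST-correct conversion yields.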
[jira] [Commented] (HIVE-20073) Additional tests for to_utc_timestamp function based on HIVE-20068
[ https://issues.apache.org/jira/browse/HIVE-20073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532790#comment-16532790 ] Hive QA commented on HIVE-20073: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12375/dev-support/hive-personality.sh | | git revision | master / 3b6d4e2 | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12375/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> Additional tests for to_utc_timestamp function based on HIVE-20068 > -- > > Key: HIVE-20073 > URL: https://issues.apache.org/jira/browse/HIVE-20073 > Project: Hive > Issue Type: Bug > Environment: MapR running on Linux I believe. Client is DBeaver on > Windows 7. >Reporter: JAMES J STEINBUGL >Assignee: JAMES J STEINBUGL >Priority: Minor > Attachments: HIVE-20073.patch, image-2018-07-03-08-50-42-390.png > > > I have the following script and I'm at a loss to explain the behavior. > Possibly it's an older bug as we are using the 2.1.1 drivers (?). We noticed > this issue when converting from US/Eastern into UTC and then back to > US/Eastern. Everything that was in Status Date / Status Hour on 3/11/17 > 21:00:00 shifted 6 hours ahead into UTC ... then shifted back to 3/11/17 > 22:00:00 back in US/Eastern. The behavior appears to be the same using the > constant EST5EDT. EDT was effective on 3/12 2 am, so the issue appears only > at this boundary condition when we "spring ahead", but it at least on the > surface seems incorrect. 
> -- > -- Potential Issue with to_utc_timestamp > --- > SELECT '2017-03-11 18:00:00', to_utc_timestamp(timestamp '2017-03-11 > 18:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 19:00:00', to_utc_timestamp(timestamp '2017-03-11 > 19:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 20:00:00', to_utc_timestamp(timestamp '2017-03-11 > 20:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > {color:#FF}SELECT '2017-03-11 21:00:00', to_utc_timestamp(timestamp > '2017-03-11 21:00:00','US/Eastern'); -- Shifts ahead 6 hours (???){color} > {color:#FF}_c0 _c1 > 2017-03-11 21:00:00 2017-03-12 03:00:00{color} > SELECT '2017-03-11 22:00:00', to_utc_timestamp(timestamp '2017-03-11 > 22:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-11 23:00:00', to_utc_timestamp(timestamp '2017-03-11 > 23:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 00:00:00', to_utc_timestamp(timestamp '2017-03-12 > 00:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 01:00:00', to_utc_timestamp(timestamp '2017-03-12 > 01:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 02:00:00', to_utc_timestamp(timestamp '2017-03-12 > 02:00:00','US/Eastern'); -- Shifts ahead 5 hours as expected > SELECT '2017-03-12 03:00:00', to_utc_timestamp(timestamp '2017-03-12 > 03:00:00','US/Eastern'); -- Shifts ahead 4 hours as
[jira] [Commented] (HIVE-20037) Print root cause exception's toString() rather than getMessage()
[ https://issues.apache.org/jira/browse/HIVE-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532783#comment-16532783 ] Hive QA commented on HIVE-20037: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930155/HIVE-20037.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14637 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12374/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12374/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12374/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12930155 - PreCommit-HIVE-Build > Print root cause exception's toString() rather than getMessage() > > > Key: HIVE-20037 > URL: https://issues.apache.org/jira/browse/HIVE-20037 > Project: Hive > Issue Type: Sub-task > Components: Spark >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Trivial > Attachments: HIVE-20037.1.patch, HIVE-20037.2.patch > > > When we run a HoS job and it fails with some error, we print the > exception's getMessage() rather than its toString(); for some exceptions, > e.g. java.lang.NoClassDefFoundError, we are missing the exception type > information. 
> {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > org/apache/spark/SparkConf)' > {noformat} > If we use exception's toString(), it will be as follows and make more sense. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > java.lang.NoClassDefFoundError: org/apache/spark/SparkConf)' > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
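[Editor's note] The distinction the patch relies on holds for any Throwable: getMessage() returns only the detail message, while toString() prepends the exception class name. A minimal standalone sketch (not the Hive code itself), using the same NoClassDefFoundError from the issue:

```java
public class ToStringVsGetMessage {
    public static void main(String[] args) {
        Throwable t = new NoClassDefFoundError("org/apache/spark/SparkConf");

        // getMessage() drops the exception type entirely:
        System.out.println(t.getMessage());
        // prints: org/apache/spark/SparkConf

        // toString() keeps the class name, which is what the patch switches to:
        System.out.println(t.toString());
        // prints: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
    }
}
```

This is why the second log line in the description is more useful: the embedded message now identifies the root-cause exception type, not just its message string.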
[jira] [Commented] (HIVE-20085) Druid-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional
[ https://issues.apache.org/jira/browse/HIVE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532749#comment-16532749 ] Nishant Bangarwa commented on HIVE-20085: - [~ashutoshc] attached a patch, please review. Changes include: # During table creation, if the user has specified a druid datasource in the CREATE TABLE statement, the schema will be discovered from druid; otherwise the user needs to provide the schema # Added a new config to enable CTAS - hive.ctas.external.tables, defaulting to false # CTAS on an existing druid datasource will append data to any existing data present in druid # Druid schema is now always stored; this allows ALTER TABLE statements on druid tables and gives the same semantics for table modifications whether the table was initially discovered from druid or created by Hive. # Insert/Insert overwrite will be supported for all Druid tables when the hive config hive.insert.into.external.tables is set to true. By default it is true # Drop will drop data when external.table.purge is true on the table; the default for new druid tables is set via the property hive.external.table.purge.default (by default false) > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > --- > > Key: HIVE-20085 > URL: https://issues.apache.org/jira/browse/HIVE-20085 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-20085.patch > > > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > {code} > drop table if exists calcs; > create table calcs > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ( > "druid.segment.granularity" = "MONTH", > "druid.query.granularity" = "DAY") > AS SELECT > cast(datetime0 
as timestamp with local time zone) `__time`, > key, > str0, str1, str2, str3, > date0, date1, date2, date3, > time0, time1, > datetime0, datetime1, > zzz, > cast(bool0 as string) bool0, > cast(bool1 as string) bool1, > cast(bool2 as string) bool2, > cast(bool3 as string) bool3, > int0, int1, int2, int3, > num0, num1, num2, num3, num4 > from tableau_orc.calcs; > 2018-07-03 04:57:31,911|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Status: Running > (Executing on YARN cluster with App id application_1530592209763_0009) > ... > ... > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_BYTES_TO_MEM: > 0 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_PHASE_TIME: > 330 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SPILLED_RECORDS: 17 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : OUTPUT_RECORDS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > org.apache.hadoop.hive.llap.counters.LlapWmCounters: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : GUARANTEED_QUEUED_NS: > 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > GUARANTEED_RUNNING_NS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_QUEUED_NS: 2162643606 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > 
tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_RUNNING_NS: 12151664909 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-2:DEPENDENCY_COLLECTION] in serial mode > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-0:MOVE] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Moving data to > directory > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/calcs > from > hdfs://myclu
[jira] [Updated] (HIVE-20088) Beeline config location path is assembled incorrectly
[ https://issues.apache.org/jira/browse/HIVE-20088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denes Bodo updated HIVE-20088: -- Attachment: HIVE-20088_001.patch > Beeline config location path is assembled incorrectly > - > > Key: HIVE-20088 > URL: https://issues.apache.org/jira/browse/HIVE-20088 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 3.0.0 >Reporter: Denes Bodo >Assignee: Denes Bodo >Priority: Critical > Labels: easyfix, usability > Attachments: HIVE-20088_001.patch > > > Checking the code in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/UserHS2ConnectionFileParser.java] > or in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/BeelineSiteParser.java] > I see {code}locations.add(ETC_HIVE_CONF_LOCATION + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > where a file separator shall be used: > {code}locations.add(ETC_HIVE_CONF_LOCATION + File.separator + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > Due to this, BeeLine cannot use the configuration if this would be the > only way to provide it. > In my hadoop-3 setup, the locations list contains the following: > {code} > /home/myuser/.beeline/beeline-site.xml > /etc/hive/confbeeline-site.xml > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
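[Editor's note] The faulty concatenation is easy to reproduce in isolation. Assuming ETC_HIVE_CONF_LOCATION is "/etc/hive/conf" (which the /etc/hive/confbeeline-site.xml path in the description suggests), a sketch of the bug and the proposed fix:

```java
import java.io.File;

public class BeelinePathBug {
    // Assumed constant values, matching the paths shown in the issue description.
    static final String ETC_HIVE_CONF_LOCATION = "/etc/hive/conf";
    static final String DEFAULT_BEELINE_SITE_FILE_NAME = "beeline-site.xml";

    public static void main(String[] args) {
        // Current code: directory and file name are joined with no separator.
        String broken = ETC_HIVE_CONF_LOCATION + DEFAULT_BEELINE_SITE_FILE_NAME;
        System.out.println(broken);
        // prints: /etc/hive/confbeeline-site.xml  (a path that never exists)

        // Proposed fix from the issue: insert File.separator between the parts.
        String fixed = ETC_HIVE_CONF_LOCATION + File.separator + DEFAULT_BEELINE_SITE_FILE_NAME;
        System.out.println(fixed);
        // prints: /etc/hive/conf/beeline-site.xml on POSIX systems
    }
}
```

Because the broken path can never match a real file, BeeLine silently skips the /etc/hive/conf location and only the per-user ~/.beeline/beeline-site.xml is ever found.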
[jira] [Updated] (HIVE-20088) Beeline config location path is assembled incorrectly
[ https://issues.apache.org/jira/browse/HIVE-20088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denes Bodo updated HIVE-20088: -- Status: Patch Available (was: Open) > Beeline config location path is assembled incorrectly > - > > Key: HIVE-20088 > URL: https://issues.apache.org/jira/browse/HIVE-20088 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 3.0.0 >Reporter: Denes Bodo >Assignee: Denes Bodo >Priority: Critical > Labels: easyfix, usability > Attachments: HIVE-20088_001.patch > > > Checking the code in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/UserHS2ConnectionFileParser.java] > or in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/BeelineSiteParser.java] > I see {code}locations.add(ETC_HIVE_CONF_LOCATION + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > where a file separator shall be used: > {code}locations.add(ETC_HIVE_CONF_LOCATION + File.separator + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > Due to this, BeeLine cannot use the configuration if this would be the > only way to provide it. > In my hadoop-3 setup, the locations list contains the following: > {code} > /home/myuser/.beeline/beeline-site.xml > /etc/hive/confbeeline-site.xml > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20085) Druid-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional
[ https://issues.apache.org/jira/browse/HIVE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20085: Attachment: HIVE-20085.patch > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > --- > > Key: HIVE-20085 > URL: https://issues.apache.org/jira/browse/HIVE-20085 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-20085.patch > > > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > {code} > drop table if exists calcs; > create table calcs > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ( > "druid.segment.granularity" = "MONTH", > "druid.query.granularity" = "DAY") > AS SELECT > cast(datetime0 as timestamp with local time zone) `__time`, > key, > str0, str1, str2, str3, > date0, date1, date2, date3, > time0, time1, > datetime0, datetime1, > zzz, > cast(bool0 as string) bool0, > cast(bool1 as string) bool1, > cast(bool2 as string) bool2, > cast(bool3 as string) bool3, > int0, int1, int2, int3, > num0, num1, num2, num3, num4 > from tableau_orc.calcs; > 2018-07-03 04:57:31,911|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Status: Running > (Executing on YARN cluster with App id application_1530592209763_0009) > ... > ... 
> 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_BYTES_TO_MEM: > 0 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_PHASE_TIME: > 330 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SPILLED_RECORDS: 17 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : OUTPUT_RECORDS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > org.apache.hadoop.hive.llap.counters.LlapWmCounters: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : GUARANTEED_QUEUED_NS: > 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > GUARANTEED_RUNNING_NS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_QUEUED_NS: 2162643606 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_RUNNING_NS: 12151664909 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-2:DEPENDENCY_COLLECTION] in serial mode > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-0:MOVE] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Moving data to > directory > 
hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/calcs > from > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/.hive-staging_hive_2018-07-03_04-57-27_351_7124633902209008283-3/-ext-10002 > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-4:DDL] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|ERROR : FAILED: Execution > Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. > MetaException(message:Table druid_tableau.calcs failed strict managed table > checks due to the following reason: Table is marked as a managed table but is > not transactional.) > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Completed executing > command(queryId=hive_20180703045727_c39c40d2-7d4a-46c7-a36d-7925e7c4a788); > Time taken: 6.794 seconds > 2018-07-03 04:57:36,337|INFO|Thread-721|machine.py:111
[jira] [Updated] (HIVE-20085) Druid-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional
[ https://issues.apache.org/jira/browse/HIVE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20085: Status: Patch Available (was: Open) > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > --- > > Key: HIVE-20085 > URL: https://issues.apache.org/jira/browse/HIVE-20085 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-20085.patch > > > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > {code} > drop table if exists calcs; > create table calcs > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ( > "druid.segment.granularity" = "MONTH", > "druid.query.granularity" = "DAY") > AS SELECT > cast(datetime0 as timestamp with local time zone) `__time`, > key, > str0, str1, str2, str3, > date0, date1, date2, date3, > time0, time1, > datetime0, datetime1, > zzz, > cast(bool0 as string) bool0, > cast(bool1 as string) bool1, > cast(bool2 as string) bool2, > cast(bool3 as string) bool3, > int0, int1, int2, int3, > num0, num1, num2, num3, num4 > from tableau_orc.calcs; > 2018-07-03 04:57:31,911|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Status: Running > (Executing on YARN cluster with App id application_1530592209763_0009) > ... > ... 
> 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_BYTES_TO_MEM: > 0 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_PHASE_TIME: > 330 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SPILLED_RECORDS: 17 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : OUTPUT_RECORDS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > org.apache.hadoop.hive.llap.counters.LlapWmCounters: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : GUARANTEED_QUEUED_NS: > 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > GUARANTEED_RUNNING_NS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_QUEUED_NS: 2162643606 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_RUNNING_NS: 12151664909 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-2:DEPENDENCY_COLLECTION] in serial mode > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-0:MOVE] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Moving data to > directory > 
hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/calcs > from > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/.hive-staging_hive_2018-07-03_04-57-27_351_7124633902209008283-3/-ext-10002 > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-4:DDL] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|ERROR : FAILED: Execution > Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. > MetaException(message:Table druid_tableau.calcs failed strict managed table > checks due to the following reason: Table is marked as a managed table but is > not transactional.) > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Completed executing > command(queryId=hive_20180703045727_c39c40d2-7d4a-46c7-a36d-7925e7c4a788); > Time taken: 6.794 seconds > 2018-07-03 04:57:36,337|INFO|Thread-721|machine
[jira] [Updated] (HIVE-20088) Beeline config location path is assembled incorrectly
[ https://issues.apache.org/jira/browse/HIVE-20088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denes Bodo updated HIVE-20088: -- Labels: easyfix usability (was: ) > Beeline config location path is assembled incorrectly > - > > Key: HIVE-20088 > URL: https://issues.apache.org/jira/browse/HIVE-20088 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 3.0.0 >Reporter: Denes Bodo >Assignee: Denes Bodo >Priority: Critical > Labels: easyfix, usability > > Checking the code in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/UserHS2ConnectionFileParser.java] > or in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/BeelineSiteParser.java] > I see {code}locations.add(ETC_HIVE_CONF_LOCATION + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > where a file separator should be used: > {code}locations.add(ETC_HIVE_CONF_LOCATION + File.separator + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > Due to this, BeeLine cannot use the configuration when this is the > only available location. > In my hadoop-3 setup, the locations list contains the following: > {code} > /home/myuser/.beeline/beeline-site.xml > /etc/hive/confbeeline-site.xml > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20088) Beeline config location path is assembled incorrectly
[ https://issues.apache.org/jira/browse/HIVE-20088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denes Bodo reassigned HIVE-20088: - > Beeline config location path is assembled incorrectly > - > > Key: HIVE-20088 > URL: https://issues.apache.org/jira/browse/HIVE-20088 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 3.0.0 >Reporter: Denes Bodo >Assignee: Denes Bodo >Priority: Critical > > Checking the code in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/UserHS2ConnectionFileParser.java] > or in > [https://github.com/apache/hive/blob/branch-3/beeline/src/java/org/apache/hive/beeline/hs2connection/BeelineSiteParser.java] > I see {code}locations.add(ETC_HIVE_CONF_LOCATION + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > where a file separator should be used: > {code}locations.add(ETC_HIVE_CONF_LOCATION + File.separator + > DEFAULT_BEELINE_SITE_FILE_NAME);{code} > Due to this, BeeLine cannot use the configuration when this is the > only available location. > In my hadoop-3 setup, the locations list contains the following: > {code} > /home/myuser/.beeline/beeline-site.xml > /etc/hive/confbeeline-site.xml > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20067) fix InsertEvent on mm tables to not cause failing capability checks
[ https://issues.apache.org/jira/browse/HIVE-20067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532717#comment-16532717 ] Sankar Hariappan commented on HIVE-20067: - +1, pending tests > fix InsertEvent on mm tables to not cause failing capability checks > --- > > Key: HIVE-20067 > URL: https://issues.apache.org/jira/browse/HIVE-20067 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20067.01.patch, HIVE-20067.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20065) metastore should not rely on jackson 1.x
[ https://issues.apache.org/jira/browse/HIVE-20065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532692#comment-16532692 ] Sankar Hariappan commented on HIVE-20065: - +1, pending tests > metastore should not rely on jackson 1.x > > > Key: HIVE-20065 > URL: https://issues.apache.org/jira/browse/HIVE-20065 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20065.01.patch, HIVE-20065.02.patch > > > Somehow jackson 1.x is on the classpath in some IDEs, and the 1.x > org.codehaus jackson is being used from a dozen classes - meanwhile the > pom.xml doesn't mention it at all, only fasterxml's 2.9.0. > I don't know where it gets the jackson 1.x implementation, but I think it > shouldn't rely on that... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20085) Druid-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional
[ https://issues.apache.org/jira/browse/HIVE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532690#comment-16532690 ] Nishant Bangarwa commented on HIVE-20085: - Discussion with [~ashutoshc] To play well in this new world of managed and external tables, we need to make all Druid tables external. For that to work well, we will need the following changes: 1) Users will be required to specify the external qualifier when creating Druid tables. 2) If a user creates a table from Hive without specifying a druid datasource, we will store the schema in HMS and use it. 3) If a user creates a table from Hive and specifies a druid datasource, then we don't store the schema in HMS. 4) Inserts should be allowed into external tables (to be consistent with external table semantics). 5) Drop will not delete any data from druid. Users may set external.table.purge=true in table properties to override this. > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > --- > > Key: HIVE-20085 > URL: https://issues.apache.org/jira/browse/HIVE-20085 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > {code} > drop table if exists calcs; > create table calcs > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ( > "druid.segment.granularity" = "MONTH", > "druid.query.granularity" = "DAY") > AS SELECT > cast(datetime0 as timestamp with local time zone) `__time`, > key, > str0, str1, str2, str3, > date0, date1, date2, date3, > time0, time1, > datetime0, datetime1, > zzz, > cast(bool0 as string) bool0, > cast(bool1 as string) bool1, > cast(bool2 as string) bool2, > cast(bool3 as string) bool3, > 
int0, int1, int2, int3, > num0, num1, num2, num3, num4 > from tableau_orc.calcs; > 2018-07-03 04:57:31,911|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Status: Running > (Executing on YARN cluster with App id application_1530592209763_0009) > ... > ... > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_BYTES_TO_MEM: > 0 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_PHASE_TIME: > 330 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SPILLED_RECORDS: 17 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : OUTPUT_RECORDS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > org.apache.hadoop.hive.llap.counters.LlapWmCounters: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : GUARANTEED_QUEUED_NS: > 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > GUARANTEED_RUNNING_NS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_QUEUED_NS: 2162643606 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_RUNNING_NS: 12151664909 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-2:DEPENDENCY_COLLECTION] in serial mode > 2018-07-03 
04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-0:MOVE] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Moving data to > directory > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/calcs > from > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/.hive-staging_hive_2018-07-03_04-57-27_351_7124633902209008283-3/-ext-10002 > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-4:DDL] in serial mo
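Under the external-table semantics proposed in the comment above, the failing CTAS from the log would carry an EXTERNAL qualifier and, optionally, the purge property. This is only a sketch of the proposal, not committed syntax:

```sql
-- Sketch assuming the proposed semantics: Druid tables become external,
-- and external.table.purge opts in to deleting Druid data on DROP (point 5).
DROP TABLE IF EXISTS calcs;
CREATE EXTERNAL TABLE calcs
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES (
  "druid.segment.granularity" = "MONTH",
  "druid.query.granularity" = "DAY",
  "external.table.purge" = "true")
AS SELECT cast(datetime0 as timestamp with local time zone) `__time`, key, str0
FROM tableau_orc.calcs;
```

Because the table is external, it bypasses the strict managed-table transactional check that produced the DDLTask MetaException in the log.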
[jira] [Assigned] (HIVE-20087) Fix reoptimization for semijoin reduction cases
[ https://issues.apache.org/jira/browse/HIVE-20087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-20087: --- Assignee: Zoltan Haindrich > Fix reoptimization for semijoin reduction cases > --- > > Key: HIVE-20087 > URL: https://issues.apache.org/jira/browse/HIVE-20087 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > > The real TS will get further info about the other table, which makes the > physically read record count inaccurate. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20037) Print root cause exception's toString() rather than getMessage()
[ https://issues.apache.org/jira/browse/HIVE-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532689#comment-16532689 ] Hive QA commented on HIVE-20037: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 6m 0s{color} | {color:blue} ql in master has 2286 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s{color} | {color:red} ql: The patch generated 2 new + 11 unchanged - 0 fixed = 13 total (was 11) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12374/dev-support/hive-personality.sh | | git revision | master / 6311e0b | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12374/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12374/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Print root cause exception's toString() rather than getMessage() > > > Key: HIVE-20037 > URL: https://issues.apache.org/jira/browse/HIVE-20037 > Project: Hive > Issue Type: Sub-task > Components: Spark >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Trivial > Attachments: HIVE-20037.1.patch, HIVE-20037.2.patch > > > When we run HoS job and if it fails for some errors, we are printing the > exception message rather than exception toString(), for some exceptions, > e.g., this java.lang.NoClassDefFoundError, we are missing the exception type > information. 
> {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > org/apache/spark/SparkConf)' > {noformat} > If we use exception's toString(), it will be as follows and make more sense. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > java.lang.NoClassDefFoundError: org/apache/spark/SparkConf)' > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
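The difference the patch targets can be seen with a toy example; this is not Hive code, just any throwable whose message is a bare class path:

```java
public class ExceptionStringDemo {
    public static void main(String[] args) {
        Throwable t = new NoClassDefFoundError("org/apache/spark/SparkConf");
        // getMessage() drops the exception type; the reader cannot tell a
        // missing class from, say, a bad configuration value.
        System.out.println(t.getMessage()); // org/apache/spark/SparkConf
        // toString() prefixes the class name, which is what switching the
        // log statement to the exception's toString() buys.
        System.out.println(t); // java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
    }
}
```

Throwable.toString() is defined as the class name, a colon and a space, then the localized message, so the type information always survives.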
[jira] [Commented] (HIVE-20019) Remove commons-logging and move to slf4j
[ https://issues.apache.org/jira/browse/HIVE-20019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532680#comment-16532680 ] Hive QA commented on HIVE-20019: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 16s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 18s{color} | {color:blue} shims/common in master has 6 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 5s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 23s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 3s{color} | {color:blue} ql in master has 2286 extant Findbugs warnings. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 44s{color} | {color:blue} llap-server in master has 84 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 39s{color} | {color:blue} service in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} hcatalog/webhcat/svr in master has 96 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} The patch common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} The patch standalone-metastore passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} llap-tez: The patch generated 0 
new + 4 unchanged - 3 fixed = 4 total (was 7) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 2 new + 354 unchanged - 31 fixed = 356 total (was 385) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} The patch llap-server passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} The patch service passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch hbase-handler passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch svr passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 51s{color} | {color:red} root: The patch generated 2 new + 429 unchang
[jira] [Commented] (HIVE-20019) Remove commons-logging and move to slf4j
[ https://issues.apache.org/jira/browse/HIVE-20019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532649#comment-16532649 ] Hive QA commented on HIVE-20019: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930145/HIVE-20019.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14635 tests executed *Failed tests:* {noformat} TestClearDanglingScratchDir - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestSparkStatistics - did not produce a TEST-*.xml file (likely timed out) (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12372/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12372/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12372/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12930145 - PreCommit-HIVE-Build > Remove commons-logging and move to slf4j > > > Key: HIVE-20019 > URL: https://issues.apache.org/jira/browse/HIVE-20019 > Project: Hive > Issue Type: Sub-task > Components: Logging >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-20019.1.patch, HIVE-20019.2.patch, > HIVE-20019.3.patch > > > Still seeing several references to commons-logging. We should move all > classes to slf4j instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20067) fix InsertEvent on mm tables to not cause failing capability checks
[ https://issues.apache.org/jira/browse/HIVE-20067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-20067: Attachment: HIVE-20067.02.patch > fix InsertEvent on mm tables to not cause failing capability checks > --- > > Key: HIVE-20067 > URL: https://issues.apache.org/jira/browse/HIVE-20067 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20067.01.patch, HIVE-20067.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20067) fix InsertEvent on mm tables to not cause failing capability checks
[ https://issues.apache.org/jira/browse/HIVE-20067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532631#comment-16532631 ] Zoltan Haindrich commented on HIVE-20067: - seems like the problematic code snippet has been copied into HiveMetastore.java as well since I created this patch... updating patch for a post-HIVE-19267 world... > fix InsertEvent on mm tables to not cause failing capability checks > --- > > Key: HIVE-20067 > URL: https://issues.apache.org/jira/browse/HIVE-20067 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20067.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20086) Druid-hive kafka ingestion: indexing tasks kept running even after setting 'druid.kafka.ingestion' = 'STOP'
[ https://issues.apache.org/jira/browse/HIVE-20086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dileep Kumar Chiguruvada reassigned HIVE-20086: --- Assignee: Nishant Bangarwa > Druid-hive kafka ingestion: indexing tasks kept running even after setting > 'druid.kafka.ingestion' = 'STOP' > --- > > Key: HIVE-20086 > URL: https://issues.apache.org/jira/browse/HIVE-20086 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > Attachments: Screen Shot 2018-07-02 at 8.51.10 PM.png > > > Druid-hive kafka ingestion: indexing tasks kept running even after setting > 'druid.kafka.ingestion' = 'STOP'. > When ingestion is started ('druid.kafka.ingestion' = 'START'), the indexing task > starts running and is able to load rows into the Druid-Hive table. > But even after stopping it, the indexing task keeps running without shutting > down gracefully. > The issue is that every START of ingestion pools up additional indexing > tasks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20014) Druid SECOND/HOUR/MINUTE does not return correct values when applied to String Columns
[ https://issues.apache.org/jira/browse/HIVE-20014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20014: Description: Reported by [~dileep529] - Query SELECT MINUTE(`time1`) FROM calcs; returns null when the String column only contains timestamp and does not contain any date information in the column. The Druid parser fails to parse the time string values and returns null. {code} 1: jdbc:hive2://ctr-e138-1518143905142-379982> SELECT MINUTE(`time1`) FROM calcs; INFO : Compiling command(queryId=hive_20180627145215_05147329-b8d8-491c-9bab-6fd5045542db): SELECT MINUTE(`time1`) FROM calcs INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:vc, type:int, comment:null)], properties:null) INFO : Completed compiling command(queryId=hive_20180627145215_05147329-b8d8-491c-9bab-6fd5045542db); Time taken: 0.134 seconds INFO : Executing command(queryId=hive_20180627145215_05147329-b8d8-491c-9bab-6fd5045542db): SELECT MINUTE(`time1`) FROM calcs INFO : Completed executing command(queryId=hive_20180627145215_05147329-b8d8-491c-9bab-6fd5045542db); Time taken: 0.002 seconds INFO : OK +---+ | vc | +---+ | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | | NULL | +---+ 17 rows selected (0.266 seconds) 1: jdbc:hive2://ctr-e138-1518143905142-379982> SELECT time1 from calcs; INFO : Compiling command(queryId=hive_20180627145225_93b872de-a698-4859-9730-983eede6935d): SELECT time1 from calcs INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:time1, type:string, comment:null)], properties:null) INFO : Completed compiling command(queryId=hive_20180627145225_93b872de-a698-4859-9730-983eede6935d); Time taken: 0.116 seconds INFO : Executing command(queryId=hive_20180627145225_93b872de-a698-4859-9730-983eede6935d): 
SELECT time1 from calcs INFO : Completed executing command(queryId=hive_20180627145225_93b872de-a698-4859-9730-983eede6935d); Time taken: 0.003 seconds INFO : OK +---+ | time1 | +---+ | 22:20:14 | | 22:50:16 | | 19:36:22 | | 19:48:23 | | 00:05:57 | | NULL | | 04:48:07 | | NULL | | 19:57:33 | | NULL | | 04:40:49 | | 02:05:25 | | NULL | | NULL | | 12:33:57 | | 18:58:41 | | 09:33:31 | +---+ 17 rows selected (0.202 seconds) 1: jdbc:hive2://ctr-e138-1518143905142-379982> EXPLAIN SELECT MINUTE(`time1`) FROM calcs; INFO : Compiling command(queryId=hive_20180627145237_39e53a7e-35cb-4e17-8ccb-884c6f6358cd): EXPLAIN SELECT MINUTE(`time1`) FROM calcs INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], properties:null) INFO : Completed compiling command(queryId=hive_20180627145237_39e53a7e-35cb-4e17-8ccb-884c6f6358cd); Time taken: 0.107 seconds INFO : Executing command(queryId=hive_20180627145237_39e53a7e-35cb-4e17-8ccb-884c6f6358cd): EXPLAIN SELECT MINUTE(`time1`) FROM calcs INFO : Starting task [Stage-1:EXPLAIN] in serial mode INFO : Completed executing command(queryId=hive_20180627145237_39e53a7e-35cb-4e17-8ccb-884c6f6358cd); Time taken: 0.003 seconds INFO : OK ++ | Explain | ++ | Plan optimized by CBO. 
| || | Stage-0| | Fetch Operator | | limit:-1 | | Select Operator [SEL_1]| | Output:["_col0"] | | TableScan [TS_0] | | Output:["vc"],properties:{"druid.fieldNames":"vc","druid.fieldTypes":"int","druid.query.json":"{\"queryType\":\"scan\",\"dataSource\":\"druid_tableau.calcs\",\"intervals\":[\"1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z\"],\"virtualColumns\":[{\"type\":\"expression\",\"name\":\"vc\",\"expression\":\"timestamp_extract(timestamp_parse(\\\"time1\\\",null,'UTC'),'MINUTE','UTC')\",\"outputType\":\"LONG\"}],\"columns\":[\"vc\"],\"resultFormat\":\"compactedList\"}","druid.query.type":"scan"} | || ++ 10 rows selected (0.136 seconds) {code} was: Query SELECT MINUTE(`time1`) FROM calcs; returns null when the String column only contains timestamp and does not contain any date information in the column. The Druid parser fails to parse the time string values and returns null. {
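The plan above makes the root cause visible: the generated Druid expression, timestamp_extract(timestamp_parse("time1",null,'UTC'),'MINUTE','UTC'), tries to parse a time-only string such as "22:20:14" as a full timestamp, the parse fails, and the extract yields NULL. A minimal stand-in using plain java.time (an illustration of the failure mode, not Druid's actual parser; the method names here are hypothetical) shows the effect and how a time-of-day parse recovers the value:

```java
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.format.DateTimeParseException;

public class MinuteParseDemo {

    // Mirrors the failing path: parse the value as a full timestamp and
    // return null on failure, as Druid's timestamp_parse does.
    static Integer minuteViaTimestampParse(String value) {
        try {
            return LocalDateTime.parse(value).getMinute();
        } catch (DateTimeParseException e) {
            return null; // "22:20:14" has no date part, so parsing fails
        }
    }

    // Parsing with a time-of-day format recovers the minute.
    static Integer minuteAsTimeOfDay(String value) {
        try {
            return LocalTime.parse(value).getMinute();
        } catch (DateTimeParseException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(minuteViaTimestampParse("22:20:14")); // null
        System.out.println(minuteAsTimeOfDay("22:20:14"));       // 20
    }
}
```

The same sketch suggests why all 17 rows come back NULL: every value in `time1` is a bare HH:mm:ss string, so every timestamp parse fails identically.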
[jira] [Updated] (HIVE-20013) Add an Implicit cast to date type for to_date function
[ https://issues.apache.org/jira/browse/HIVE-20013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20013: Description: Issue - SELECT TO_DATE(date1), TO_DATE(datetime1) FROM druid_table_n1; Running this query on Druid returns null values when date1 and datetime1 are of type String. {code} INFO : Executing command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac): EXPLAIN SELECT TO_DATE(datetime0) ,TO_DATE(date0) FROM calcs INFO : Starting task [Stage-1:EXPLAIN] in serial mode INFO : Completed executing command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac); Time taken: 0.003 seconds INFO : OK ++ | Explain | ++ | Plan optimized by CBO. | || | Stage-0| | Fetch Operator | | limit:-1 | | Select Operator [SEL_1]| | Output:["_col0","_col1"] | | TableScan [TS_0] | | Output:["vc","vc0"],properties:{"druid.fieldNames":"vc,vc0","druid.fieldTypes":"date,date","druid.query.json":"{\"queryType\":\"scan\",\"dataSource\":\"druid_tableau.calcs\",\"intervals\":[\"1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z\"],\"virtualColumns\":[{\"type\":\"expression\",\"name\":\"vc\",\"expression\":\"timestamp_floor(\\\"datetime0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"},{\"type\":\"expression\",\"name\":\"vc0\",\"expression\":\"timestamp_floor(\\\"date0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"}],\"columns\":[\"vc\",\"vc0\"],\"resultFormat\":\"compactedList\"}","druid.query.type":"scan"} | || ++ 10 rows selected (0.606 seconds) {code} Reported by [~dileep529] was: Issue - SELECT TO_DATE(date1), TO_DATE(datetime1) FROM druid_table_n1; Running this query on Druid returns null values when date1 and datetime1 are of type String. 
{code} INFO : Executing command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac): EXPLAIN SELECT TO_DATE(datetime0) ,TO_DATE(date0) FROM calcs INFO : Starting task [Stage-1:EXPLAIN] in serial mode INFO : Completed executing command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac); Time taken: 0.003 seconds INFO : OK ++ | Explain | ++ | Plan optimized by CBO. | || | Stage-0| | Fetch Operator | | limit:-1 | | Select Operator [SEL_1]| | Output:["_col0","_col1"] | | TableScan [TS_0] | | Output:["vc","vc0"],properties:{"druid.fieldNames":"vc,vc0","druid.fieldTypes":"date,date","druid.query.json":"{\"queryType\":\"scan\",\"dataSource\":\"druid_tableau.calcs\",\"intervals\":[\"1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z\"],\"virtualColumns\":[{\"type\":\"expression\",\"name\":\"vc\",\"expression\":\"timestamp_floor(\\\"datetime0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"},{\"type\":\"expression\",\"name\":\"vc0\",\"expression\":\"timestamp_floor(\\\"date0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"}],\"columns\":[\"vc\",\"vc0\"],\"resultFormat\":\"compactedList\"}","druid.query.type":"scan"} | || ++ 10 rows selected (0.606 seconds) {code} > Add an Implicit cast to date type for to_date function > -- > > Key: HIVE-20013 > URL: https://issues.apache.org/jira/browse/HIVE-20013 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20013.patch, HIVE-20013.patch > > > Issue - > SELECT TO_DATE(date1), TO_DATE(datetime1) FROM druid_table_n1; > Running this query on Druid returns null values when date1 and datetime1 are > of type String. > {code} > INFO : Executing > command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac): > EXPLAIN SELECT TO_DATE(datetime0) ,TO_DATE(date0) FROM calcs > INFO : Starting task [Stage-1:EXPLAIN] in serial mode > INFO : Completed executing > command
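In the plan above, timestamp_floor is applied directly to the string columns, which evaluates to null; the fix this issue proposes is an implicit cast, i.e. parsing the string into a timestamp before flooring. A hedged sketch of that transformation in plain java.time (a stand-in for Druid's expression engine; the method names are hypothetical):

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeParseException;

public class ToDateFloorDemo {
    private static final long DAY_MILLIS = 24L * 60 * 60 * 1000;

    // Stand-in for timestamp_floor(..., 'P1D'): only defined for a numeric
    // epoch-millis input; handed anything else (such as a raw string
    // column) it yields null, which is the reported behaviour.
    static Long floorToDay(Object value) {
        if (!(value instanceof Long)) {
            return null;
        }
        return Math.floorDiv((Long) value, DAY_MILLIS) * DAY_MILLIS;
    }

    // The proposed implicit cast: parse the string to a timestamp first,
    // then floor. Assumes "yyyy-MM-dd HH:mm:ss" strings in UTC.
    static Long floorToDayWithCast(String value) {
        try {
            long millis = LocalDateTime.parse(value.replace(' ', 'T'))
                    .toInstant(ZoneOffset.UTC).toEpochMilli();
            return floorToDay(millis);
        } catch (DateTimeParseException e) {
            return null;
        }
    }
}
```

With the cast inserted ahead of the floor, TO_DATE over a string column produces day-truncated timestamps instead of nulls.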
[jira] [Assigned] (HIVE-20085) Druid-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional
[ https://issues.apache.org/jira/browse/HIVE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dileep Kumar Chiguruvada reassigned HIVE-20085: --- Assignee: Nishant Bangarwa > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > --- > > Key: HIVE-20085 > URL: https://issues.apache.org/jira/browse/HIVE-20085 > Project: Hive > Issue Type: Bug > Components: Hive, StorageHandler >Affects Versions: 3.0.0 >Reporter: Dileep Kumar Chiguruvada >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 3.0.0 > > > Druid-Hive (managed) table creation fails with strict managed table checks: > Table is marked as a managed table but is not transactional > {code} > drop table if exists calcs; > create table calcs > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ( > "druid.segment.granularity" = "MONTH", > "druid.query.granularity" = "DAY") > AS SELECT > cast(datetime0 as timestamp with local time zone) `__time`, > key, > str0, str1, str2, str3, > date0, date1, date2, date3, > time0, time1, > datetime0, datetime1, > zzz, > cast(bool0 as string) bool0, > cast(bool1 as string) bool1, > cast(bool2 as string) bool2, > cast(bool3 as string) bool3, > int0, int1, int2, int3, > num0, num1, num2, num3, num4 > from tableau_orc.calcs; > 2018-07-03 04:57:31,911|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Status: Running > (Executing on YARN cluster with App id application_1530592209763_0009) > ... > ... 
> 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_BYTES_TO_MEM: > 0 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SHUFFLE_PHASE_TIME: > 330 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : SPILLED_RECORDS: 17 > 2018-07-03 04:57:36,334|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : OUTPUT_RECORDS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > org.apache.hadoop.hive.llap.counters.LlapWmCounters: > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : GUARANTEED_QUEUED_NS: > 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > GUARANTEED_RUNNING_NS: 0 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_QUEUED_NS: 2162643606 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : > SPECULATIVE_RUNNING_NS: 12151664909 > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-2:DEPENDENCY_COLLECTION] in serial mode > 2018-07-03 04:57:36,335|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-0:MOVE] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Moving data to > directory > 
hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/calcs > from > hdfs://mycluster/warehouse/tablespace/managed/hive/druid_tableau.db/.hive-staging_hive_2018-07-03_04-57-27_351_7124633902209008283-3/-ext-10002 > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Starting task > [Stage-4:DDL] in serial mode > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|ERROR : FAILED: Execution > Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. > MetaException(message:Table druid_tableau.calcs failed strict managed table > checks due to the following reason: Table is marked as a managed table but is > not transactional.) > 2018-07-03 04:57:36,336|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa121a45-29eb-48a8-8628-ae5368aa172d|INFO : Completed executing > command(queryId=hive_20180703045727_c39c40d2-7d4a-46c7-a36d-7925e7c4a788); > Time taken: 6.794 seconds > 2018-07-03 04:57:36,337|INFO|Thread-721|machine.py:111 - > tee_pipe()||aa12
[jira] [Commented] (HIVE-20067) fix InsertEvent on mm tables to not cause failing capability checks
[ https://issues.apache.org/jira/browse/HIVE-20067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532592#comment-16532592 ] Hive QA commented on HIVE-20067: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930111/HIVE-20067.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14637 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=70) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] (batchId=152) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12371/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12371/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12371/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12930111 - PreCommit-HIVE-Build > fix InsertEvent on mm tables to not cause failing capability checks > --- > > Key: HIVE-20067 > URL: https://issues.apache.org/jira/browse/HIVE-20067 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20067.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18032) Stats: Consolidate stat state for limit 0 and where false
[ https://issues.apache.org/jira/browse/HIVE-18032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-18032: --- Assignee: Zoltan Haindrich > Stats: Consolidate stat state for limit 0 and where false > - > > Key: HIVE-18032 > URL: https://issues.apache.org/jira/browse/HIVE-18032 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > > query (from llap_nullscan.q): > {code} > explain > select * from (select key from src_orc where false) a left outer join > (select key from src_orc limit 0) > b on a.key=b.key > {code} > currently: > * limit 0 produces > Statistics: Num rows: 0 Data size: 0 Basic stats: COMPLETE Column stats: > NONE > * where false > Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-17533) Fill stats for temporary tables
[ https://issues.apache.org/jira/browse/HIVE-17533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich resolved HIVE-17533. - Resolution: Invalid for inline tables stats already seem to work... and in the mentioned test: I'm not sure what I was thinking about. Right now it doesn't seem there is anything that can be done there, so I think it's better to just close this :) > Fill stats for temporary tables > --- > > Key: HIVE-17533 > URL: https://issues.apache.org/jira/browse/HIVE-17533 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Priority: Major > > Doing {{insert into t values (...)}} initializes a temporary table with 0 > stats. > This could be made accurate since the contents of the table are already known. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20067) fix InsertEvent on mm tables to not cause failing capability checks
[ https://issues.apache.org/jira/browse/HIVE-20067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532558#comment-16532558 ] Hive QA commented on HIVE-20067: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 7s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 2286 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} standalone-metastore: The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} The patch ql passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12371/dev-support/hive-personality.sh | | git revision | master / 6311e0b | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12371/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > fix InsertEvent on mm tables to not cause failing capability checks > --- > > Key: HIVE-20067 > URL: https://issues.apache.org/jira/browse/HIVE-20067 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20067.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20067) fix InsertEvent on mm tables to not cause failing capability checks
[ https://issues.apache.org/jira/browse/HIVE-20067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532555#comment-16532555 ] Zoltan Haindrich commented on HIVE-20067: - [~ashutoshc] Could you please take a look? > fix InsertEvent on mm tables to not cause failing capability checks > --- > > Key: HIVE-20067 > URL: https://issues.apache.org/jira/browse/HIVE-20067 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20067.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-17555) StatsUtils considers all ranges to be 'long'; and lose precision / introduce bugs in some cases
[ https://issues.apache.org/jira/browse/HIVE-17555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich resolved HIVE-17555. - Resolution: Not A Problem although this might be problematic; since it doesn't cause any immediate problem - and it is currently working fine... I'll close this... > StatsUtils considers all ranges to be 'long'; and lose precision / introduce > bugs in some cases > > > Key: HIVE-17555 > URL: https://issues.apache.org/jira/browse/HIVE-17555 > Project: Hive > Issue Type: Bug > Components: Statistics >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > > The following test fails because the combined range is: {{\[0:0\]}} > This problem is present in other methods of StatsUtils as well > {code} > package org.apache.hadoop.hive.ql.stats; > import static org.junit.Assert.assertTrue; > import org.apache.hadoop.hive.ql.plan.ColStatistics.Range; > import org.junit.Test; > public class TestStatsUtils { > @Test > public void test1() { > Range r1 = new Range(0.1f, 0.4f); > Range r2 = new Range(0.3f, 0.9f); > assertTrue(rangeContains(r1, 0.2f)); > Range r3 = StatsUtils.combineRange(r1, r2); > System.out.println(r3); > assertTrue(rangeContains(r3, 0.2f)); > } > private boolean rangeContains(Range range, Number f) { > double m = range.minValue.doubleValue(); > double M = range.maxValue.doubleValue(); > double v = f.doubleValue(); > return m <= v && v <= M; > } > } > {code} > https://github.com/apache/hive/blob/32e854ef1c25f21d53f7932723cfc76bf75a71cd/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L1955 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
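The precision loss the quoted test exposes is easy to reproduce in isolation: combining the two float ranges through long arithmetic truncates every bound of [0.1, 0.4] and [0.3, 0.9] to 0, so the combined range collapses to [0, 0] and no longer contains 0.2. A simplified stand-in (not the real StatsUtils.combineRange, just the long-only arithmetic it effectively performs):

```java
public class RangeCombineDemo {

    // Combine two ranges the way a long-only code path would: each bound
    // is truncated to a long before min/max, so sub-integer ranges
    // collapse to [0, 0].
    static long[] combineAsLong(double min1, double max1, double min2, double max2) {
        return new long[] {
            Math.min((long) min1, (long) min2), // (long) 0.1 == 0
            Math.max((long) max1, (long) max2)  // (long) 0.9 == 0
        };
    }

    // Combining in floating point keeps the expected [0.1, 0.9].
    static double[] combineAsDouble(double min1, double max1, double min2, double max2) {
        return new double[] { Math.min(min1, min2), Math.max(max1, max2) };
    }
}
```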
[jira] [Resolved] (HIVE-15141) metrics reporter using HADOOP2 is not able to re-initialize - and prevents hiveserver2 recovery
[ https://issues.apache.org/jira/browse/HIVE-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich resolved HIVE-15141. - Resolution: Later > metrics reporter using HADOOP2 is not able to re-initialize - and prevents > hiveserver2 recovery > --- > > Key: HIVE-15141 > URL: https://issues.apache.org/jira/browse/HIVE-15141 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > > * hiveserver2 initializes {{MetricsFactory}} => CodahaleMetrics created => > registers HADOOP2 source > * exception from somewhere... possibly recoverable > * MetricsFactory deinitializes the backend with close() > * retries fail because the metrics system can't continue -- This message was sent by Atlassian JIRA (v7.6.3#76005)
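The bullet sequence above describes a register-once metrics source: once the HADOOP2 source is registered under a fixed name, re-initialization fails unless close() also de-registers it. A hypothetical reduction of that failure mode (this is not the actual Hadoop metrics API; the registry and names are invented for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MetricsReinitDemo {

    static final ConcurrentMap<String, Object> SOURCES = new ConcurrentHashMap<>();

    // Registering the same source name twice fails, as a metrics system
    // with a global per-name registry would.
    static void init() {
        if (SOURCES.putIfAbsent("hiveserver2", new Object()) != null) {
            throw new IllegalStateException("source already registered: hiveserver2");
        }
    }

    // Unless close() also removes the registration, every subsequent
    // init() -- i.e. every recovery attempt -- keeps throwing.
    static void close(boolean unregister) {
        if (unregister) {
            SOURCES.remove("hiveserver2");
        }
    }
}
```

Under this reading, the recoverable exception is not the real problem; the close() that leaves the source registered is what turns it into a permanent failure.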
[jira] [Updated] (HIVE-20065) metastore should not rely on jackson 1.x
[ https://issues.apache.org/jira/browse/HIVE-20065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-20065: Attachment: HIVE-20065.02.patch > metastore should not rely on jackson 1.x > > > Key: HIVE-20065 > URL: https://issues.apache.org/jira/browse/HIVE-20065 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20065.01.patch, HIVE-20065.02.patch > > > somehow jackson 1.x is on the classpath in some IDEs... and somehow 1.x > org.codehaus jackson is being used from a dozen classes - meanwhile the > pom.xml doesn't mention it at all; only fasterxml's 2.9.0. > I don't know where it gets the jackson 1.x implementation, but I think it > shouldn't rely on that... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20065) metastore should not rely on jackson 1.x
[ https://issues.apache.org/jira/browse/HIVE-20065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532507#comment-16532507 ] Zoltan Haindrich commented on HIVE-20065: - [~sankarh] Could you please take a look? > metastore should not rely on jackson 1.x > > > Key: HIVE-20065 > URL: https://issues.apache.org/jira/browse/HIVE-20065 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20065.01.patch, HIVE-20065.02.patch > > > somehow jackson 1.x is on the classpath in some IDEs... and somehow 1.x > org.codehaus jackson is being used from a dozen classes - meanwhile the > pom.xml doesn't mention it at all; only fasterxml's 2.9.0. > I don't know where it gets the jackson 1.x implementation, but I think it > shouldn't rely on that... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20066) hive.load.data.owner is compared to full principal
[ https://issues.apache.org/jira/browse/HIVE-20066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532503#comment-16532503 ] Hive QA commented on HIVE-20066: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12930109/HIVE-20066.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14636 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12370/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12370/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12370/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12930109 - PreCommit-HIVE-Build > hive.load.data.owner is compared to full principal > -- > > Key: HIVE-20066 > URL: https://issues.apache.org/jira/browse/HIVE-20066 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-20066.1.patch > > > HIVE-19928 compares the user running HS2 to the configured owner > (hive.load.data.owner) to check if we're able to move the file with LOAD DATA > or need to copy. > This check compares the full username (that may contain the full kerberos > principal) to hive.load.data.owner. We should compare to the short username > ({{UGI.getShortUserName()}}) instead. That's used in similar context > [here|https://github.com/apache/hive/blob/f519db7eafacb4b4d2d9fe2a9e10e908d8077224/common/src/java/org/apache/hadoop/hive/common/FileUtils.java#L398]. 
> cc [~djaiswal] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
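The comparison the description criticizes pits a value like "hive/host@REALM" against a configured owner of "hive", which can never match. A sketch of the short-name extraction the patch switches to: for a simple Kerberos principal, UGI.getShortUserName() reduces to taking the component before the first '/' or '@' (the real method also applies the cluster's auth_to_local rules; this approximation covers only the default rule):

```java
public class ShortNameDemo {

    // Default-rule approximation of UserGroupInformation.getShortUserName():
    // the short name is everything before the first '/' or '@'.
    static String shortUserName(String principal) {
        for (int i = 0; i < principal.length(); i++) {
            char c = principal.charAt(i);
            if (c == '/' || c == '@') {
                return principal.substring(0, i);
            }
        }
        return principal; // already a short name
    }

    // The owner check, fixed to compare short names rather than the
    // full principal (illustrative, not the exact Hive code).
    static boolean isConfiguredOwner(String principal, String configuredOwner) {
        return shortUserName(principal).equals(configuredOwner);
    }
}
```

With the full-principal comparison, a secured cluster always falls back to copying; comparing short names lets LOAD DATA move the file as intended.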
[jira] [Commented] (HIVE-20067) fix InsertEvent on mm tables to not cause failing capability checks
[ https://issues.apache.org/jira/browse/HIVE-20067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532439#comment-16532439 ] Zoltan Haindrich commented on HIVE-20067: - [~teddy.choi] Could you please take a look? > fix InsertEvent on mm tables to not cause failing capability checks > --- > > Key: HIVE-20067 > URL: https://issues.apache.org/jira/browse/HIVE-20067 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20067.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20049) hive.groupby.limit.extrastep should be false by default
[ https://issues.apache.org/jira/browse/HIVE-20049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532430#comment-16532430 ] Laszlo Bodor commented on HIVE-20049: - test failures are related, as the patch caused explain differences; I'll handle them in 04.patch > hive.groupby.limit.extrastep should be false by default > --- > > Key: HIVE-20049 > URL: https://issues.apache.org/jira/browse/HIVE-20049 > Project: Hive > Issue Type: Task > Components: Query Planning >Reporter: Ashutosh Chauhan >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-20049.01.patch, HIVE-20049.02.patch, > HIVE-20049.03.patch > > > In fact this flag is not needed: since FetchTask can enforce the limit, there is > never a reason to have another vertex purely for limit. > > It was introduced by HIVE-12963 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20041) ResultsCache: Improve logging for concurrent queries
[ https://issues.apache.org/jira/browse/HIVE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532405#comment-16532405 ] Laszlo Bodor commented on HIVE-20041: - failure is not related, passed locally, uploaded 03.patch in case a new test is needed > ResultsCache: Improve logging for concurrent queries > --- > > Key: HIVE-20041 > URL: https://issues.apache.org/jira/browse/HIVE-20041 > Project: Hive > Issue Type: Improvement > Components: Diagnosability >Reporter: Gopal V >Assignee: Laszlo Bodor >Priority: Minor > Attachments: HIVE-20041.01.patch, HIVE-20041.02.patch, > HIVE-20041.03.patch > > > The logging for QueryResultsCache ends up printing information without > context, like > {code} > 2018-06-30T17:48:45,502 INFO [HiveServer2-Background-Pool: Thread-166] > results.QueryResultsCache: Waiting on pending cacheEntry > {code} > {code} > 2018-06-30T17:50:17,963 INFO [HiveServer2-Background-Pool: Thread-145] > ql.Driver: savedToCache: true > {code} > The previous lines for this are at DEBUG level, so the logging ends up being > useless for debugging at INFO level. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20041) ResultsCache: Improve logging for concurrent queries
[ https://issues.apache.org/jira/browse/HIVE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor updated HIVE-20041: Attachment: HIVE-20041.03.patch > ResultsCache: Improve logging for concurrent queries > --- > > Key: HIVE-20041 > URL: https://issues.apache.org/jira/browse/HIVE-20041 > Project: Hive > Issue Type: Improvement > Components: Diagnosability >Reporter: Gopal V >Assignee: Laszlo Bodor >Priority: Minor > Attachments: HIVE-20041.01.patch, HIVE-20041.02.patch, > HIVE-20041.03.patch > > > The logging for QueryResultsCache ends up printing information without > context, like > {code} > 2018-06-30T17:48:45,502 INFO [HiveServer2-Background-Pool: Thread-166] > results.QueryResultsCache: Waiting on pending cacheEntry > {code} > {code} > 2018-06-30T17:50:17,963 INFO [HiveServer2-Background-Pool: Thread-145] > ql.Driver: savedToCache: true > {code} > The previous lines for this are at DEBUG level, so the logging ends up being > useless for debugging at INFO level. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
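The kind of improvement HIVE-20041 asks for can be sketched as follows. The class, method, and query id below are hypothetical and do not reflect the actual patch; the sketch only shows the principle of prefixing each cache message with identifying context so that INFO-level lines from concurrent queries can be told apart.

```java
// Illustrative sketch only, not the HIVE-20041 patch. Log lines such as
// "Waiting on pending cacheEntry" carry no identifying context, so under
// concurrency there is no way to tell which query emitted them. Including
// the query id in the message fixes that.
public class CacheLogContext {

    // Hypothetical helper: attach the query id to a cache log message.
    static String withQueryContext(String queryId, String message) {
        return "QueryResultsCache: queryId=" + queryId + " " + message;
    }

    public static void main(String[] args) {
        // Before: "Waiting on pending cacheEntry" -- which query is waiting?
        // After: each message identifies its query.
        System.out.println(withQueryContext("hive_20180630174845_abc",
                "Waiting on pending cacheEntry"));
    }
}
```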
[jira] [Commented] (HIVE-20025) Clean-up of event files created by HiveProtoLoggingHook.
[ https://issues.apache.org/jira/browse/HIVE-20025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532394#comment-16532394 ] Sankar Hariappan commented on HIVE-20025: - 04.patch is committed to master! Thanks [~harishjp] and [~anishek] for the review! > Clean-up of event files created by HiveProtoLoggingHook. > > > Key: HIVE-20025 > URL: https://issues.apache.org/jira/browse/HIVE-20025 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: Hive, hooks, pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-20025.01-branch-3.patch, HIVE-20025.01.patch, > HIVE-20025.02.patch, HIVE-20025.03.patch, HIVE-20025.04.patch > > > Currently, HiveProtoLoggingHook writes event data to HDFS. The number of files > can grow very large. > Since the files are created under a folder with the date being part of the > path, Hive should have a way to clean up data older than a certain configured > time / date. This can be a job that runs as infrequently as > once a day. > This time should default to 1 week. There should also be a sane upper > bound on the number of files so that when a large cluster generates a lot of files > during a spike, we don't force the cluster to fall over. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
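The clean-up HIVE-20025 describes (deleting date-partitioned event directories older than a configured retention period) can be sketched in a self-contained way. The directory-name format ({{date=YYYY-MM-DD}}), the one-week default, and the {{isExpired}} helper below are assumptions for illustration, not Hive's actual layout or API:

```java
// Hedged sketch of a retention check like the one HIVE-20025 describes.
// Assumption: event files live under <base>/date=YYYY-MM-DD/, so a once-a-day
// job can delete whole directories whose date is older than the retention TTL.
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class EventDirRetention {

    // Hypothetical helper: does this directory fall outside the retention window?
    static boolean isExpired(String dirName, LocalDate today, int ttlDays) {
        // Only consider directories named like "date=2018-07-04".
        if (!dirName.startsWith("date=")) {
            return false;
        }
        LocalDate dirDate = LocalDate.parse(dirName.substring("date=".length()),
                DateTimeFormatter.ISO_LOCAL_DATE);
        return dirDate.isBefore(today.minusDays(ttlDays));
    }

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2018, 7, 10);
        int ttlDays = 7; // one-week default retention, per the issue

        System.out.println(isExpired("date=2018-07-01", today, ttlDays)); // true: older than a week
        System.out.println(isExpired("date=2018-07-05", today, ttlDays)); // false: within retention
    }
}
```

A real job would additionally enforce the upper bound on total file count that the issue mentions, deleting oldest directories first once the cap is exceeded.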