[jira] [Assigned] (HIVE-18736) Create Table Like doc needs to be updated
[ https://issues.apache.org/jira/browse/HIVE-18736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Madhudeep Petwal reassigned HIVE-18736:
---------------------------------------

    Assignee: Nikhil Harsoor  (was: Madhudeep Petwal)

> Create Table Like doc needs to be updated
> -----------------------------------------
>
>                 Key: HIVE-18736
>                 URL: https://issues.apache.org/jira/browse/HIVE-18736
>             Project: Hive
>          Issue Type: Bug
>          Components: Documentation
>            Reporter: Eugene Koifman
>            Assignee: Nikhil Harsoor
>            Priority: Major
>
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTableLike needs to be updated.
> According to HiveParser.g, the syntax is much richer than what is in the doc:
> {noformat}
> -> ^(TOK_CREATETABLE $name $temp? $ext? ifNotExists?
>      ^(TOK_LIKETABLE $likeName?)
>      columnNameTypeOrConstraintList?
>      tableComment?
>      tablePartition?
>      tableBuckets?
>      tableSkewed?
>      tableRowFormat?
>      tableFileFormat?
>      tableLocation?
>      tablePropertiesPrefixed?
>      selectStatementWithCTE?
>    )
> {noformat}
> I tried specifying TBLPROPERTIES on current master (Hive 3.0) and it works.
> Updated the doc accordingly, but more verification/doc changes are needed.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Assigned] (HIVE-18736) Create Table Like doc needs to be updated
[ https://issues.apache.org/jira/browse/HIVE-18736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Madhudeep Petwal reassigned HIVE-18736:
---------------------------------------

    Assignee: Madhudeep Petwal

> Create Table Like doc needs to be updated
> -----------------------------------------
>
>                 Key: HIVE-18736
>                 URL: https://issues.apache.org/jira/browse/HIVE-18736
>             Project: Hive
>          Issue Type: Bug
>          Components: Documentation
>            Reporter: Eugene Koifman
>            Assignee: Madhudeep Petwal
>            Priority: Major
>
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTableLike needs to be updated.
> According to HiveParser.g, the syntax is much richer than what is in the doc:
> {noformat}
> -> ^(TOK_CREATETABLE $name $temp? $ext? ifNotExists?
>      ^(TOK_LIKETABLE $likeName?)
>      columnNameTypeOrConstraintList?
>      tableComment?
>      tablePartition?
>      tableBuckets?
>      tableSkewed?
>      tableRowFormat?
>      tableFileFormat?
>      tableLocation?
>      tablePropertiesPrefixed?
>      selectStatementWithCTE?
>    )
> {noformat}
> I tried specifying TBLPROPERTIES on current master (Hive 3.0) and it works.
> Updated the doc accordingly, but more verification/doc changes are needed.
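[Editor's note] Going by the parser rule quoted in the issue, CREATE TABLE ... LIKE accepts more clauses than the wiki currently documents. A minimal sketch of what the richer syntax would look like (table names are invented; the TBLPROPERTIES clause is the one the reporter verified on Hive 3.0 master, while the TEMPORARY/EXTERNAL/LOCATION clauses follow from the grammar and should be re-verified before the doc is updated):

{code}
-- TBLPROPERTIES on CREATE TABLE LIKE: verified to work on current master per this issue.
CREATE TABLE t_copy LIKE t_orig
  TBLPROPERTIES ('note'='schema copied from t_orig');

-- Other modifiers allowed by the grammar ($temp?, $ext?, ifNotExists?, tableLocation?);
-- behavior should be confirmed before documenting.
CREATE TEMPORARY EXTERNAL TABLE IF NOT EXISTS t_ext_copy LIKE t_orig
  LOCATION '/tmp/t_ext_copy';
{code}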
[jira] [Commented] (HIVE-17178) Spark Partition Pruning Sink Operator can't target multiple Works
[ https://issues.apache.org/jira/browse/HIVE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368436#comment-16368436 ]

Hive QA commented on HIVE-17178:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12911046/HIVE-17178.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 13786 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_bmj_schema_evolution] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[ambiguous_join_col] (batchId=247)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242)
org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9266/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9266/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9266/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 33 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12911046 - PreCommit-HIVE-Build

> Spark Partition Pruning Sink Operator can't target multiple Works
> -----------------------------------------------------------------
>
>                 Key: HIVE-17178
>                 URL: https://issues.apache.org/jira/browse/HIVE-17178
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Rui Li
>            Priority: Major
>         Attachments: HIVE-17178.1.patch, HIVE-17178.2.patch, HIVE-17178.3.patch, HIVE-17178.4.patch
>
>
> A Spark Partition Pruning Sink Operator cannot be used to target multiple Map Work objects. The entire DPP subtree (SEL-GBY-SPARKPRUNINGSINK) is duplicated if a single
[jira] [Commented] (HIVE-17178) Spark Partition Pruning Sink Operator can't target multiple Works
[ https://issues.apache.org/jira/browse/HIVE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368432#comment-16368432 ]

Hive QA commented on HIVE-17178:
--------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s{color} | {color:red} ql: The patch generated 3 new + 69 unchanged - 3 fixed = 72 total (was 72) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 56s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / e0bf12d |
| Default Java | 1.8.0_111 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9266/yetus/diff-checkstyle-ql.txt |
| asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9266/yetus/patch-asflicense-problems.txt |
| modules | C: itests ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9266/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |

This message was automatically generated.

> Spark Partition Pruning Sink Operator can't target multiple Works
> -----------------------------------------------------------------
>
>                 Key: HIVE-17178
>                 URL: https://issues.apache.org/jira/browse/HIVE-17178
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Rui Li
>            Priority: Major
>         Attachments: HIVE-17178.1.patch, HIVE-17178.2.patch, HIVE-17178.3.patch, HIVE-17178.4.patch
>
>
> A Spark Partition Pruning Sink Operator cannot be used to target multiple Map Work objects. The entire DPP subtree (SEL-GBY-SPARKPRUNINGSINK) is duplicated if a single table needs to be used to target multiple Map Works.
> The following query shows the issue:
> {code}
> set hive.spark.dynamic.partition.pruning=true;
> set hive.auto.convert.join=true;
> create table part_table_1 (col int) partitioned by (part_col int);
> create table part_table_2 (col int) partitioned by (part_col int);
> create table regular_table (col int);
> insert into table regular_table values (1);
> alter table part_table_1 add partition (part_col=1);
> insert into table part_table_1 partition (part_col=1) values (1), (2), (3), (4);
> alter table part_table_1 add partition (part_col=2);
> insert into table part_table_1 partition (part_col=2)
[jira] [Updated] (HIVE-18693) Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted
[ https://issues.apache.org/jira/browse/HIVE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Yeom updated HIVE-18693:
------------------------------
    Status: Open  (was: Patch Available)

> Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted
> ---------------------------------------------------------------------------------------------
>
>                 Key: HIVE-18693
>                 URL: https://issues.apache.org/jira/browse/HIVE-18693
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>            Reporter: Steve Yeom
>            Assignee: Steve Yeom
>            Priority: Major
>         Attachments: HIVE-18693.01.patch
>
>
> TestTxnCommands2#writeBetweenWorkerAndCleaner with minor changes (changing the delete command to an insert command) fails on an MM table. Specifically, the last SELECT command returns wrong results. But this test works fine with a full ACID table.
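[Editor's note] A hedged sketch of the scenario this issue describes, since the test code itself is not quoted here. The table name and exact sequence are assumed, not taken from the patch; the table properties shown are the standard way to declare an insert-only (micromanaged) table:

{code}
-- Assumed repro sketch for the snapshot-isolation problem (not the actual test from HIVE-18693).
create table mm_table (a int) stored as orc
  tblproperties ('transactional'='true',
                 'transactional_properties'='insert_only');

insert into mm_table values (1), (2);

-- ... a second insert transaction writes into mm_table and is then aborted ...

-- Under snapshot isolation this SELECT must return only the committed rows;
-- the bug is that on an MM table it can see (or miscount) the aborted insert's data.
select * from mm_table;
{code}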
[jira] [Commented] (HIVE-17178) Spark Partition Pruning Sink Operator can't target multiple Works
[ https://issues.apache.org/jira/browse/HIVE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368421#comment-16368421 ]

Rui Li commented on HIVE-17178:
-------------------------------

Updated the test to give the tables different sizes, so that the query plan should be more deterministic.
Tried {{bucketizedhiveinputformat}} locally and it fails due to an OOME. It fails on master too, so it is not related to the patch here. {{bucketmapjoin6}} and {{dynamic_rdd_cache}} cannot be reproduced locally. {{spark_opt_shuffle_serde}} should have already been fixed.

> Spark Partition Pruning Sink Operator can't target multiple Works
> -----------------------------------------------------------------
>
>                 Key: HIVE-17178
>                 URL: https://issues.apache.org/jira/browse/HIVE-17178
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Rui Li
>            Priority: Major
>         Attachments: HIVE-17178.1.patch, HIVE-17178.2.patch, HIVE-17178.3.patch, HIVE-17178.4.patch
>
>
> A Spark Partition Pruning Sink Operator cannot be used to target multiple Map Work objects. The entire DPP subtree (SEL-GBY-SPARKPRUNINGSINK) is duplicated if a single table needs to be used to target multiple Map Works.
> The following query shows the issue:
> {code}
> set hive.spark.dynamic.partition.pruning=true;
> set hive.auto.convert.join=true;
> create table part_table_1 (col int) partitioned by (part_col int);
> create table part_table_2 (col int) partitioned by (part_col int);
> create table regular_table (col int);
> insert into table regular_table values (1);
> alter table part_table_1 add partition (part_col=1);
> insert into table part_table_1 partition (part_col=1) values (1), (2), (3), (4);
> alter table part_table_1 add partition (part_col=2);
> insert into table part_table_1 partition (part_col=2) values (1), (2), (3), (4);
> alter table part_table_2 add partition (part_col=1);
> insert into table part_table_2 partition (part_col=1) values (1), (2), (3), (4);
> alter table part_table_2 add partition (part_col=2);
> insert into table part_table_2 partition (part_col=2) values (1), (2), (3), (4);
> explain select * from regular_table, part_table_1, part_table_2 where regular_table.col = part_table_1.part_col and regular_table.col = part_table_2.part_col;
> {code}
> The explain plan is
> {code}
> STAGE DEPENDENCIES:
>   Stage-2 is a root stage
>   Stage-1 depends on stages: Stage-2
>   Stage-0 depends on stages: Stage-1
>
> STAGE PLANS:
>   Stage: Stage-2
>     Spark
>       A masked pattern was here
>       Vertices:
>         Map 1
>             Map Operator Tree:
>                 TableScan
>                   alias: regular_table
>                   Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: col is not null (type: boolean)
>                     Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       expressions: col (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                       Spark HashTable Sink Operator
>                         keys:
>                           0 _col0 (type: int)
>                           1 _col1 (type: int)
>                           2 _col1 (type: int)
>                       Select Operator
>                         expressions: _col0 (type: int)
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                         Group By Operator
>                           keys: _col0 (type: int)
>                           mode: hash
>                           outputColumnNames: _col0
>                           Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                           Spark Partition Pruning Sink Operator
>                             partition key expr: part_col
>                             Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                             target column name: part_col
>                             target work: Map 2
>                       Select Operator
>                         expressions: _col0 (type: int)
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                         Group By Operator
>                           keys: _col0 (type: int)
>                           mode: hash
>                           outputColumnNames: _col0
>                           Statistics: Num rows: 1 Data
[jira] [Updated] (HIVE-17178) Spark Partition Pruning Sink Operator can't target multiple Works
[ https://issues.apache.org/jira/browse/HIVE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HIVE-17178:
--------------------------
    Attachment: HIVE-17178.4.patch

> Spark Partition Pruning Sink Operator can't target multiple Works
> -----------------------------------------------------------------
>
>                 Key: HIVE-17178
>                 URL: https://issues.apache.org/jira/browse/HIVE-17178
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Rui Li
>            Priority: Major
>         Attachments: HIVE-17178.1.patch, HIVE-17178.2.patch, HIVE-17178.3.patch, HIVE-17178.4.patch
>
>
> A Spark Partition Pruning Sink Operator cannot be used to target multiple Map Work objects. The entire DPP subtree (SEL-GBY-SPARKPRUNINGSINK) is duplicated if a single table needs to be used to target multiple Map Works.
> The following query shows the issue:
> {code}
> set hive.spark.dynamic.partition.pruning=true;
> set hive.auto.convert.join=true;
> create table part_table_1 (col int) partitioned by (part_col int);
> create table part_table_2 (col int) partitioned by (part_col int);
> create table regular_table (col int);
> insert into table regular_table values (1);
> alter table part_table_1 add partition (part_col=1);
> insert into table part_table_1 partition (part_col=1) values (1), (2), (3), (4);
> alter table part_table_1 add partition (part_col=2);
> insert into table part_table_1 partition (part_col=2) values (1), (2), (3), (4);
> alter table part_table_2 add partition (part_col=1);
> insert into table part_table_2 partition (part_col=1) values (1), (2), (3), (4);
> alter table part_table_2 add partition (part_col=2);
> insert into table part_table_2 partition (part_col=2) values (1), (2), (3), (4);
> explain select * from regular_table, part_table_1, part_table_2 where regular_table.col = part_table_1.part_col and regular_table.col = part_table_2.part_col;
> {code}
> The explain plan is
> {code}
> STAGE DEPENDENCIES:
>   Stage-2 is a root stage
>   Stage-1 depends on stages: Stage-2
>   Stage-0 depends on stages: Stage-1
>
> STAGE PLANS:
>   Stage: Stage-2
>     Spark
>       A masked pattern was here
>       Vertices:
>         Map 1
>             Map Operator Tree:
>                 TableScan
>                   alias: regular_table
>                   Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: col is not null (type: boolean)
>                     Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       expressions: col (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                       Spark HashTable Sink Operator
>                         keys:
>                           0 _col0 (type: int)
>                           1 _col1 (type: int)
>                           2 _col1 (type: int)
>                       Select Operator
>                         expressions: _col0 (type: int)
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                         Group By Operator
>                           keys: _col0 (type: int)
>                           mode: hash
>                           outputColumnNames: _col0
>                           Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                           Spark Partition Pruning Sink Operator
>                             partition key expr: part_col
>                             Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                             target column name: part_col
>                             target work: Map 2
>                       Select Operator
>                         expressions: _col0 (type: int)
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                         Group By Operator
>                           keys: _col0 (type: int)
>                           mode: hash
>                           outputColumnNames: _col0
>                           Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                           Spark Partition Pruning Sink Operator
>                             partition key expr: part_col
>                             Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                             target column name: part_col
[jira] [Commented] (HIVE-18051) qfiles: dataset support
[ https://issues.apache.org/jira/browse/HIVE-18051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368397#comment-16368397 ]

Hive QA commented on HIVE-18051:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12911034/HIVE-18051.10.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 13791 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[testdataset] (batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[testdataset_2] (batchId=16)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242)
org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9265/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9265/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9265/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12911034 - PreCommit-HIVE-Build

> qfiles: dataset support
> -----------------------
>
>                 Key: HIVE-18051
>                 URL: https://issues.apache.org/jira/browse/HIVE-18051
>             Project: Hive
>          Issue Type: Improvement
>          Components: Testing Infrastructure
>            Reporter: Zoltan Haindrich
>            Assignee: Laszlo Bodor
>            Priority: Major
>         Attachments: HIVE-18051.01.patch, HIVE-18051.02.patch, HIVE-18051.03.patch, HIVE-18051.04.patch, HIVE-18051.05.patch, HIVE-18051.06.patch, HIVE-18051.07.patch, HIVE-18051.08.patch, HIVE-18051.09.patch, HIVE-18051.10.patch
>
>
> it would be great to have some kind of test dataset support; currently there is {{q_test_init.sql}}, which is quite large, and I often override it with an invalid string because I write independent qtests most of the time
[jira] [Commented] (HIVE-18051) qfiles: dataset support
[ https://issues.apache.org/jira/browse/HIVE-18051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368389#comment-16368389 ]

Hive QA commented on HIVE-18051:
--------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 47s{color} | {color:red} root: The patch generated 7 new + 167 unchanged - 9 fixed = 174 total (was 176) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s{color} | {color:red} itests/util: The patch generated 7 new + 162 unchanged - 9 fixed = 169 total (was 171) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 11s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / e0bf12d |
| Default Java | 1.8.0_111 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9265/yetus/diff-checkstyle-root.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9265/yetus/diff-checkstyle-itests_util.txt |
| asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9265/yetus/patch-asflicense-problems.txt |
| modules | C: . itests/util ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9265/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |

This message was automatically generated.

> qfiles: dataset support
> -----------------------
>
>                 Key: HIVE-18051
>                 URL: https://issues.apache.org/jira/browse/HIVE-18051
>             Project: Hive
>          Issue Type: Improvement
>          Components: Testing Infrastructure
>            Reporter: Zoltan Haindrich
>            Assignee: Laszlo Bodor
>            Priority: Major
>         Attachments: HIVE-18051.01.patch, HIVE-18051.02.patch, HIVE-18051.03.patch, HIVE-18051.04.patch, HIVE-18051.05.patch, HIVE-18051.06.patch, HIVE-18051.07.patch, HIVE-18051.08.patch, HIVE-18051.09.patch, HIVE-18051.10.patch
>
>
> it would be great to have some kind of test dataset support; currently there is {{q_test_init.sql}}, which is quite large, and I often override it with an invalid string because I write independent qtests most of the time - and the load of {{src}} and other tables is just a waste of time for me; not to mention that the loading of those tables may also trigger breakpoints - which is a bit annoying.
>
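[Editor's note] The dataset support proposed here would let a qfile declare only the tables it needs instead of running the whole {{q_test_init.sql}}. A hypothetical qfile sketch of the idea (the {{--! qt:dataset:...}} annotation and the dataset name {{src}} are assumptions based on the test names in the QA run, e.g. {{testdataset}}; check the final patch for the exact annotation form):

{code}
--! qt:dataset:src
-- Only the declared 'src' dataset is loaded for this test,
-- instead of the full init script with every table.
select count(*) from src;
{code}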
[jira] [Commented] (HIVE-18433) Upgrade version of com.fasterxml.jackson
[ https://issues.apache.org/jira/browse/HIVE-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368369#comment-16368369 ] Hive QA commented on HIVE-18433:
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12911032/HIVE-18433.3.patch
{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 38 failed/errored test(s), 13785 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druid_timestamptz] (batchId=248)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test1] (batchId=248)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_insert] (batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221)
org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testTimeZone (batchId=256)
org.apache.hadoop.hive.druid.serde.TestDruidSerDe.testDruidDeserializer (batchId=256)
org.apache.hadoop.hive.druid.serde.TestDruidSerDe.testDruidObjectDeserializer (batchId=256)
org.apache.hadoop.hive.druid.serde.TestDruidSerDe.testDruidObjectSerializer (batchId=256)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded] (batchId=205)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242)
org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9264/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9264/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9264/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 38 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12911032 - PreCommit-HIVE-Build
> Upgrade version of com.fasterxml.jackson
>
> Key: HIVE-18433
> URL: https://issues.apache.org/jira/browse/HIVE-18433
> Project: Hive
> Issue Type:
[jira] [Commented] (HIVE-18433) Upgrade version of com.fasterxml.jackson
[ https://issues.apache.org/jira/browse/HIVE-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368367#comment-16368367 ] Hive QA commented on HIVE-18433:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 10s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 52 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 44s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / e0bf12d |
| Default Java | 1.8.0_111 |
| asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9264/yetus/patch-asflicense-problems.txt |
| modules | C: common . druid-handler hcatalog/core hcatalog/server-extensions hcatalog/webhcat/svr itests/hive-blobstore itests/qtest-druid ql spark-client standalone-metastore testutils/ptest2 U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9264/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |
This message was automatically generated.
> Upgrade version of com.fasterxml.jackson
>
> Key: HIVE-18433
> URL: https://issues.apache.org/jira/browse/HIVE-18433
> Project: Hive
> Issue Type: Task
> Reporter: Sahil Takiar
> Assignee: Janaki Lahorani
> Priority: Major
> Attachments: HIVE-18433.1.patch, HIVE-18433.2.patch, HIVE-18433.3.patch
>
> Let's upgrade to version 2.9.2
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18051) qfiles: dataset support
[ https://issues.apache.org/jira/browse/HIVE-18051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor updated HIVE-18051:
Attachment: HIVE-18051.10.patch
> qfiles: dataset support
> ---
>
> Key: HIVE-18051
> URL: https://issues.apache.org/jira/browse/HIVE-18051
> Project: Hive
> Issue Type: Improvement
> Components: Testing Infrastructure
> Reporter: Zoltan Haindrich
> Assignee: Laszlo Bodor
> Priority: Major
> Attachments: HIVE-18051.01.patch, HIVE-18051.02.patch, HIVE-18051.03.patch, HIVE-18051.04.patch, HIVE-18051.05.patch, HIVE-18051.06.patch, HIVE-18051.07.patch, HIVE-18051.08.patch, HIVE-18051.09.patch, HIVE-18051.10.patch
>
> It would be great to have some kind of test dataset support. Currently there is {{q_test_init.sql}}, which is quite large; I often override it with an invalid string because I write independent qtests most of the time, and loading {{src}} and the other tables is just a waste of time for me, not to mention that loading those tables may also trigger breakpoints, which is a bit annoying.
> Most tests use "only" the {{src}} table and possibly 2 others, yet the main init script creates a whole bunch of tables. Meanwhile, quite a few other tests could also benefit from a more general feature; for example, the creation of {{bucket_small}} is present in 20 q files.
> The proposal is to let qfiles be annotated with metadata naming the datasets they need:
> {code}
> --! qt:dataset:src,bucket_small
> {code}
> Proposal for storing a dataset:
> * the loader script would be at {{data/datasets/__NAME__/load.hive.sql}}
> * the table data could be stored under that location
> A draft about this and other qfiles-related ideas:
> https://docs.google.com/document/d/1KtcIx8ggL9LxDintFuJo8NQuvNWkmtvv_ekbWrTLNGc/edit?usp=sharing
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
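Under the layout proposed in HIVE-18051 above, a loader script for the ubiquitous {{src}} table might look like the following. This is a hypothetical sketch, not part of the patch: the exact path, data file, and DDL are assumptions patterned on what the existing {{q_test_init.sql}} does for {{src}}.

```sql
-- Hypothetical data/datasets/src/load.hive.sql, assuming the layout
-- proposed above. A qfile would then request this dataset with the
-- annotation:  --! qt:dataset:src
CREATE TABLE src (key STRING, value STRING) STORED AS TEXTFILE;

-- kv1.txt is the file the current init script loads into src; under the
-- proposal the data could instead live next to this loader script.
LOAD DATA LOCAL INPATH '../../data/files/kv1.txt' INTO TABLE src;
```

One script per dataset would mean a qfile that only needs {{src}} pays the cost of loading one table rather than running the whole shared init script.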
[jira] [Updated] (HIVE-18433) Upgrade version of com.fasterxml.jackson
[ https://issues.apache.org/jira/browse/HIVE-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18433:
---
Attachment: HIVE-18433.3.patch
> Upgrade version of com.fasterxml.jackson
>
> Key: HIVE-18433
> URL: https://issues.apache.org/jira/browse/HIVE-18433
> Project: Hive
> Issue Type: Task
> Reporter: Sahil Takiar
> Assignee: Janaki Lahorani
> Priority: Major
> Attachments: HIVE-18433.1.patch, HIVE-18433.2.patch, HIVE-18433.3.patch
>
> Let's upgrade to version 2.9.2
-- This message was sent by Atlassian JIRA (v7.6.3#76005)