[jira] [Updated] (HIVE-18582) MSCK REPAIR TABLE Throws MetaException

2018-01-30 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HIVE-18582:
---
Attachment: HIVE-18582.patch

>  MSCK REPAIR TABLE Throws MetaException
> --
>
> Key: HIVE-18582
> URL: https://issues.apache.org/jira/browse/HIVE-18582
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.1.1
>Reporter: liubangchen
>Priority: Major
> Attachments: HIVE-18582.patch
>
>
> While executing the query MSCK REPAIR TABLE tablename, I got this exception:
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1847)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:402)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> --
> Caused by: MetaException(message:Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.metastore.Warehouse.makeValsFromName(Warehouse.java:385)
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1845)
> {code}
> The table is PARTITIONED BY (log_date, vgameid).
> The data layout on HDFS is:
>  
> {code:java}
> /usr/hive/warehouse/a.db/tablename/log_date=2015063023
> drwxr-xr-x - root supergroup 0 2018-01-26 09:41 /usr/hive/warehouse/a.db/tablename/log_date=2015121309/vgameid=lyjt
> {code}
> The subdirectory log_date=2015063023 is empty.
> If I set hive.msck.path.validation=ignore, then MSCK REPAIR TABLE executes OK.
> Then I found this code:
> {code:java}
> private int msck(Hive db, MsckDesc msckDesc) {
>   CheckResult result = new CheckResult();
>   List<String> repairOutput = new ArrayList<String>();
>   try {
>     HiveMetaStoreChecker checker = new HiveMetaStoreChecker(db);
>     String[] names = Utilities.getDbTableName(msckDesc.getTableName());
>     checker.checkMetastore(names[0], names[1], msckDesc.getPartSpecs(), result);
>     List<CheckResult.PartitionResult> partsNotInMs = result.getPartitionsNotInMs();
>     if (msckDesc.isRepairPartitions() && !partsNotInMs.isEmpty()) {
>       // I think the bug is here
>       AbstractList<String> vals = null;
>       String settingStr = HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION);
>       boolean doValidate = !("ignore".equals(settingStr));
>       boolean doSkip = doValidate && "skip".equals(settingStr);
>       // The default setting is "throw"; assume doValidate && !doSkip means throw.
>       if (doValidate) {
>         // Validate that we can add partition without escaping. Escaping was originally intended
>         // to avoid creating invalid HDFS paths; however, if we escape the HDFS path (that we
>         // deem invalid but HDFS actually supports - it is possible to create HDFS paths with
>         // unprintable characters like ASCII 7), metastore will create another directory instead
>         // of the one we are trying to "repair" here.
>         Iterator<CheckResult.PartitionResult> iter = partsNotInMs.iterator();
>         while (iter.hasNext()) {
>           CheckResult.PartitionResult part = iter.next();
>           try {
>             vals = Warehouse.makeValsFromName(part.getPartitionName(), vals);
>           } catch (MetaException ex) {
>             throw new HiveException(ex);
>           }
>           for (String val : vals) {
>             String escapedPath = FileUtils.escapePathName(val);
>             assert escapedPath != null;
>             if (escapedPath.equals(val)) continue;
>             String errorMsg = "Repair: Cannot add partition " + msckDesc.getTableName()
>                 + ':' + part.getPartitionName() + " due to invalid characters in the name";
>             if (doSkip) {
>               repairOutput.add(errorMsg);
>               iter.remove();
>             } else {
>               throw new HiveException(errorMsg);
>             }
>           }
>         }
>       }
> {code}
> I think that if AbstractList<String> vals = null; is declared just after "while (iter.hasNext()) {" instead, it will work OK.
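>  
> A minimal standalone demo of that failure mode (the makeValsFromName below is a simplified mimic of the Warehouse method, not Hive's actual code): reusing the list across partitions with different component counts reproduces the exception above.
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> public class ValsReuseDemo {
>   // Simplified mimic: when a non-null list is passed in, its size must
>   // match the number of name components, as in Warehouse.makeValsFromName.
>   static List<String> makeValsFromName(String name, List<String> vals) {
>     String[] parts = name.split("/");
>     if (vals == null) {
>       vals = new ArrayList<String>();
>       for (int i = 0; i < parts.length; i++) {
>         vals.add(null);
>       }
>     } else if (vals.size() != parts.length) {
>       throw new IllegalStateException("Expected " + vals.size()
>           + " components, got " + parts.length + " (" + name + ")");
>     }
>     for (int i = 0; i < parts.length; i++) {
>       vals.set(i, parts[i].substring(parts[i].indexOf('=') + 1));
>     }
>     return vals;
>   }
>
>   public static void main(String[] args) {
>     List<String> vals = null; // declared outside the loop, as in msck()
>     // The first partition has 1 component, the second has 2: the reused
>     // list is sized 1, so the second call throws, matching the exception.
>     for (String name : new String[] {
>         "log_date=2015063023", "log_date=2015121309/vgameid=lyjt" }) {
>       vals = makeValsFromName(name, vals);
>       System.out.println(vals);
>     }
>   }
> }
> {code}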
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18048) Vectorization: Support Struct type with vectorization

2018-01-30 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346388#comment-16346388
 ] 

Ferdinand Xu commented on HIVE-18048:
-

LGTM, +1 pending the tests.

> Vectorization: Support Struct type with vectorization
> -
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Improvement
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch, HIVE-18048.005.patch, 
> HIVE-18048.006.patch, HIVE-18048.007.patch
>
>
> Struct type is not supported in MapWork with vectorization; it should be 
> supported to improve performance.
> A new UDF will be added to access the fields of a Struct.
> Note:
>  * Nested complex types won't be tested in this ticket.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-18582) MSCK REPAIR TABLE Throws MetaException

2018-01-30 Thread liubangchen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346386#comment-16346386
 ] 

liubangchen edited comment on HIVE-18582 at 1/31/18 7:54 AM:
-

We can add a validation method and call it from the findUnknownPartitions method of class HiveMetaStoreChecker:

 
{code:java}
void findUnknownPartitions(Table table, Set<Path> partPaths,
    CheckResult result) throws IOException, HiveException {

  Path tablePath = table.getPath();
  // now check the table folder and see if we find anything
  // that isn't in the metastore
  Set<Path> allPartDirs = new HashSet<Path>();
  getAllLeafDirs(tablePath, allPartDirs);
  // don't want the table dir
  allPartDirs.remove(tablePath);

  // remove the partition paths we know about
  allPartDirs.removeAll(partPaths);

  // we should now only have the unexpected folders left
  for (Path partPath : allPartDirs) {
    if (!isValidPartitionPath(table, partPath)) {
      LOG.warn("invalid data path: " + partPath.toString());
      continue;
    }
    FileSystem fs = partPath.getFileSystem(conf);
    String partitionName = getPartitionName(fs.makeQualified(tablePath), partPath);

    if (partitionName != null) {
      PartitionResult pr = new PartitionResult();
      pr.setPartitionName(partitionName);
      pr.setTableName(table.getTableName());

      result.getPartitionsNotInMs().add(pr);
    }
  }
}

// Returns true only if every partition column of the table appears as a
// name=value component in the path below the table directory.
boolean isValidPartitionPath(Table table, Path partPath) {
  Path tablePath = table.getPath();
  String partPathInfo = partPath.toString();
  String partInfo = partPathInfo.substring(tablePath.toString().length() + 1);
  if ("".equals(partInfo)) {
    return false;
  }
  String[] parts = partInfo.split("/");
  if (parts.length == 0) {
    return false;
  }
  Map<String, String> partsMap = new java.util.HashMap<String, String>();
  for (String part : parts) {
    int index = part.indexOf("=");
    if (index < 0) {
      continue;
    }
    String partName = part.substring(0, index);
    partsMap.put(partName, partName);
  }
  for (FieldSchema field : table.getPartCols()) {
    String val = partsMap.get(field.getName());
    if (val == null || val.isEmpty()) {
      return false;
    }
  }
  return true;
}
{code}

Let me submit the patch.
 


was (Author: liubangchen):
We can add a validation method and call it from the findUnknownPartitions method of class HiveMetaStoreChecker:

 
{code:java}
void findUnknownPartitions(Table table, Set<Path> partPaths,
    CheckResult result) throws IOException, HiveException {

  Path tablePath = table.getPath();
  // now check the table folder and see if we find anything
  // that isn't in the metastore
  Set<Path> allPartDirs = new HashSet<Path>();
  getAllLeafDirs(tablePath, allPartDirs);
  // don't want the table dir
  allPartDirs.remove(tablePath);

  // remove the partition paths we know about
  allPartDirs.removeAll(partPaths);

  // we should now only have the unexpected folders left
  for (Path partPath : allPartDirs) {
    if (!isValidPartitionPath(table, partPath)) {
      LOG.warn("invalid data path: " + partPath.toString());
      continue;
    }
    FileSystem fs = partPath.getFileSystem(conf);
    String partitionName = getPartitionName(fs.makeQualified(tablePath), partPath);

    if (partitionName != null) {
      PartitionResult pr = new PartitionResult();
      pr.setPartitionName(partitionName);
      pr.setTableName(table.getTableName());

      result.getPartitionsNotInMs().add(pr);
    }
  }
}

// Returns true only if every partition column of the table appears as a
// name=value component in the path below the table directory.
boolean isValidPartitionPath(Table table, Path partPath) {
  Path tablePath = table.getPath();
  String partPathInfo = partPath.toString();
  String partInfo = partPathInfo.substring(tablePath.toString().length() + 1);
  if ("".equals(partInfo)) {
    return false;
  }
  String[] parts = partInfo.split("/");
  if (parts.length == 0) {
    return false;
  }
  Map<String, String> partsMap = new java.util.HashMap<String, String>();
  for (String part : parts) {
    int index = part.indexOf("=");
    if (index < 0) {
      continue;
    }
    String partName = part.substring(0, index);
    partsMap.put(partName, partName);
  }
  for (FieldSchema field : table.getPartCols()) {
    String val = partsMap.get(field.getName());
    if (val == null || val.isEmpty()) {
      return false;
    }
  }
  return true;
}
{code}

Let me submit the patch.
 

>  MSCK REPAIR TABLE Throws MetaException
> --
>
> Key: HIVE-18582
> URL: https://issues.apache.org/jira/browse/HIVE-18582
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.1.1
>Reporter: liubangchen
>Priority: Major
>
> 

[jira] [Updated] (HIVE-18582) MSCK REPAIR TABLE Throws MetaException

2018-01-30 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HIVE-18582:
---
Status: Patch Available  (was: Open)

>  MSCK REPAIR TABLE Throws MetaException
> --
>
> Key: HIVE-18582
> URL: https://issues.apache.org/jira/browse/HIVE-18582
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.1.1
>Reporter: liubangchen
>Priority: Major
>
> While executing the query MSCK REPAIR TABLE tablename, I got this exception:
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1847)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:402)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> --
> Caused by: MetaException(message:Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.metastore.Warehouse.makeValsFromName(Warehouse.java:385)
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1845)
> {code}
> The table is PARTITIONED BY (log_date, vgameid).
> The data layout on HDFS is:
>  
> {code:java}
> /usr/hive/warehouse/a.db/tablename/log_date=2015063023
> drwxr-xr-x - root supergroup 0 2018-01-26 09:41 /usr/hive/warehouse/a.db/tablename/log_date=2015121309/vgameid=lyjt
> {code}
> The subdirectory log_date=2015063023 is empty.
> If I set hive.msck.path.validation=ignore, then MSCK REPAIR TABLE executes OK.
> Then I found this code:
> {code:java}
> private int msck(Hive db, MsckDesc msckDesc) {
>   CheckResult result = new CheckResult();
>   List<String> repairOutput = new ArrayList<String>();
>   try {
>     HiveMetaStoreChecker checker = new HiveMetaStoreChecker(db);
>     String[] names = Utilities.getDbTableName(msckDesc.getTableName());
>     checker.checkMetastore(names[0], names[1], msckDesc.getPartSpecs(), result);
>     List<CheckResult.PartitionResult> partsNotInMs = result.getPartitionsNotInMs();
>     if (msckDesc.isRepairPartitions() && !partsNotInMs.isEmpty()) {
>       // I think the bug is here
>       AbstractList<String> vals = null;
>       String settingStr = HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION);
>       boolean doValidate = !("ignore".equals(settingStr));
>       boolean doSkip = doValidate && "skip".equals(settingStr);
>       // The default setting is "throw"; assume doValidate && !doSkip means throw.
>       if (doValidate) {
>         // Validate that we can add partition without escaping. Escaping was originally intended
>         // to avoid creating invalid HDFS paths; however, if we escape the HDFS path (that we
>         // deem invalid but HDFS actually supports - it is possible to create HDFS paths with
>         // unprintable characters like ASCII 7), metastore will create another directory instead
>         // of the one we are trying to "repair" here.
>         Iterator<CheckResult.PartitionResult> iter = partsNotInMs.iterator();
>         while (iter.hasNext()) {
>           CheckResult.PartitionResult part = iter.next();
>           try {
>             vals = Warehouse.makeValsFromName(part.getPartitionName(), vals);
>           } catch (MetaException ex) {
>             throw new HiveException(ex);
>           }
>           for (String val : vals) {
>             String escapedPath = FileUtils.escapePathName(val);
>             assert escapedPath != null;
>             if (escapedPath.equals(val)) continue;
>             String errorMsg = "Repair: Cannot add partition " + msckDesc.getTableName()
>                 + ':' + part.getPartitionName() + " due to invalid characters in the name";
>             if (doSkip) {
>               repairOutput.add(errorMsg);
>               iter.remove();
>             } else {
>               throw new HiveException(errorMsg);
>             }
>           }
>         }
>       }
> {code}
> I think that if AbstractList<String> vals = null; is declared just after "while (iter.hasNext()) {" instead, it will work OK.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18582) MSCK REPAIR TABLE Throws MetaException

2018-01-30 Thread liubangchen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346386#comment-16346386
 ] 

liubangchen commented on HIVE-18582:


We can add a validation method and call it from the findUnknownPartitions method of class HiveMetaStoreChecker:

 
{code:java}
void findUnknownPartitions(Table table, Set<Path> partPaths,
    CheckResult result) throws IOException, HiveException {

  Path tablePath = table.getPath();
  // now check the table folder and see if we find anything
  // that isn't in the metastore
  Set<Path> allPartDirs = new HashSet<Path>();
  getAllLeafDirs(tablePath, allPartDirs);
  // don't want the table dir
  allPartDirs.remove(tablePath);

  // remove the partition paths we know about
  allPartDirs.removeAll(partPaths);

  // we should now only have the unexpected folders left
  for (Path partPath : allPartDirs) {
    if (!isValidPartitionPath(table, partPath)) {
      LOG.warn("invalid data path: " + partPath.toString());
      continue;
    }
    FileSystem fs = partPath.getFileSystem(conf);
    String partitionName = getPartitionName(fs.makeQualified(tablePath), partPath);

    if (partitionName != null) {
      PartitionResult pr = new PartitionResult();
      pr.setPartitionName(partitionName);
      pr.setTableName(table.getTableName());

      result.getPartitionsNotInMs().add(pr);
    }
  }
}

// Returns true only if every partition column of the table appears as a
// name=value component in the path below the table directory.
boolean isValidPartitionPath(Table table, Path partPath) {
  Path tablePath = table.getPath();
  String partPathInfo = partPath.toString();
  String partInfo = partPathInfo.substring(tablePath.toString().length() + 1);
  if ("".equals(partInfo)) {
    return false;
  }
  String[] parts = partInfo.split("/");
  if (parts.length == 0) {
    return false;
  }
  Map<String, String> partsMap = new java.util.HashMap<String, String>();
  for (String part : parts) {
    int index = part.indexOf("=");
    if (index < 0) {
      continue;
    }
    String partName = part.substring(0, index);
    partsMap.put(partName, partName);
  }
  for (FieldSchema field : table.getPartCols()) {
    String val = partsMap.get(field.getName());
    if (val == null || val.isEmpty()) {
      return false;
    }
  }
  return true;
}
{code}

Let me submit the patch.
 

>  MSCK REPAIR TABLE Throws MetaException
> --
>
> Key: HIVE-18582
> URL: https://issues.apache.org/jira/browse/HIVE-18582
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.1.1
>Reporter: liubangchen
>Priority: Major
>
> While executing the query MSCK REPAIR TABLE tablename, I got this exception:
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1847)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:402)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> --
> Caused by: MetaException(message:Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.metastore.Warehouse.makeValsFromName(Warehouse.java:385)
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1845)
> {code}
> The table is PARTITIONED BY (log_date, vgameid).
> The data layout on HDFS is:
>  
> {code:java}
> /usr/hive/warehouse/a.db/tablename/log_date=2015063023
> drwxr-xr-x - root supergroup 0 2018-01-26 09:41 /usr/hive/warehouse/a.db/tablename/log_date=2015121309/vgameid=lyjt
> {code}
> The subdirectory log_date=2015063023 is empty.
> If I set hive.msck.path.validation=ignore, then MSCK REPAIR TABLE executes OK.
> Then I found this code:
> {code:java}
> private int msck(Hive db, MsckDesc msckDesc) {
>   CheckResult result = new CheckResult();
>   List<String> repairOutput = new ArrayList<String>();
>   try {
>     HiveMetaStoreChecker checker = new HiveMetaStoreChecker(db);
>     String[] names = Utilities.getDbTableName(msckDesc.getTableName());
>     checker.checkMetastore(names[0], names[1], msckDesc.getPartSpecs(), result);
>     List<CheckResult.PartitionResult> partsNotInMs = result.getPartitionsNotInMs();
>     if (msckDesc.isRepairPartitions() && !partsNotInMs.isEmpty()) {
>       // I think the bug is here
>       AbstractList<String> vals = null;
>       String settingStr 

[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-30 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346383#comment-16346383
 ] 

Ferdinand Xu commented on HIVE-18553:
-

As I currently understand it, rename or type conversion may require index-based 
access, which may not work with the current workaround on the vectorization path. 
We can spend some time investigating full support; the current patch can 
be considered a quick workaround. Any thoughts on this?
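
For context, here is a minimal sketch of name-based column resolution (one plausible reading of what such a workaround does; hypothetical code, not the actual patch) and why a rename would defeat it while a newly added column works:

{code:java}
import java.util.Arrays;
import java.util.List;

public class SchemaResolveDemo {
  // Resolve each requested (table) column to an index in the file schema by
  // name; -1 means "not in this file", which the reader fills with nulls.
  static int[] resolveByName(List<String> tableCols, List<String> fileCols) {
    int[] idx = new int[tableCols.size()];
    for (int i = 0; i < tableCols.size(); i++) {
      idx[i] = fileCols.indexOf(tableCols.get(i));
    }
    return idx;
  }

  public static void main(String[] args) {
    List<String> fileCols = Arrays.asList("i1", "i2", "t1", "t2");
    // A newly added column "ts" is simply absent from old files: index -1.
    System.out.println(Arrays.toString(
        resolveByName(Arrays.asList("t1", "t2", "i1", "i2", "ts"), fileCols)));
    // But a renamed column (say t1 -> tiny1) also resolves to -1, silently
    // treating existing data as missing; handling renames would need
    // index-based access to the file schema instead.
    System.out.println(Arrays.toString(
        resolveByName(Arrays.asList("tiny1", "t2"), fileCols)));
  }
}
{code}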

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18553.2.patch, HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> Parquet table to which new columns have been added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> The following exception is seen in the logs:
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3
> at org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199) ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185) ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52) ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]

[jira] [Commented] (HIVE-18536) IOW + DP is broken for insert-only ACID

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346368#comment-16346368
 ] 

Hive QA commented on HIVE-18536:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} common: The patch generated 1 new + 6 unchanged - 1 
fixed = 7 total (was 7) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 1 new + 359 unchanged - 3 
fixed = 360 total (was 362) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3e4adaa |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8941/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8941/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8941/yetus/whitespace-eol.txt 
|
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8941/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> IOW + DP is broken for insert-only ACID
> ---
>
> Key: HIVE-18536
> URL: https://issues.apache.org/jira/browse/HIVE-18536
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18536.01.patch, HIVE-18536.02.patch, 
> HIVE-18536.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18552) Split hive.strict.checks.large.query into two configs

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346351#comment-16346351
 ] 

Hive QA commented on HIVE-18552:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908453/HIVE-18552.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 12863 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestGetPartitions.testGetPartitionWithAuthInfoNoDbName[Embedded]
 (batchId=206)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=206)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hadoop.hive.ql.parse.TestQBSubQuery.testCheckAggOrWindowing 
(batchId=276)
org.apache.hadoop.hive.ql.parse.TestQBSubQuery.testExtractConjuncts 
(batchId=276)
org.apache.hadoop.hive.ql.parse.TestQBSubQuery.testExtractSubQueries 
(batchId=276)
org.apache.hadoop.hive.ql.parse.TestQBSubQuery.testRewriteOuterQueryWhere 
(batchId=276)
org.apache.hadoop.hive.ql.parse.TestQBSubQuery.testRewriteOuterQueryWhere2 
(batchId=276)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testRenamePartitionWithCM
 (batchId=228)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8940/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8940/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8940/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 31 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908453 - PreCommit-HIVE-Build

> Split hive.strict.checks.large.query into two configs
> -
>
> Key: HIVE-18552
> URL: https://issues.apache.org/jira/browse/HIVE-18552
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18552.1.patch, HIVE-18552.2.patch, 
> HIVE-18552.3.patch
>
>
> {{hive.strict.checks.large.query}} controls the strict checks that restrict 
> ORDER BY with no LIMIT, and scans of a partitioned table without a filter 
> on the partition columns.
> While both checks prevent "large" queries from being run, they control 
> very different behaviors. It would be better if users could control these 
> restrictions separately.
> Furthermore, 

[jira] [Commented] (HIVE-18031) Support replication for Alter Database operation.

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346321#comment-16346321
 ] 

ASF GitHub Bot commented on HIVE-18031:
---

Github user sankarh closed the pull request at:

https://github.com/apache/hive/pull/280


> Support replication for Alter Database operation.
> -
>
> Key: HIVE-18031
> URL: https://issues.apache.org/jira/browse/HIVE-18031
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-18031.01.patch, HIVE-18031.02.patch
>
>
> Currently, alter database operations that change the database properties or owner 
> info do not generate any events, so these changes do not get replicated.
> We need to add an event for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18478) Drop of temp table creating recycle files at CM path

2018-01-30 Thread mahesh kumar behera (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-18478:
---
Attachment: HIVE-18478.03.patch

> Drop of temp table creating recycle files at CM path
> 
>
> Key: HIVE-18478
> URL: https://issues.apache.org/jira/browse/HIVE-18478
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18478.01.patch, HIVE-18478.02.patch, 
> HIVE-18478.03.patch
>
>
> The drop TEMP table operation invokes deleteDir, which moves the file to $CMROOT; 
> this is not needed, as temp tables need not be replicated.
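>  
> A minimal sketch of the guard this suggests (hypothetical helper names; Hive's actual deleteDir/change-management plumbing differs): only recycle dropped data into $CMROOT for tables that can be replicated.
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class TempTableDropSketch {
>   // Hypothetical stand-in for the move into $CMROOT that change management does.
>   static void recycleToCmRoot(FileSystem fs, Path data) throws IOException {
>     Path cmRoot = new Path("/cmroot"); // assumed location
>     fs.rename(data, new Path(cmRoot, data.getName()));
>   }
>
>   // Temp tables are never replicated, so their data can be deleted
>   // directly instead of being recycled under $CMROOT.
>   static void deleteTableData(FileSystem fs, Path tablePath, boolean isTemporary)
>       throws IOException {
>     if (isTemporary) {
>       fs.delete(tablePath, true);
>     } else {
>       recycleToCmRoot(fs, tablePath);
>     }
>   }
> }
> {code}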



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18552) Split hive.strict.checks.large.query into two configs

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346317#comment-16346317
 ] 

Hive QA commented on HIVE-18552:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3e4adaa |
| Default Java | 1.8.0_111 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8940/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Split hive.strict.checks.large.query into two configs
> -
>
> Key: HIVE-18552
> URL: https://issues.apache.org/jira/browse/HIVE-18552
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18552.1.patch, HIVE-18552.2.patch, 
> HIVE-18552.3.patch
>
>
> {{hive.strict.checks.large.query}} controls the strict checks that restrict 
> ORDER BY with no LIMIT, and scans of a partitioned table without a filter 
> on the partition columns.
> While both checks prevent "large" queries from being run, they control 
> very different behaviors. It would be better if users could control these 
> restrictions separately.
> Furthermore, many users make the mistake of abusing partitioned tables and 
> often end up running queries that do full-table scans of partitioned tables. 
> This can lead to lots of issues for Hive, e.g. OOM issues because so many 
> partitions are loaded in memory. So it would be good if we enabled this 
> restriction by default.
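>  
> A sketch of the split (the two property names below are assumptions invented for illustration, not necessarily what the patch finally uses): each check gets its own toggle that can be read independently.
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> public class StrictChecksSketch {
>   public static void main(String[] args) {
>     // Stand-in for HiveConf, which extends Hadoop's Configuration.
>     Configuration conf = new Configuration(false);
>     conf.setBoolean("hive.strict.checks.orderby.no.limit", true);    // assumed name
>     conf.setBoolean("hive.strict.checks.no.partition.filter", true); // assumed name
>     // Compiler-side checks would now consult each flag separately
>     // instead of the single combined hive.strict.checks.large.query.
>     System.out.println(conf.getBoolean("hive.strict.checks.orderby.no.limit", false));
>     System.out.println(conf.getBoolean("hive.strict.checks.no.partition.filter", false));
>   }
> }
> {code}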



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18513) Query results caching

2018-01-30 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346306#comment-16346306
 ] 

Jason Dere commented on HIVE-18513:
---

Uploaded patch v4 with updates from RB comments.

> Query results caching
> -
>
> Key: HIVE-18513
> URL: https://issues.apache.org/jira/browse/HIVE-18513
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18513.1.patch, HIVE-18513.2.patch, 
> HIVE-18513.3.patch, HIVE-18513.4.patch
>
>
> Add a query results cache that can save the results of an executed Hive query 
> for reuse on subsequent queries. This may be useful in cases where the same 
> query is issued many times, since Hive can return the results of a 
> cached query rather than having to execute the full query on the cluster.
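>  
> A minimal sketch of the core idea (hypothetical and much simpler than the patch: no invalidation, scoping, or eviction): cache result sets keyed by a normalized form of the query text.
> {code:java}
> import java.util.List;
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
>
> public class QueryResultsCacheSketch {
>   private final Map<String, List<String[]>> cache =
>       new ConcurrentHashMap<String, List<String[]>>();
>
>   // Normalize whitespace/case so trivially different spellings of the same
>   // query hit one entry; real normalization would work on the parsed plan.
>   private static String key(String query) {
>     return query.trim().replaceAll("\\s+", " ").toLowerCase();
>   }
>
>   public List<String[]> lookup(String query) {
>     return cache.get(key(query));   // null means miss: run the query
>   }
>
>   public void store(String query, List<String[]> rows) {
>     cache.put(key(query), rows);    // reused by subsequent identical queries
>   }
> }
> {code}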



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18513) Query results caching

2018-01-30 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346305#comment-16346305
 ] 

Jason Dere commented on HIVE-18513:
---

Created a doc at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963441, 
sorry I forgot to post that to this Jira.

> Query results caching
> -
>
> Key: HIVE-18513
> URL: https://issues.apache.org/jira/browse/HIVE-18513
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18513.1.patch, HIVE-18513.2.patch, 
> HIVE-18513.3.patch, HIVE-18513.4.patch
>
>
> Add a query results cache that can save the results of an executed Hive query 
> for reuse on subsequent queries. This may be useful in cases where the same 
> query is issued many times, since Hive can return the results of a 
> cached query rather than having to execute the full query on the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18513) Query results caching

2018-01-30 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18513:
--
Attachment: HIVE-18513.4.patch

> Query results caching
> -
>
> Key: HIVE-18513
> URL: https://issues.apache.org/jira/browse/HIVE-18513
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18513.1.patch, HIVE-18513.2.patch, 
> HIVE-18513.3.patch, HIVE-18513.4.patch
>
>
> Add a query results cache that can save the results of an executed Hive query 
> for reuse on subsequent queries. This may be useful in cases where the same 
> query is issued many times, since Hive can return the results of a 
> cached query rather than having to execute the full query on the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15631) Optimize for hive client logs , you can filter the log for each session itself.

2018-01-30 Thread tartarus (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346298#comment-16346298
 ] 

tartarus commented on HIVE-15631:
-

[~prasanth_j]  Yes, I want to display the session ID on the console.

The session ID appears only in the log file; to quickly find the corresponding 
log lines, my implementation keys on the session ID.

I created a new issue: 
[HIVE-18543|https://issues.apache.org/jira/projects/HIVE/issues/HIVE-18543?filter=allopenissues]
 

> Optimize for hive client logs , you can filter the log for each session 
> itself.
> ---
>
> Key: HIVE-15631
> URL: https://issues.apache.org/jira/browse/HIVE-15631
> Project: Hive
>  Issue Type: Improvement
>  Components: CLI, Clients, Hive
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-15631.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We have several Hadoop clusters, with about 15 thousand nodes in total. Every day 
> we use Hive to submit more than 100 thousand jobs. 
> So every client host accumulates a large Hive log file every day, but I 
> cannot tell which lines were logged by the session I submitted. 
> So I hope to print the hive.session.id on every line of the logs, so that I 
> can use grep to find the lines my session produced. 
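>  
> One standard way to get a per-session ID onto every log line with Log4j 2 (a sketch of the general technique, not necessarily what the attached patch does): put the ID into the ThreadContext and reference it from the appender layout pattern with %X{hive.session.id}.
> {code:java}
> import org.apache.logging.log4j.LogManager;
> import org.apache.logging.log4j.Logger;
> import org.apache.logging.log4j.ThreadContext;
>
> public class SessionIdLoggingSketch {
>   private static final Logger LOG = LogManager.getLogger(SessionIdLoggingSketch.class);
>
>   public static void main(String[] args) {
>     // Stamp every log line from this thread with the session id; the
>     // layout pattern would include %X{hive.session.id}.
>     ThreadContext.put("hive.session.id", "example-session-id"); // placeholder value
>     LOG.info("query compiled"); // this line now carries the session id
>     ThreadContext.remove("hive.session.id");
>   }
> }
> {code}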



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18587) insert DML event may attempt to calculate a checksum on directories

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346292#comment-16346292
 ] 

Hive QA commented on HIVE-18587:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908449/HIVE-18587.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 12862 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.mapreduce.TestHCatOutputFormat.testGetTableSchema 
(batchId=200)
org.apache.hive.hcatalog.mapreduce.TestHCatOutputFormat.testSetOutput 
(batchId=200)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8939/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8939/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8939/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908449 - PreCommit-HIVE-Build

> insert DML event may attempt to calculate a checksum on directories
> ---
>
> Key: HIVE-18587
> URL: https://issues.apache.org/jira/browse/HIVE-18587
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18587.patch
>
>
> Looks like in the union case, some code path may pass directories in newFiles. 
> Probably the legacy copyData/moveData; both seem to assume that these paths are 
> files, but do not actually enforce it.
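>  
> A minimal sketch of the kind of guard this implies (assumed shape; the actual fix may instead stop callers from putting directories in newFiles): check the status of each path before asking HDFS for a checksum.
> {code:java}
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.hadoop.fs.FileChecksum;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class InsertEventChecksumSketch {
>   // Compute checksums only for regular files; a directory in newFiles
>   // (as in the union case described above) would otherwise fail.
>   static List<FileChecksum> checksums(FileSystem fs, List<Path> newFiles)
>       throws IOException {
>     List<FileChecksum> result = new ArrayList<FileChecksum>();
>     for (Path p : newFiles) {
>       FileStatus status = fs.getFileStatus(p);
>       if (!status.isFile()) {
>         continue; // or recurse and checksum the directory's files instead
>       }
>       result.add(fs.getFileChecksum(p));
>     }
>     return result;
>   }
> }
> {code}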



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18589) java.io.IOException: Not enough history available

2018-01-30 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346282#comment-16346282
 ] 

Alexander Kolbasov commented on HIVE-18589:
---

What is the problem? Please add a description.

> java.io.IOException: Not enough history available
> -
>
> Key: HIVE-18589
> URL: https://issues.apache.org/jira/browse/HIVE-18589
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18589.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18590) Assertion error on transitive join inference in the presence of NOT NULL constraint

2018-01-30 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346281#comment-16346281
 ] 

Jesus Camacho Rodriguez commented on HIVE-18590:


[~ashutoshc], could you take a look? Thanks

> Assertion error on transitive join inference in the presence of NOT NULL 
> constraint
> ---
>
> Key: HIVE-18590
> URL: https://issues.apache.org/jira/browse/HIVE-18590
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18590.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18590) Assertion error on transitive join inference in the presence of NOT NULL constraint

2018-01-30 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18590:
---
Status: Patch Available  (was: In Progress)

> Assertion error on transitive join inference in the presence of NOT NULL 
> constraint
> ---
>
> Key: HIVE-18590
> URL: https://issues.apache.org/jira/browse/HIVE-18590
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18590.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18590) Assertion error on transitive join inference in the presence of NOT NULL constraint

2018-01-30 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18590:
---
Attachment: HIVE-18590.patch

> Assertion error on transitive join inference in the presence of NOT NULL 
> constraint
> ---
>
> Key: HIVE-18590
> URL: https://issues.apache.org/jira/browse/HIVE-18590
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18590.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-18590) Assertion error on transitive join inference in the presence of NOT NULL constraint

2018-01-30 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-18590 started by Jesus Camacho Rodriguez.
--
> Assertion error on transitive join inference in the presence of NOT NULL 
> constraint
> ---
>
> Key: HIVE-18590
> URL: https://issues.apache.org/jira/browse/HIVE-18590
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18590.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18590) Assertion error on transitive join inference in the presence of NOT NULL constraint

2018-01-30 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-18590:
--


> Assertion error on transitive join inference in the presence of NOT NULL 
> constraint
> ---
>
> Key: HIVE-18590
> URL: https://issues.apache.org/jira/browse/HIVE-18590
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18259) Automatic cleanup of invalidation cache for materialized views

2018-01-30 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346273#comment-16346273
 ] 

Ashutosh Chauhan commented on HIVE-18259:
-

+1

> Automatic cleanup of invalidation cache for materialized views
> --
>
> Key: HIVE-18259
> URL: https://issues.apache.org/jira/browse/HIVE-18259
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18259.patch
>
>
> HIVE-14498 introduces the invalidation cache for materialized views, which 
> keeps track of the transactions executed on a given table to infer whether 
> materialized view contents are outdated or not.
> Currently, the cache keeps information about transactions in memory to guarantee 
> quick response time, i.e., quick resolution of view freshness at 
> query rewriting time. This information can grow large, so we would like to 
> run a thread that cleans useless transactions from the cache, i.e., 
> transactions that do not invalidate any materialized view in the system, at an 
> interval defined by a property.
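>  
> A minimal sketch of such a periodic cleanup thread (generic Java with assumed names; the actual patch wires this into the metastore and its configuration differently):
> {code:java}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.TimeUnit;
>
> public class InvalidationCacheCleanerSketch {
>   // table name -> timestamp of the last transaction that could
>   // invalidate some materialized view (simplified model).
>   private final Map<String, Long> txnEvents = new ConcurrentHashMap<String, Long>();
>
>   // intervalSeconds would come from the property mentioned above.
>   public void startCleaner(long intervalSeconds, final long retentionMillis) {
>     ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
>     pool.scheduleAtFixedRate(() -> {
>       // Drop events too old to matter for any freshness decision.
>       long cutoff = System.currentTimeMillis() - retentionMillis;
>       txnEvents.values().removeIf(ts -> ts < cutoff);
>     }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
>   }
> }
> {code}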



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18516) load data should rename files consistent with insert statements for ACID Tables

2018-01-30 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18516:
--
Attachment: HIVE-18516.6.patch

> load data should rename files consistent with insert statements for ACID 
> Tables
> ---
>
> Key: HIVE-18516
> URL: https://issues.apache.org/jira/browse/HIVE-18516
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18516.1.patch, HIVE-18516.2.patch, 
> HIVE-18516.3.patch, HIVE-18516.4.patch, HIVE-18516.5.patch, HIVE-18516.6.patch
>
>
> h1. load data should rename files consistent with insert statements for ACID 
> Tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17331) Path must be used as key type of the pathToAliases

2018-01-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17331:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Oleg!

> Path must be used as key type of the pathToAliases
> -
>
> Key: HIVE-17331
> URL: https://issues.apache.org/jira/browse/HIVE-17331
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Assignee: Oleg Danilov
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-17331.2.patch, HIVE-17331.3.patch, 
> HIVE-17331.4.patch, HIVE-17331.patch
>
>
> This code uses String instead of Path as the key type of the pathToAliases map, 
> so it seems that get(String) always returns null.
> +*GenMapRedUtils.java*+
> {code:java}
> for (int pos = 0; pos < size; pos++) {
>   String taskTmpDir = taskTmpDirLst.get(pos);
>   TableDesc tt_desc = tt_descLst.get(pos);
>   MapWork mWork = plan.getMapWork();
>   // The map is keyed by Path, so this String lookup always returns null.
>   if (mWork.getPathToAliases().get(taskTmpDir) == null) {
>     taskTmpDir = taskTmpDir.intern();
>     Path taskTmpDirPath = StringInternUtils.internUriStringsInPath(new Path(taskTmpDir));
>     mWork.removePathToAlias(taskTmpDirPath);
>     mWork.addPathToAlias(taskTmpDirPath, taskTmpDir);
>     mWork.addPathToPartitionInfo(taskTmpDirPath, new PartitionDesc(tt_desc, null));
>     mWork.getAliasToWork().put(taskTmpDir, topOperators.get(pos));
> {code}
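>  
> A self-contained illustration of the pitfall (plain Java plus Hadoop's Path; not the actual Hive fix):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> import org.apache.hadoop.fs.Path;
>
> public class PathKeyLookupDemo {
>   public static void main(String[] args) {
>     Map<Path, String> pathToAliases = new HashMap<Path, String>();
>     String taskTmpDir = "/tmp/task1";
>     pathToAliases.put(new Path(taskTmpDir), "alias1");
>     // Map.get(Object) happily accepts a String, but a String never
>     // equals a Path, so the lookup silently misses:
>     System.out.println(pathToAliases.get(taskTmpDir));           // null
>     // The lookup must be done with a Path key:
>     System.out.println(pathToAliases.get(new Path(taskTmpDir))); // alias1
>   }
> }
> {code}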



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18587) insert DML event may attempt to calculate a checksum on directories

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346258#comment-16346258
 ] 

Hive QA commented on HIVE-18587:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 2 new + 236 unchanged - 1 
fixed = 238 total (was 237) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 123e2eb |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8939/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8939/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> insert DML event may attempt to calculate a checksum on directories
> ---
>
> Key: HIVE-18587
> URL: https://issues.apache.org/jira/browse/HIVE-18587
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18587.patch
>
>
> Looks like in the union case, some code path may pass directories in 
> newFiles, probably the legacy copyData/moveData; both seem to assume that 
> these paths are files, but do not actually enforce it.
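One plausible defensive fix is to compute a checksum only when the path is 
actually a file. A minimal sketch of such a guard using the public 
FileSystem API (an illustration of the idea, not the actual patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumGuard {
  // Returns null for directories instead of attempting a checksum on them.
  static FileChecksum checksumIfFile(FileSystem fs, Path p) throws IOException {
    if (!fs.getFileStatus(p).isFile()) {
      return null;
    }
    return fs.getFileChecksum(p);
  }
}
{code}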



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18589) java.io.IOException: Not enough history available

2018-01-30 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18589:
--
Attachment: HIVE-18589.01.patch

> java.io.IOException: Not enough history available
> -
>
> Key: HIVE-18589
> URL: https://issues.apache.org/jira/browse/HIVE-18589
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18589.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18589) java.io.IOException: Not enough history available

2018-01-30 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18589:
--
Status: Patch Available  (was: Open)

> java.io.IOException: Not enough history available
> -
>
> Key: HIVE-18589
> URL: https://issues.apache.org/jira/browse/HIVE-18589
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18589.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18589) java.io.IOException: Not enough history available

2018-01-30 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-18589:
-


> java.io.IOException: Not enough history available
> -
>
> Key: HIVE-18589
> URL: https://issues.apache.org/jira/browse/HIVE-18589
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17396) Support DPP with map joins where the source and target belong in the same stage

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346247#comment-16346247
 ] 

Hive QA commented on HIVE-17396:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908438/HIVE-17396.8.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 12862 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_recursive_mapjoin]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.common.TestHiveClientCache.testCloseAllClients 
(batchId=200)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropDatabase 
(batchId=242)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8938/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8938/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8938/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908438 - PreCommit-HIVE-Build

> Support DPP with map joins where the source and target belong in the same 
> stage
> ---
>
> Key: HIVE-17396
> URL: https://issues.apache.org/jira/browse/HIVE-17396
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17396.1.patch, HIVE-17396.2.patch, 
> HIVE-17396.3.patch, HIVE-17396.4.patch, HIVE-17396.5.patch, 
> HIVE-17396.6.patch, HIVE-17396.7.patch, HIVE-17396.8.patch
>
>
> When the target of a partition pruning sink operator is not the same as the 
> target of the hash table sink operator, both source and target get scheduled 
> within the same Spark job, and that can result in a FileNotFoundException. 
> HIVE-17225 has a fix to disable DPP in that scenario. This JIRA is to 
> support DPP for such cases.
> Test Case:
> SET hive.spark.dynamic.partition.pruning=true;
> SET hive.auto.convert.join=true;
> SET hive.strict.checks.cartesian.product=false;
> CREATE TABLE part_table1 (col int) PARTITIONED BY (part1_col int);
> CREATE TABLE part_table2 (col int) PARTITIONED BY (part2_col int);
> CREATE TABLE reg_table (col int);
> ALTER TABLE part_table1 ADD PARTITION (part1_col = 1);
> ALTER TABLE part_table2 ADD PARTITION (part2_col = 1);
> ALTER TABLE part_table2 ADD PARTITION (part2_col = 2);
> INSERT INTO TABLE part_table1 PARTITION (part1_col = 1) VALUES (1);
> INSERT INTO TABLE part_table2 PARTITION (part2_col = 1) VALUES (1);
> INSERT INTO TABLE part_table2 PARTITION (part2_col = 2) VALUES (2);
> INSERT INTO table 

[jira] [Assigned] (HIVE-18581) Replication events should use lower case db object names

2018-01-30 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek reassigned HIVE-18581:
--

Assignee: anishek

> Replication events should use lower case db object names
> 
>
> Key: HIVE-18581
> URL: https://issues.apache.org/jira/browse/HIVE-18581
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: events generated by replication should include the 
> database / table / partition / function names in lower case. This will 
> spare other applications from having to do explicit case-insensitive 
> matching of objects by name; in Hive, all db object names as specified 
> above are explicitly converted to lower case when comparing objects of 
> the same type (a minimal sketch follows the quoted block below). 
>Reporter: anishek
>Assignee: anishek
>Priority: Major
>
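A minimal sketch of the normalization this implies, assuming the fix simply 
lower-cases names before they are written into the replication event (the 
helper below is illustrative, not the actual Hive code):

{code:java}
import java.util.Locale;

// Hypothetical helper: emit db/table/partition/function names lower-cased,
// matching how Hive compares such names internally.
public final class ReplNameNormalizer {
  private ReplNameNormalizer() {}

  public static String normalize(String name) {
    // Locale.ROOT avoids surprises under locale-sensitive case mappings.
    return name == null ? null : name.toLowerCase(Locale.ROOT);
  }
}
{code}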




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18467) support whole warehouse dump / load + create/drop database events

2018-01-30 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-18467:
---
Attachment: HIVE-18467.0.patch

> support whole warehouse dump / load + create/drop database events
> -
>
> Key: HIVE-18467
> URL: https://issues.apache.org/jira/browse/HIVE-18467
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18467.0.patch
>
>
> For certain use cases, a complete Hive warehouse might need to be replicated 
> to a DR site, so rather than allowing only a database name in the REPL DUMP 
> command, we should allow dumping all databases using the "*" option, as in 
> _REPL DUMP *_ 
> On the repl load side there will be no option to specify the database name 
> when loading from a location used to dump multiple databases, hence only 
> _REPL LOAD FROM [location]_ would be supported when dumping via _REPL DUMP *_
> Additionally, incremental dumps will go through all events across databases 
> in a warehouse, hence CREATE / DROP DATABASE events have to be serialized 
> correctly to allow repl load to recreate them correctly. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18467) support whole warehouse dump / load + create/drop database events

2018-01-30 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-18467:
---
Attachment: (was: HIVE-18467.0.patch)

> support whole warehouse dump / load + create/drop database events
> -
>
> Key: HIVE-18467
> URL: https://issues.apache.org/jira/browse/HIVE-18467
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18467.0.patch
>
>
> For certain use cases, a complete Hive warehouse might need to be replicated 
> to a DR site, so rather than allowing only a database name in the REPL DUMP 
> command, we should allow dumping all databases using the "*" option, as in 
> _REPL DUMP *_ 
> On the repl load side there will be no option to specify the database name 
> when loading from a location used to dump multiple databases, hence only 
> _REPL LOAD FROM [location]_ would be supported when dumping via _REPL DUMP *_
> Additionally, incremental dumps will go through all events across databases 
> in a warehouse, hence CREATE / DROP DATABASE events have to be serialized 
> correctly to allow repl load to recreate them correctly. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17396) Support DPP with map joins where the source and target belong in the same stage

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346212#comment-16346212
 ] 

Hive QA commented on HIVE-17396:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
36s{color} | {color:red} ql: The patch generated 1 new + 3 unchanged - 0 fixed 
= 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 123e2eb |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8938/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8938/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support DPP with map joins where the source and target belong in the same 
> stage
> ---
>
> Key: HIVE-17396
> URL: https://issues.apache.org/jira/browse/HIVE-17396
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17396.1.patch, HIVE-17396.2.patch, 
> HIVE-17396.3.patch, HIVE-17396.4.patch, HIVE-17396.5.patch, 
> HIVE-17396.6.patch, HIVE-17396.7.patch, HIVE-17396.8.patch
>
>
> When the target of a partition pruning sink operator is not the same as the 
> target of the hash table sink operator, both source and target get scheduled 
> within the same Spark job, and that can result in a FileNotFoundException. 
> HIVE-17225 has a fix to disable DPP in that scenario. This JIRA is to 
> support DPP for such cases.
> Test Case:
> SET hive.spark.dynamic.partition.pruning=true;
> SET hive.auto.convert.join=true;
> SET hive.strict.checks.cartesian.product=false;
> CREATE TABLE part_table1 (col int) PARTITIONED BY (part1_col int);
> CREATE TABLE part_table2 (col int) PARTITIONED BY (part2_col int);
> CREATE TABLE reg_table (col int);
> ALTER TABLE part_table1 ADD PARTITION (part1_col = 1);
> ALTER TABLE part_table2 ADD PARTITION (part2_col = 1);
> ALTER TABLE part_table2 ADD 

[jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark

2018-01-30 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346196#comment-16346196
 ] 

Rui Li commented on HIVE-18301:
---

Hi [~kellyzly], is the input path the only thing we need to store with the 
cached RDD? The IOContext has quite a few other fields. I wonder whether they 
are available if the RDD is cached.

> Investigate to enable MapInput cache in Hive on Spark
> -
>
> Key: HIVE-18301
> URL: https://issues.apache.org/jira/browse/HIVE-18301
> Project: Hive
>  Issue Type: Bug
>Reporter: liyunzhang
>Assignee: liyunzhang
>Priority: Major
> Attachments: HIVE-18301.1.patch, HIVE-18301.patch
>
>
> An IOContext problem was previously found in MapTran when the Spark RDD 
> cache was enabled (HIVE-8920), so we disabled the RDD cache in MapTran at 
> [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202].
>  The problem is that IOContext does not seem to be initialized correctly in 
> the Spark yarn client/cluster mode, which caused an exception like 
> {code}
> Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most 
> recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): 
> java.lang.RuntimeException: Error processing row: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
>   at 
> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>   at org.apache.spark.scheduler.Task.run(Task.scala:85)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(AbstractMapOperator.java:101)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:516)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1187)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:152)
>   ... 12 more
> Driver stacktrace:
> {code}
> In yarn client/cluster mode, 
> [ExecMapperContext#currentInputPath|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java#L109]
>  is sometimes null when the RDD cache is enabled.
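A minimal sketch of a defensive check at the failing call site, assuming the 
NPE comes from dereferencing a null input path in IOContext (and that 
IOContext exposes it via getInputPath()) when the cached RDD bypasses the 
record reader that normally sets it:

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.IOContext;

public class InputPathGuard {
  // Fail with a descriptive message instead of an opaque NullPointerException.
  static Path requireInputPath(IOContext ioContext) {
    Path current = ioContext.getInputPath();
    if (current == null) {
      throw new IllegalStateException(
          "IOContext input path not initialized; the RDD cache may have "
              + "skipped the record reader that normally sets it");
    }
    return current;
  }
}
{code}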



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346194#comment-16346194
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908434/HIVE-18192.06.patch

{color:green}SUCCESS:{color} +1 due to 20 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 116 failed/errored test(s), 12862 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=248)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_tmp_table] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_tmp_table] 
(batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_acid] (batchId=81)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=83)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[schemeAuthority2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_tmp_table]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_tmp_table]
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_dummy]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_conversions]
 (batchId=169)

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346165#comment-16346165
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  8s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 30 new + 22 unchanged 
- 6 fixed = 52 total (was 28) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
2s{color} | {color:red} ql: The patch generated 71 new + 2634 unchanged - 43 
fixed = 2705 total (was 2677) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/streaming: The patch generated 26 new + 463 
unchanged - 20 fixed = 489 total (was 483) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} itests/hive-unit: The patch generated 9 new + 73 
unchanged - 9 fixed = 82 total (was 82) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
49s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 123e2eb |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/whitespace-eol.txt 
|
| 

[jira] [Updated] (HIVE-18543) Add print sessionid in console

2018-01-30 Thread tartarus (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tartarus updated HIVE-18543:

Attachment: HIVE_18543.patch

> Add print sessionid in console
> --
>
> Key: HIVE-18543
> URL: https://issues.apache.org/jira/browse/HIVE-18543
> Project: Hive
>  Issue Type: Improvement
>  Components: CLI, Clients
>Affects Versions: 2.3.2
> Environment: CentOS6.5
> Hive-1.2.1
> Hive-2.3.2
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE_18543.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The Hive client log file already contains the session id, but the console 
> does not, so the user cannot easily correlate console output with the log.
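A minimal sketch of what echoing the session id to the console could look 
like, assuming the existing SessionState APIs (the exact hook point in the 
CLI startup path is not part of this ticket's description):

{code:java}
import org.apache.hadoop.hive.ql.session.SessionState;

public class SessionIdBanner {
  // Print the session id so console output can be matched against the log.
  public static void printSessionId() {
    SessionState ss = SessionState.get();
    if (ss != null) {
      SessionState.getConsole().printInfo("Session ID = " + ss.getSessionId());
    }
  }
}
{code}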



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18543) Add print sessionid in console

2018-01-30 Thread tartarus (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tartarus updated HIVE-18543:

Attachment: (was: HIVE_18543.patch)

> Add print sessionid in console
> --
>
> Key: HIVE-18543
> URL: https://issues.apache.org/jira/browse/HIVE-18543
> Project: Hive
>  Issue Type: Improvement
>  Components: CLI, Clients
>Affects Versions: 2.3.2
> Environment: CentOS6.5
> Hive-1.2.1
> Hive-2.3.2
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The Hive client log file already contains the session id, but the console 
> does not, so the user cannot easily correlate console output with the log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-01-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346141#comment-16346141
 ] 

Sergey Shelukhin commented on HIVE-18571:
-

Rebased the patch and fixed the issues.

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted, and some (simple) stats are invalid 
> for ACID table dirs altogether. 
> I have a patch almost ready; I need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18571) stats issues for MM tables

2018-01-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18571:

Attachment: HIVE-18571.01.patch

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted, and some (simple) stats are invalid 
> for ACID table dirs altogether. 
> I have a patch almost ready; I need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18586) Upgrade Derby to 10.14.1.0

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346132#comment-16346132
 ] 

Hive QA commented on HIVE-18586:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908427/HIVE-18586.1.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 12862 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8936/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8936/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8936/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908427 - PreCommit-HIVE-Build

> Upgrade Derby to 10.14.1.0
> --
>
> Key: HIVE-18586
> URL: https://issues.apache.org/jira/browse/HIVE-18586
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-18586.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18048) Vectorization: Support Struct type with vectorization

2018-01-30 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-18048:

Description: 
Struct type is not supported in MapWork with vectorization; it should be 
supported to improve performance.
 A new UDF will be added to access the fields of a Struct.

Note:
 * Nested complex type won't be tested in this ticket.

  was:
Struct type is not supported in MapWork with vectorization; it should be 
supported to improve performance.
 A new UDF will be added to access the fields of a Struct.

Note:
 * Filter operator won't be supported.
 * Nested complex type won't be tested in this ticket.


> Vectorization: Support Struct type with vectorization
> -
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Improvement
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch, HIVE-18048.005.patch, 
> HIVE-18048.006.patch, HIVE-18048.007.patch
>
>
> Struct type is not supported in MapWork with vectorization; it should be 
> supported to improve performance.
>  A new UDF will be added to access the fields of a Struct.
> Note:
>  * Nested complex type won't be tested in this ticket.
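The ticket does not include the new UDF itself, so the following is only a 
hedged sketch of what a struct-field accessor could look like on the 
standard GenericUDF API (class name, argument handling, and field selection 
are all illustrative assumptions):

{code:java}
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.StructField;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

public class GenericUDFStructFieldSketch extends GenericUDF {
  private StructObjectInspector structOI;
  private StructField field;

  @Override
  public ObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
    if (args.length < 1 || !(args[0] instanceof StructObjectInspector)) {
      throw new UDFArgumentException("expects a struct as its first argument");
    }
    structOI = (StructObjectInspector) args[0];
    // For brevity the first field is selected; a real UDF would take the
    // field name (or index) as a second argument.
    field = structOI.getAllStructFieldRefs().get(0);
    return field.getFieldObjectInspector();
  }

  @Override
  public Object evaluate(DeferredObject[] args) throws HiveException {
    return structOI.getStructFieldData(args[0].get(), field);
  }

  @Override
  public String getDisplayString(String[] children) {
    return "struct_field(" + children[0] + ")";
  }
}
{code}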



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18048) Vectorization: Support Struct type with vectorization

2018-01-30 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-18048:

Attachment: HIVE-18048.007.patch

> Vectorization: Support Struct type with vectorization
> -
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Improvement
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch, HIVE-18048.005.patch, 
> HIVE-18048.006.patch, HIVE-18048.007.patch
>
>
> Struct type is not supported in MapWork with vectorization; it should be 
> supported to improve performance.
>  A new UDF will be added to access the fields of a Struct.
> Note:
>  * Filter operator won't be supported.
>  * Nested complex type won't be tested in this ticket.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18586) Upgrade Derby to 10.14.1.0

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346108#comment-16346108
 ] 

Hive QA commented on HIVE-18586:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} standalone-metastore: The patch generated 2 new + 21 
unchanged - 2 fixed = 23 total (was 23) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hcatalog/core: The patch generated 3 new + 17 
unchanged - 0 fixed = 20 total (was 17) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
48s{color} | {color:red} root: The patch generated 5 new + 146 unchanged - 2 
fixed = 151 total (was 148) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 6bde1ed |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8936/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8936/yetus/diff-checkstyle-hcatalog_core.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8936/yetus/diff-checkstyle-root.txt
 |
| modules | C: standalone-metastore hcatalog/core hcatalog/webhcat/java-client 
. U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8936/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade Derby to 10.14.1.0
> --
>
> Key: HIVE-18586
> URL: https://issues.apache.org/jira/browse/HIVE-18586
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-18586.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18585:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18585.1.patch, HIVE-18585.patch
>
>
> For example, Calcite considers date and varchar incompatible types in its 
> type system (as with CASE expressions), while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346072#comment-16346072
 ] 

Jesus Camacho Rodriguez commented on HIVE-18585:


+1

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.1.patch, HIVE-18585.patch
>
>
> For example, Calcite considers date and varchar incompatible types in its 
> type system (as with CASE expressions), while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18585:

Status: Patch Available  (was: Open)

Patch updated with golden files. 

[~jcamachorodriguez] Can you please review?

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.1.patch, HIVE-18585.patch
>
>
> For example, Calcite considers date and varchar incompatible types in its 
> type system (as with CASE expressions), while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18585:

Status: Open  (was: Patch Available)

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.1.patch, HIVE-18585.patch
>
>
> For example, Calcite considers date and varchar incompatible types in its 
> type system (as with CASE expressions), while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18585:

Attachment: HIVE-18585.1.patch

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.1.patch, HIVE-18585.patch
>
>
> e.g., Calcite considers date and varchar incompatible types in case system, 
> while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17983) Make the standalone metastore generate tarballs etc.

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346058#comment-16346058
 ] 

Hive QA commented on HIVE-17983:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 16m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
40s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} beeline in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} standalone-metastore in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | {color:red} beeline in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 18s{color} 
| {color:red} beeline in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 14m  
4s{color} | {color:red} root: The patch generated 434 new + 71012 unchanged - 7 
fixed = 71446 total (was 71019) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} beeline: The patch generated 4 new + 690 unchanged - 4 
fixed = 694 total (was 694) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
16s{color} | {color:red} standalone-metastore: The patch generated 430 new + 
5443 unchanged - 3 fixed = 5873 total (was 5446) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 123 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  4m 
51s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
2s{color} | {color:red} beeline in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
1s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
2s{color} | {color:red} metastore in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
2s{color} | {color:red} packaging in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
1s{color} | {color:red} standalone-metastore in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m  
3s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 54s{color} | 
{color:black} {color} |
\\
\\

[jira] [Commented] (HIVE-17983) Make the standalone metastore generate tarballs etc.

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346055#comment-16346055
 ] 

Hive QA commented on HIVE-17983:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908404/HIVE-17983.3.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 12875 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8935/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8935/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8935/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908404 - PreCommit-HIVE-Build

> Make the standalone metastore generate tarballs etc.
> 
>
> Key: HIVE-17983
> URL: https://issues.apache.org/jira/browse/HIVE-17983
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17983.2.patch, HIVE-17983.3.patch, HIVE-17983.patch
>
>
> In order to be separately installable, the standalone metastore needs its own 
> tarballs, startup scripts, etc.  All of the SQL installation and upgrade 
> scripts also need to move from metastore to standalone-metastore.
> I also plan to create Dockerfiles for different database types so that 
> developers can test the SQL installation and upgrade scripts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18588) Add 'checkin' profile that runs slower tests in standalone-metastore

2018-01-30 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates reassigned HIVE-18588:
-

Assignee: Alan Gates

> Add 'checkin' profile that runs slower tests in standalone-metastore
> 
>
> Key: HIVE-18588
> URL: https://issues.apache.org/jira/browse/HIVE-18588
> Project: Hive
>  Issue Type: Test
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>
> Runtime for unit tests in standalone-metastore now exceeds 25 minutes.  
> Ideally unit tests should finish within 2-3 minutes so users will run them 
> frequently.  To solve this I propose to carve off many of the slower tests to 
> run in a new 'checkin' profile.  This profile should be run before checkin 
> and by the ptest infrastructure.
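>
> One common way to wire up such a profile is with JUnit categories: tag the 
> slow tests with a marker interface, exclude that category in the default 
> Surefire run, and include it again under the 'checkin' profile. A minimal 
> sketch of the test-side tagging (the {{SlowTest}} marker and the test names 
> are illustrative assumptions, not the actual patch):
> {code:java}
> import org.junit.Test;
> import org.junit.experimental.categories.Category;
> 
> // Marker interface used purely for categorization; no methods needed.
> interface SlowTest {}
> 
> // A slow test opts in by declaring the category; Surefire's
> // groups/excludedGroups settings can then run or skip it per profile.
> @Category(SlowTest.class)
> public class TestSlowMetastoreUpgrade {
>   @Test
>   public void testFullUpgradePath() {
>     // long-running assertions would go here
>   }
> }
> {code}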



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-01-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346042#comment-16346042
 ] 

Sergey Shelukhin commented on HIVE-18571:
-

Hmm, there's some issue with the patch... would need to take a look

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double-counted, and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready; need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18536) IOW + DP is broken for insert-only ACID

2018-01-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346040#comment-16346040
 ] 

Sergey Shelukhin commented on HIVE-18536:
-

Rebased the patch. [~ekoifman] can you review the changes? 
https://reviews.apache.org/r/65356/diff/1-3/

> IOW + DP is broken for insert-only ACID
> ---
>
> Key: HIVE-18536
> URL: https://issues.apache.org/jira/browse/HIVE-18536
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18536.01.patch, HIVE-18536.02.patch, 
> HIVE-18536.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18536) IOW + DP is broken for insert-only ACID

2018-01-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18536:

Attachment: HIVE-18536.02.patch

> IOW + DP is broken for insert-only ACID
> ---
>
> Key: HIVE-18536
> URL: https://issues.apache.org/jira/browse/HIVE-18536
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18536.01.patch, HIVE-18536.02.patch, 
> HIVE-18536.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18459) hive-exec.jar leaks contents of fb303.jar into classpath

2018-01-30 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-18459:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Fix has been committed to master. Thanks for the review [~aihuaxu]

> hive-exec.jar leaks contents of fb303.jar into classpath
> -
>
> Key: HIVE-18459
> URL: https://issues.apache.org/jira/browse/HIVE-18459
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18459.patch, HIVE-18459.patch, HIVE-18459.patch
>
>
> Thrift classes are now on the hive classpath in the hive-exec.jar 
> (HIVE-11553). This makes it hard to test with other versions of this library. 
> The library is already a declared dependency and is not required to be 
> included in the hive-exec.jar.
> I am proposing that we stop including these classes, as was the case in past 
> releases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18526) Backport HIVE-16886 to Hive 2

2018-01-30 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346007#comment-16346007
 ] 

Alexander Kolbasov commented on HIVE-18526:
---

Verified that the patch works for MS SQL Server.

> Backport HIVE-16886 to Hive 2
> -
>
> Key: HIVE-18526
> URL: https://issues.apache.org/jira/browse/HIVE-18526
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.3.3
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-18526.01-branch-2.patch, 
> HIVE-18526.02-branch-2.patch
>
>
> The fix for HIVE-16886 isn't in Hive 2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16886) HMS log notifications may have duplicated event IDs if multiple HMS are running concurrently

2018-01-30 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346006#comment-16346006
 ] 

Alexander Kolbasov commented on HIVE-16886:
---

[~anishek] I verified that the DataNucleus-based patch works on MS SQL Server 
(and that the test fails without the patch). How should we proceed?

I'm mostly interested in the branch-2 patch for now, but it would be best to 
have similar code in branch-2 and branch-3. Would the following work for you:
 * We put the DataNucleus-based patch in branch-2.
 * We figure out a way to update branch-3 to have the same fix.

Or would you rather prefer some other way forward?

> HMS log notifications may have duplicated event IDs if multiple HMS are 
> running concurrently
> 
>
> Key: HIVE-16886
> URL: https://issues.apache.org/jira/browse/HIVE-16886
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0, 2.3.2, 2.3.3
>Reporter: Sergio Peña
>Assignee: anishek
>Priority: Major
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-16886.1.patch, HIVE-16886.2.patch, 
> HIVE-16886.3.patch, HIVE-16886.4.patch, HIVE-16886.5.patch, 
> HIVE-16886.6.patch, HIVE-16886.7.patch, HIVE-16886.8.patch, 
> datastore-identity-holes.diff
>
>
> When running multiple Hive Metastore servers and DB notifications are 
> enabled, I could see that notifications can be persisted with a duplicated 
> event ID. 
> This does not happen when running multiple threads in a single HMS node due 
> to the locking acquired on the DbNotificationsLog class, but multiple HMS 
> could cause conflicts.
> The issue is in the ObjectStore#addNotificationEvent() method. The event ID 
> fetched from the datastore is used for the new notification, incremented in 
> the server itself, then persisted or updated back to the datastore. If 2 
> servers read the same ID, then these 2 servers write a new notification with 
> the same ID.
> The event ID is neither unique nor a primary key.
> Here's a test case using the TestObjectStore class that confirms this issue:
> {noformat}
> @Test
>   public void testConcurrentAddNotifications() throws ExecutionException, 
> InterruptedException {
> final int NUM_THREADS = 2;
> CountDownLatch countIn = new CountDownLatch(NUM_THREADS);
> CountDownLatch countOut = new CountDownLatch(1);
> HiveConf conf = new HiveConf();
> conf.setVar(HiveConf.ConfVars.METASTORE_EXPRESSION_PROXY_CLASS, 
> MockPartitionExpressionProxy.class.getName());
> ExecutorService executorService = 
> Executors.newFixedThreadPool(NUM_THREADS);
> FutureTask<Void>[] tasks = new FutureTask[NUM_THREADS];
> for (int i = 0; i < NUM_THREADS; ++i) {
>   final int n = i;
>   tasks[i] = new FutureTask<Void>(new Callable<Void>() {
> @Override
> public Void call() throws Exception {
>   ObjectStore store = new ObjectStore();
>   store.setConf(conf);
>   NotificationEvent dbEvent =
>   new NotificationEvent(0, 0, 
> EventMessage.EventType.CREATE_DATABASE.toString(), "CREATE DATABASE DB" + n);
>   System.out.println("ADDING NOTIFICATION");
>   countIn.countDown();
>   countOut.await();
>   store.addNotificationEvent(dbEvent);
>   System.out.println("FINISH NOTIFICATION");
>   return null;
> }
>   });
>   executorService.execute(tasks[i]);
> }
> countIn.await();
> countOut.countDown();
> for (int i = 0; i < NUM_THREADS; ++i) {
>   tasks[i].get();
> }
> NotificationEventResponse eventResponse = 
> objectStore.getNextNotification(new NotificationEventRequest());
> Assert.assertEquals(2, eventResponse.getEventsSize());
> Assert.assertEquals(1, eventResponse.getEvents().get(0).getEventId());
> // This fails because the next notification has an event ID = 1
> Assert.assertEquals(2, eventResponse.getEvents().get(1).getEventId());
>   }
> {noformat}
> The last assertion fails because the second notification also gets event ID 1 instead of 2. 
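>
> The usual fix for this read-increment-write pattern is to claim the id under 
> a database-level lock instead of incrementing it in the server. A minimal 
> sketch of that idea over raw JDBC (the actual patch goes through DataNucleus, 
> and the sequence table/column names here are assumptions):
> {code:java}
> import java.sql.Connection;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
> 
> final class EventIdAllocator {
>   // Atomically claim the next event id. SELECT ... FOR UPDATE takes a row
>   // lock, so concurrent HMS instances serialize instead of both reading the
>   // same value. The caller owns (and commits) the surrounding transaction.
>   static long allocate(Connection conn) throws SQLException {
>     try (Statement stmt = conn.createStatement()) {
>       long id;
>       try (ResultSet rs = stmt.executeQuery(
>           "SELECT \"NEXT_EVENT_ID\" FROM \"NOTIFICATION_SEQUENCE\" FOR UPDATE")) {
>         if (!rs.next()) {
>           throw new SQLException("notification sequence row is missing");
>         }
>         id = rs.getLong(1);
>       }
>       stmt.executeUpdate(
>           "UPDATE \"NOTIFICATION_SEQUENCE\" SET \"NEXT_EVENT_ID\" = " + (id + 1));
>       return id;
>     }
>   }
> }
> {code}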



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18237) missing results for insert_only table after DP insert

2018-01-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18237:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the patch!

> missing results for insert_only table after DP insert
> -
>
> Key: HIVE-18237
> URL: https://issues.apache.org/jira/browse/HIVE-18237
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Zoltan Haindrich
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, 
> HIVE-18237.03.patch, HIVE-18237.04.patch
>
>
> {code}
> set hive.stats.column.autogather=false;
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=200;
> set hive.exec.max.dynamic.partitions=200;
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create table i0 (p int,v int);
> insert into i0 values
> (0,0),
> (2,2),
> (3,3);
> create table p0 (v int) partitioned by (p int) stored as orc 
>   tblproperties ("transactional"="true", 
> "transactional_properties"="insert_only");
> explain insert overwrite table p0 partition (p) select * from i0 where v < 3;
> insert overwrite table p0 partition (p) select * from i0 where v < 3;
> select count(*) from p0 where v!=1;
> {code}
> The table p0 should contain {{2}} rows at this point, but the result is {{0}}.
> * seems to be specific to insert_only tables
> * the existing data appears if an {{insert into}} is executed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345976#comment-16345976
 ] 

Hive QA commented on HIVE-18585:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908402/HIVE-18585.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 12862 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_udf_max] (batchId=2)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_precision2] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_offcbo] 
(batchId=46)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query72] 
(batchId=250)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query72] 
(batchId=248)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8934/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8934/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8934/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 24 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908402 - PreCommit-HIVE-Build

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.patch
>
>
> e.g., Calcite considers date and varchar incompatible types in its type 
> system, while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18552) Split hive.strict.checks.large.query into two configs

2018-01-30 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18552:

Status: Open  (was: Patch Available)

> Split hive.strict.checks.large.query into two configs
> -
>
> Key: HIVE-18552
> URL: https://issues.apache.org/jira/browse/HIVE-18552
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18552.1.patch, HIVE-18552.2.patch, 
> HIVE-18552.3.patch
>
>
> {{hive.strict.checks.large.query}} controls the strict checks for restricting 
> order bys with no limits, and scans of a partitioned table without a filter 
> on the partition column.
> While both checks prevent "large" queries from being run, they control very 
> different behavior. It would be better if users could control these 
> restrictions separately.
> Furthermore, many users make the mistake of abusing partitioned tables and 
> often end up in a situation where they are running queries that do 
> full-table scans of partitioned tables. This can lead to lots of issues for 
> Hive - e.g. OOM issues because so many partitions are loaded in memory. So it 
> would be good if we enabled this restriction by default.
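>
> A sketch of what the split could look like in the strict-mode checks (flag 
> and method names here are illustrative; the committed config names may 
> differ):
> {code:java}
> // Each restriction gets its own flag instead of both sharing
> // hive.strict.checks.large.query.
> final class StrictChecks {
>   boolean allowOrderByNoLimit;     // e.g. an orderby.no.limit config
>   boolean allowNoPartitionFilter;  // e.g. a no.partition.filter config
> 
>   void checkOrderBy(boolean hasOrderBy, boolean hasLimit) {
>     if (!allowOrderByNoLimit && hasOrderBy && !hasLimit) {
>       throw new IllegalStateException(
>           "ORDER BY with no LIMIT is disabled in strict mode");
>     }
>   }
> 
>   void checkPartitionScan(boolean scansPartitionedTable, boolean hasPartFilter) {
>     if (!allowNoPartitionFilter && scansPartitionedTable && !hasPartFilter) {
>       throw new IllegalStateException(
>           "Scanning a partitioned table without a partition filter is disabled"
>               + " in strict mode");
>     }
>   }
> }
> {code}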



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18552) Split hive.strict.checks.large.query into two configs

2018-01-30 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18552:

Status: Patch Available  (was: Open)

> Split hive.strict.checks.large.query into two configs
> -
>
> Key: HIVE-18552
> URL: https://issues.apache.org/jira/browse/HIVE-18552
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18552.1.patch, HIVE-18552.2.patch, 
> HIVE-18552.3.patch
>
>
> {{hive.strict.checks.large.query}} controls the strict checks for restricting 
> order bys with no limits, and scans of a partitioned table without a filter 
> on the partition column.
> While both checks prevent "large" queries from being run, they control very 
> different behavior. It would be better if users could control these 
> restrictions separately.
> Furthermore, many users make the mistake of abusing partitioned tables and 
> often end up in a situation where they are running queries that do 
> full-table scans of partitioned tables. This can lead to lots of issues for 
> Hive - e.g. OOM issues because so many partitions are loaded in memory. So it 
> would be good if we enabled this restriction by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18552) Split hive.strict.checks.large.query into two configs

2018-01-30 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18552:

Attachment: HIVE-18552.3.patch

> Split hive.strict.checks.large.query into two configs
> -
>
> Key: HIVE-18552
> URL: https://issues.apache.org/jira/browse/HIVE-18552
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18552.1.patch, HIVE-18552.2.patch, 
> HIVE-18552.3.patch
>
>
> {{hive.strict.checks.large.query}} controls the strict checks for restricting 
> order bys with no limits, and scans of a partitioned table without a filter 
> on the partition column.
> While both checks prevent "large" queries from being run, they control very 
> different behavior. It would be better if users could control these 
> restrictions separately.
> Furthermore, many users make the mistake of abusing partitioned tables and 
> often end up in a situation where they are running queries that do 
> full-table scans of partitioned tables. This can lead to lots of issues for 
> Hive - e.g. OOM issues because so many partitions are loaded in memory. So it 
> would be good if we enabled this restriction by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18587) insert DML event may attempt to calculate a checksum on directories

2018-01-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18587:

Status: Patch Available  (was: Open)

It doesn't make sense to force every input path producer to perform recursion 
that it doesn't itself need; instead, recursion was added to the event code 
itself (a sketch follows below).
[~ashutoshc] [~jcamachorodriguez] can you take a look? Will create an RB shortly
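A minimal sketch of that directory walk over the Hadoop FileSystem API (class 
and method names are illustrative, not the patch itself):
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class NewFilesExpander {
  // Expand any directories in newFiles into the plain files beneath them, so
  // the insert event code only ever computes checksums on actual files.
  static List<Path> expandToFiles(FileSystem fs, List<Path> paths) throws IOException {
    List<Path> files = new ArrayList<>();
    for (Path p : paths) {
      collect(fs, fs.getFileStatus(p), files);
    }
    return files;
  }

  private static void collect(FileSystem fs, FileStatus status, List<Path> out)
      throws IOException {
    if (status.isDirectory()) {
      for (FileStatus child : fs.listStatus(status.getPath())) {
        collect(fs, child, out);  // recurse into nested (e.g. union) subdirs
      }
    } else {
      out.add(status.getPath());
    }
  }
}
{code}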

> insert DML event may attempt to calculate a checksum on directories
> ---
>
> Key: HIVE-18587
> URL: https://issues.apache.org/jira/browse/HIVE-18587
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18587.patch
>
>
> Looks like in the union case, some code path may pass directories in newFiles. 
> Probably legacy copyData/moveData; both seem to assume that these paths are 
> files, but do not actually enforce it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18587) insert DML event may attempt to calculate a checksum on directories

2018-01-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18587:

Attachment: HIVE-18587.patch

> insert DML event may attempt to calculate a checksum on directories
> ---
>
> Key: HIVE-18587
> URL: https://issues.apache.org/jira/browse/HIVE-18587
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18587.patch
>
>
> Looks like in the union case, some code path may pass directories in newFiles. 
> Probably legacy copyData/moveData; both seem to assume that these paths are 
> files, but do not actually enforce it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18587) insert DML event may attempt to calculate a checksum on directories

2018-01-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-18587:
---

Assignee: Sergey Shelukhin

> insert DML event may attempt to calculate a checksum on directories
> ---
>
> Key: HIVE-18587
> URL: https://issues.apache.org/jira/browse/HIVE-18587
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>
> Looks like in the union case, some code path may pass directories in newFiles. 
> Probably legacy copyData/moveData; both seem to assume that these paths are 
> files, but do not actually enforce it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345924#comment-16345924
 ] 

Hive QA commented on HIVE-18585:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 00145ee |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8934/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.patch
>
>
> e.g., Calcite considers date and varchar incompatible types in its type 
> system, while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18516) load data should rename files consistent with insert statements for ACID Tables

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345895#comment-16345895
 ] 

Hive QA commented on HIVE-18516:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908398/HIVE-18516.5.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 12864 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_data_acid_rename] 
(batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[load_data_into_acid]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversion 
(batchId=259)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversionVectorized
 (batchId=259)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8933/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8933/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8933/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 24 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908398 - PreCommit-HIVE-Build

> load data should rename files consistent with insert statements for ACID 
> Tables
> ---
>
> Key: HIVE-18516
> URL: https://issues.apache.org/jira/browse/HIVE-18516
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18516.1.patch, HIVE-18516.2.patch, 
> HIVE-18516.3.patch, HIVE-18516.4.patch, HIVE-18516.5.patch
>
>
> h1. load data should rename files consistent with insert statements for ACID 
> Tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18577) SemanticAnalyzer.validate has some pointless metastore calls

2018-01-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18577:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> SemanticAnalyzer.validate has some pointless metastore calls
> 
>
> Key: HIVE-18577
> URL: https://issues.apache.org/jira/browse/HIVE-18577
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18577.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17396) Support DPP with map joins where the source and target belong in the same stage

2018-01-30 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17396:
---
Attachment: HIVE-17396.8.patch

> Support DPP with map joins where the source and target belong in the same 
> stage
> ---
>
> Key: HIVE-17396
> URL: https://issues.apache.org/jira/browse/HIVE-17396
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17396.1.patch, HIVE-17396.2.patch, 
> HIVE-17396.3.patch, HIVE-17396.4.patch, HIVE-17396.5.patch, 
> HIVE-17396.6.patch, HIVE-17396.7.patch, HIVE-17396.8.patch
>
>
> When the target of a partition pruning sink operator is not the same as 
> the target of the hash table sink operator, both source and target get 
> scheduled within the same Spark job, and that can result in a File Not 
> Found Exception.  HIVE-17225 has a fix to disable DPP in that scenario.  
> This JIRA is to support DPP for such cases.
> Test Case:
> SET hive.spark.dynamic.partition.pruning=true;
> SET hive.auto.convert.join=true;
> SET hive.strict.checks.cartesian.product=false;
> CREATE TABLE part_table1 (col int) PARTITIONED BY (part1_col int);
> CREATE TABLE part_table2 (col int) PARTITIONED BY (part2_col int);
> CREATE TABLE reg_table (col int);
> ALTER TABLE part_table1 ADD PARTITION (part1_col = 1);
> ALTER TABLE part_table2 ADD PARTITION (part2_col = 1);
> ALTER TABLE part_table2 ADD PARTITION (part2_col = 2);
> INSERT INTO TABLE part_table1 PARTITION (part1_col = 1) VALUES (1);
> INSERT INTO TABLE part_table2 PARTITION (part2_col = 1) VALUES (1);
> INSERT INTO TABLE part_table2 PARTITION (part2_col = 2) VALUES (2);
> INSERT INTO table reg_table VALUES (1), (2), (3), (4), (5), (6);
> EXPLAIN SELECT *
> FROM   part_table1 pt1,
>part_table2 pt2,
>reg_table rt
> WHERE  rt.col = pt1.part1_col
> ANDpt2.part2_col = pt1.part1_col;
> Plan:
> STAGE DEPENDENCIES:
>   Stage-2 is a root stage
>   Stage-1 depends on stages: Stage-2
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-2
> Spark
>  #### A masked pattern was here #### 
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: pt1
>   Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: col (type: int), part1_col (type: int)
> outputColumnNames: _col0, _col1
> Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
> Spark HashTable Sink Operator
>   keys:
> 0 _col1 (type: int)
> 1 _col1 (type: int)
> 2 _col0 (type: int)
> Select Operator
>   expressions: _col1 (type: int)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
>   Group By Operator
> keys: _col0 (type: int)
> mode: hash
> outputColumnNames: _col0
> Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
> Spark Partition Pruning Sink Operator
>   Target column: part2_col (int)
>   partition key expr: part2_col
>   Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
>   target work: Map 2
> Local Work:
>   Map Reduce Local Work
> Map 2 
> Map Operator Tree:
> TableScan
>   alias: pt2
>   Statistics: Num rows: 2 Data size: 2 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: col (type: int), part2_col (type: int)
> outputColumnNames: _col0, _col1
> Statistics: Num rows: 2 Data size: 2 Basic stats: 
> COMPLETE Column stats: NONE
> Spark HashTable Sink Operator
>   keys:
> 0 _col1 (type: int)
> 1 _col1 (type: int)
> 2 _col0 (type: int)
> Local Work:
>   Map Reduce Local Work
>   Stage: Stage-1
> Spark
>  #### A masked pattern was here #### 
>   Vertices:
> Map 3 
> Map Operator Tree:
> TableScan
>   alias: 

[jira] [Commented] (HIVE-18281) HiveServer2 HA for LLAP and Workload Manager

2018-01-30 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345832#comment-16345832
 ] 

Prasanth Jayachandran commented on HIVE-18281:
--

[~ewohlstadter] Thanks for pointing that out. That was an oversight. Fixed it.

> HiveServer2 HA for LLAP and Workload Manager
> 
>
> Key: HIVE-18281
> URL: https://issues.apache.org/jira/browse/HIVE-18281
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18281.WIP.patch, HSI-HA.pdf
>
>
> When running HS2 with LLAP and Workload Manager, HS2 becomes a single point 
> of failure, as some of the state for workload management and scheduling is 
> maintained in-memory. 
> The proposal is to support an Active/Passive mode of high availability in 
> which all HS2 instances and Tez AMs register with ZooKeeper, and a leader is 
> chosen to maintain the stateful information. Clients using service discovery 
> will always connect to the leader for submitting queries. The leader will 
> also have some additional responsibilities: failover handling, Tez session 
> reconnect, etc. Will upload some more detailed information in a separate doc. 
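>
> A minimal sketch of the leader-election pattern described, using Apache 
> Curator's LeaderLatch recipe (the ZK quorum, latch path, and instance id are 
> placeholders; Hive's actual implementation may differ):
> {code:java}
> import org.apache.curator.framework.CuratorFramework;
> import org.apache.curator.framework.CuratorFrameworkFactory;
> import org.apache.curator.framework.recipes.leader.LeaderLatch;
> import org.apache.curator.retry.ExponentialBackoffRetry;
> 
> public class Hs2LeaderElection {
>   public static void main(String[] args) throws Exception {
>     CuratorFramework client = CuratorFrameworkFactory.newClient(
>         "zk1:2181", new ExponentialBackoffRetry(1000, 3));
>     client.start();
> 
>     // Every HS2 instance registers a latch on the same path; ZooKeeper
>     // elects exactly one leader, and a follower takes over on failover.
>     try (LeaderLatch latch = new LeaderLatch(client, "/hs2/leader", "hs2-instance-1")) {
>       latch.start();
>       latch.await();  // blocks until this instance becomes the leader
>       // Leader-only duties: workload management state, scheduling, etc.
>     }
>   }
> }
> {code}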



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18281) HiveServer2 HA for LLAP and Workload Manager

2018-01-30 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-18281:
-
Issue Type: New Feature  (was: Bug)

> HiveServer2 HA for LLAP and Workload Manager
> 
>
> Key: HIVE-18281
> URL: https://issues.apache.org/jira/browse/HIVE-18281
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18281.WIP.patch, HSI-HA.pdf
>
>
> When running HS2 with LLAP and Workload Manager, HS2 becomes a single point 
> of failure, as some of the state for workload management and scheduling is 
> maintained in-memory. 
> The proposal is to support an Active/Passive mode of high availability in 
> which all HS2 instances and Tez AMs register with ZooKeeper, and a leader is 
> chosen to maintain the stateful information. Clients using service discovery 
> will always connect to the leader for submitting queries. The leader will 
> also have some additional responsibilities: failover handling, Tez session 
> reconnect, etc. Will upload some more detailed information in a separate doc. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345824#comment-16345824
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 06.patch with the below changes.

1. Support a ValidWriteIdList config string covering multiple tables. The 
format needs to include the table name and a separator, along the lines of 
<tableName>:hwm:minWriteId:open1,open2:abort1,abort2$<tableName2>:hwm... (a 
parsing sketch follows the list below).

2. Pass the appropriate ValidWriteIdList string to ORCInputFormat without 
needing to get the table name there. (As per Gopal's suggestion)

 

Pending changes for this feature:
 # Cleaner for TXNS_TO_WRITE_ID table entries. Also, need to maintain the LWM 
for each table's write id.
 # Update classes to use WriteID instead of TxnId in method and variable names.
 # Scripts to add metastore tables for write id management. Add them for the 
other non-Derby databases too.
 # Remove entries from the TXNS_TO_WRITE_ID table when a table/database is 
dropped. Also, split db and table names into 2 columns.
 # CompactionTxnHandler should clean up COMPLETED_TXN_COMPONENTS based on the 
highest write id instead of the highest txn id.
 # Non-ACID to ACID conversion through the alter table transaction property.
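
A sketch of parsing the multi-table layout above (illustrative only; the 
actual patch's field handling may differ):
{code:java}
import java.util.HashMap;
import java.util.Map;

final class MultiTableWriteIdParser {
  // Split "<table>:hwm:minWriteId:open1,open2:abort1,abort2$<table2>:..." on
  // the '$' separator and key each per-table descriptor by its table name.
  static Map<String, String> splitByTable(String validWriteIdListStr) {
    Map<String, String> perTable = new HashMap<>();
    for (String entry : validWriteIdListStr.split("\\$")) {
      int firstColon = entry.indexOf(':');
      perTable.put(entry.substring(0, firstColon), entry.substring(firstColon + 1));
    }
    return perTable;
  }
}
{code}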

 

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch
>
>
> To support ACID replication, we will be introducing a per-table write id 
> which will replace the transaction id in the primary key for each row in an 
> ACID table.
> The current primary key is determined via <originalTxnId, bucketId, rowId>, 
> which will move to <writeId, bucketId, rowId>.
> Each table modified by a given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id has to 
> be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18192:

Status: Patch Available  (was: Open)

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch
>
>
> To support ACID replication, we will be introducing a per-table write id 
> which will replace the transaction id in the primary key for each row in an 
> ACID table.
> The current primary key is determined via <originalTxnId, bucketId, rowId>, 
> which will move to <writeId, bucketId, rowId>.
> Each table modified by a given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id has to 
> be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18516) load data should rename files consistent with insert statements for ACID Tables

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345822#comment-16345822
 ] 

Hive QA commented on HIVE-18516:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
36s{color} | {color:red} ql: The patch generated 3 new + 245 unchanged - 4 
fixed = 248 total (was 249) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 9a9f7de |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8933/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8933/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> load data should rename files consistent with insert statements for ACID 
> Tables
> ---
>
> Key: HIVE-18516
> URL: https://issues.apache.org/jira/browse/HIVE-18516
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18516.1.patch, HIVE-18516.2.patch, 
> HIVE-18516.3.patch, HIVE-18516.4.patch, HIVE-18516.5.patch
>
>
> h1. load data should rename files consistent with insert statements for ACID 
> Tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18192:

Attachment: HIVE-18192.06.patch

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch
>
>
> To support ACID replication, we will be introducing a per-table write id 
> which will replace the transaction id in the primary key for each row in an 
> ACID table.
> The current primary key is determined via <originalTxnId, bucketId, rowId>, 
> which will move to <writeId, bucketId, rowId>.
> Each table modified by a given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id has to 
> be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18192:

Status: Open  (was: Patch Available)

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more detials



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345812#comment-16345812
 ] 

Ashutosh Chauhan commented on HIVE-18585:
-

{code}

2018-01-29T23:22:43,564 ERROR [46b8c1ed-6d26-449b-a7a1-a7d6fe0b9afe 
HiveServer2-Handler-Pool: Thread-91]: parse.CalcitePlanner (:()) - CBO failed, 
skipping CBO. 
java.lang.NullPointerException: null
 at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:212) 
~[guava-19.0.jar:?]
 at org.apache.calcite.rex.RexCall.<init>(RexCall.java:59) 
~[calcite-core-1.15.0.jar:1.15.0]
 at org.apache.calcite.rex.RexBuilder.makeCall(RexBuilder.java:249) 
~[calcite-core-1.15.0.jar:1.15.0]
 at 
org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.convert(RexNodeConverter.java:340)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.convert(RexNodeConverter.java:173)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.convert(RexNodeConverter.java:316)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.convert(RexNodeConverter.java:173)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.convert(RexNodeConverter.java:316)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.convert(RexNodeConverter.java:173)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genGBRelNode(CalcitePlanner.java:3032)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genGBLogicalPlan(CalcitePlanner.java:3384)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:4508)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1433)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1380)
 ~[hive-exec-3.0.0.3.0.0.0-776.jar:3.0.0.3.0.0.0-776]
 at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
~[calcite-core-1.15.0.jar:1.15.0]
 at 
org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1052)
 ~[calcite-core-1.15.0.jar:1.15.0]
 at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
~[calcite-core-1.15.0.jar:1.15.0]
 at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
~[calcite-core-1.15.0.jar:1.15.0]
 at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1185)

{code}

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.patch
>
>
> e.g., Calcite considers date and varchar incompatible types in its CASE type 
> system, while Hive doesn't.
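
The NPE in the stack trace above matches this: when type inference yields 
null for operand types it deems incompatible, RexCall's constructor fails its 
not-null check. A self-contained sketch of the failure mode and the proposed 
direction, where every name is an illustrative stand-in rather than a real 
Hive or Calcite API:

{code:java}
import java.util.Objects;

public class ReturnTypeSketch {
  // Stand-in for Calcite-style inference: null means "no common type".
  static String calciteInfer(String lhs, String rhs) {
    return lhs.equals(rhs) ? lhs : null;
  }

  // Stand-in for Hive-style inference, which is more permissive and falls
  // back to a common type instead of giving up.
  static String hiveInfer(String lhs, String rhs) {
    return lhs.equals(rhs) ? lhs : "string";
  }

  public static void main(String[] args) {
    String calciteType = calciteInfer("date", "varchar");
    // The proposed direction: consult Hive's rules rather than letting a
    // null type reach a checkNotNull-style guard (the NPE above).
    String returnType = (calciteType != null)
        ? calciteType : hiveInfer("date", "varchar");
    Objects.requireNonNull(returnType, "no return type could be inferred");
    System.out.println("inferred return type: " + returnType);
  }
}
{code}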



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18577) SemanticAnalyzer.validate has some pointless metastore calls

2018-01-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345803#comment-16345803
 ] 

Sergey Shelukhin commented on HIVE-18577:
-

Failures that are not timeouts are definitely unrelated. I'll try to rerun the 
timed-out test locally; the logs only seem to show some move errors on 
localhost, so it is probably a test environment issue.

> SemanticAnalyzer.validate has some pointless metastore calls
> 
>
> Key: HIVE-18577
> URL: https://issues.apache.org/jira/browse/HIVE-18577
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18577.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18569) Hive Druid indexing not dealing with decimals in correct way.

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345789#comment-16345789
 ] 

Hive QA commented on HIVE-18569:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908366/HIVE-18569.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 12862 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testGetMetastoreUuid 
(batchId=208)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8932/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8932/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8932/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908366 - PreCommit-HIVE-Build

> Hive Druid indexing not dealing with decimals in correct way.
> -
>
> Key: HIVE-18569
> URL: https://issues.apache.org/jira/browse/HIVE-18569
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18569.1.patch, HIVE-18569.patch
>
>
> Currently, a decimal column is indexed as a double in Druid.
> This should not happen; either the user has to add an explicit cast, or we 
> can add a flag to enable the approximation.
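
A small, runnable illustration of why indexing DECIMAL as DOUBLE is only an 
approximation (plain JDK, no Druid involved):

{code:java}
import java.math.BigDecimal;

// Many exact decimal values have no exact binary-double representation,
// so a silent decimal -> double conversion changes the stored value.
public class DecimalToDoubleDemo {
  public static void main(String[] args) {
    BigDecimal exact = new BigDecimal("0.1");
    double approx = exact.doubleValue();
    // Prints the value actually stored as a double:
    // 0.1000000000000000055511151231257827021181583404541015625
    System.out.println(new BigDecimal(approx).toPlainString());
    // Round-tripping does not recover the original value; prints false.
    System.out.println(exact.compareTo(new BigDecimal(approx)) == 0);
  }
}
{code}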



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-4243) Fix column names in FileSinkOperator

2018-01-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345790#comment-16345790
 ] 

Sergey Shelukhin commented on HIVE-4243:


[~owen.omalley] is there a reason proto on branch-2/master says "only in Hive 
2"? 
I.e. is there a reason why this cannot be backported to branch-1 if needed?

> Fix column names in FileSinkOperator
> 
>
> Key: HIVE-4243
> URL: https://issues.apache.org/jira/browse/HIVE-4243
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HIVE-4243.patch, HIVE-4243.patch, HIVE-4243.patch, 
> HIVE-4243.patch, HIVE-4243.patch, HIVE-4243.patch, HIVE-4243.tmp.patch
>
>
> All of the ObjectInspectors given to SerDes by FileSinkOperator have virtual 
> column names. Since the files are part of tables, Hive knows the real column 
> names. For self-describing file formats like ORC, writing the real column 
> names makes the files much easier to understand.
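
A sketch of the difference using Hive's standard ObjectInspector factory; the 
column names and types below are illustrative, and this is not the 
FileSinkOperator change itself:

{code:java}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

// The same struct OI built with virtual names ("_col0", "_col1") versus the
// table's real column names; a self-describing format like ORC persists
// whichever names the OI carries.
public class ColumnNameSketch {
  public static void main(String[] args) {
    List<ObjectInspector> fieldOIs = Arrays.asList(
        PrimitiveObjectInspectorFactory.javaStringObjectInspector,
        PrimitiveObjectInspectorFactory.javaIntObjectInspector);

    StructObjectInspector virtual = ObjectInspectorFactory
        .getStandardStructObjectInspector(Arrays.asList("_col0", "_col1"), fieldOIs);
    StructObjectInspector real = ObjectInspectorFactory
        .getStandardStructObjectInspector(Arrays.asList("name", "age"), fieldOIs);

    System.out.println(virtual.getTypeName()); // e.g. struct<_col0:string,_col1:int>
    System.out.println(real.getTypeName());    // e.g. struct<name:string,age:int>
  }
}
{code}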



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18586) Upgrade Derby to 10.14.1.0

2018-01-30 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-18586:
---
Attachment: HIVE-18586.1.patch

> Upgrade Derby to 10.14.1.0
> --
>
> Key: HIVE-18586
> URL: https://issues.apache.org/jira/browse/HIVE-18586
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-18586.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18586) Upgrade Derby to 10.14.1.0

2018-01-30 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-18586:
---
Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

> Upgrade Derby to 10.14.1.0
> --
>
> Key: HIVE-18586
> URL: https://issues.apache.org/jira/browse/HIVE-18586
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-18586.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18281) HiveServer2 HA for LLAP and Workload Manager

2018-01-30 Thread Eric Wohlstadter (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345753#comment-16345753
 ] 

Eric Wohlstadter commented on HIVE-18281:
-

[~prasanth_j]

Minor:

 Why is this a Bug? It seems like a Feature to me.

> HiveServer2 HA for LLAP and Workload Manager
> 
>
> Key: HIVE-18281
> URL: https://issues.apache.org/jira/browse/HIVE-18281
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18281.WIP.patch, HSI-HA.pdf
>
>
> When running HS2 with LLAP and Workload Manager, HS2 becomes a single point 
> of failure, as some of the state for workload management and scheduling is 
> maintained in-memory. 
> The proposal is to support an Active/Passive mode of high availability in 
> which all HS2 instances and Tez AMs register with ZooKeeper, and a leader is 
> chosen to maintain the stateful information. Clients using service discovery 
> will always connect to the leader for submitting queries. The leader will 
> also have additional responsibilities: failover handling, Tez session 
> reconnect, etc. Will upload some more detailed information in a separate doc. 
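
A minimal sketch of the Active/Passive election using Apache Curator's 
LeaderLatch; the ZooKeeper connect string and latch path are placeholders, 
and the actual HS2 responsibilities (WM state, Tez session registry, 
failover) are elided:

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class Hs2LeaderSketch {
  public static void main(String[] args) throws Exception {
    CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "zk1:2181,zk2:2181", new ExponentialBackoffRetry(1000, 3));
    zk.start();
    try (LeaderLatch latch = new LeaderLatch(zk, "/hiveserver2-leader")) {
      latch.start();
      latch.await(); // blocks until this instance becomes the leader
      System.out.println("This HS2 instance is now the active leader.");
      // ... serve queries and own the stateful WM/scheduling data; on loss
      // of leadership, a standby instance takes over ...
    } finally {
      zk.close();
    }
  }
}
{code}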



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18582) MSCK REPAIR TABLE Throw MetaException

2018-01-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345757#comment-16345757
 ] 

Sergey Shelukhin commented on HIVE-18582:
-

Throwing for a missing partition directory in throw mode is by design.
However, the change itself makes sense to me: vals should not really survive 
between partitions, so its scope should be reduced.
Patches welcome ;)
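
A sketch of that scope reduction (not a committed patch): declare vals inside 
the loop so values cannot leak across partitions:

{code:java}
// Hypothetical rearrangement of the DDLTask.msck loop quoted below.
Iterator<CheckResult.PartitionResult> iter = partsNotInMs.iterator();
while (iter.hasNext()) {
  CheckResult.PartitionResult part = iter.next();
  AbstractList<String> vals; // scoped to a single partition
  try {
    // Passing null makes makeValsFromName allocate a fresh list each time.
    vals = Warehouse.makeValsFromName(part.getPartitionName(), null);
  } catch (MetaException ex) {
    throw new HiveException(ex);
  }
  // ... validate each val exactly as before ...
}
{code}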

>  MSCK REPAIR TABLE Throw MetaException
> --
>
> Key: HIVE-18582
> URL: https://issues.apache.org/jira/browse/HIVE-18582
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.1.1
>Reporter: liubangchen
>Priority: Major
>
> while executing query MSCK REPAIR TABLE tablename I got Exception:
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Expected 1 components, got 2 
> (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1847)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:402)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> --
> Caused by: MetaException(message:Expected 1 components, got 2 
> (log_date=2015121309/vgameid=lyjt))
> at 
> org.apache.hadoop.hive.metastore.Warehouse.makeValsFromName(Warehouse.java:385)
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1845)
> {code}
> table PARTITIONED by (log_date,vgameid)
> The data file on HDFS is:
>  
> {code:java}
> /usr/hive/warehouse/a.db/tablename/log_date=2015063023
> drwxr-xr-x - root supergroup 0 2018-01-26 09:41 
> /usr/hive/warehouse/a.db/tablename/log_date=2015121309/vgameid=lyjt
> {code}
> The subdirectory log_date=2015063023 is empty.
> If I set hive.msck.path.validation=ignore, then MSCK REPAIR TABLE executes 
> OK.
> Then I found code like this:
> {code:java}
> private int msck(Hive db, MsckDesc msckDesc) {
>   CheckResult result = new CheckResult();
> List<String> repairOutput = new ArrayList<String>();
>   try {
> HiveMetaStoreChecker checker = new HiveMetaStoreChecker(db);
> String[] names = Utilities.getDbTableName(msckDesc.getTableName());
> checker.checkMetastore(names[0], names[1], msckDesc.getPartSpecs(), 
> result);
> List<CheckResult.PartitionResult> partsNotInMs = 
> result.getPartitionsNotInMs();
> if (msckDesc.isRepairPartitions() && !partsNotInMs.isEmpty()) {
>  // I think the bug is here
>   AbstractList<String> vals = null;
>   String settingStr = HiveConf.getVar(conf, 
> HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION);
>   boolean doValidate = !("ignore".equals(settingStr));
>   boolean doSkip = doValidate && "skip".equals(settingStr);
>   // The default setting is "throw"; assume doValidate && !doSkip means 
> throw.
>   if (doValidate) {
> // Validate that we can add partition without escaping. Escaping was 
> originally intended
> // to avoid creating invalid HDFS paths; however, if we escape the 
> HDFS path (that we
> // deem invalid but HDFS actually supports - it is possible to create 
> HDFS paths with
> // unprintable characters like ASCII 7), metastore will create 
> another directory instead
> // of the one we are trying to "repair" here.
> Iterator<CheckResult.PartitionResult> iter = partsNotInMs.iterator();
> while (iter.hasNext()) {
>   CheckResult.PartitionResult part = iter.next();
>   try {
> vals = Warehouse.makeValsFromName(part.getPartitionName(), vals);
>   } catch (MetaException ex) {
> throw new HiveException(ex);
>   }
>   for (String val : vals) {
> String escapedPath = FileUtils.escapePathName(val);
> assert escapedPath != null;
> if (escapedPath.equals(val)) continue;
> String errorMsg = "Repair: Cannot add partition " + 
> msckDesc.getTableName()
> + ':' + part.getPartitionName() + " due to invalid characters 
> in the name";
> if (doSkip) {
>   repairOutput.add(errorMsg);
>   iter.remove();
> } else {
>   throw new HiveException(errorMsg);
> }
>   }
> }
>   }
> {code}
> I think moving  AbstractList<String> vals = null;  to after  "while 
> (iter.hasNext()) {"  will make it work OK.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18569) Hive Druid indexing not dealing with decimals in correct way.

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345707#comment-16345707
 ] 

Hive QA commented on HIVE-18569:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} common: The patch generated 1 new + 424 unchanged - 0 
fixed = 425 total (was 424) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} druid-handler: The patch generated 8 new + 36 
unchanged - 1 fixed = 44 total (was 37) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 9a9f7de |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8932/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8932/yetus/diff-checkstyle-druid-handler.txt
 |
| modules | C: common ql druid-handler U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8932/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive Druid indexing not dealing with decimals in correct way.
> -
>
> Key: HIVE-18569
> URL: https://issues.apache.org/jira/browse/HIVE-18569
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18569.1.patch, HIVE-18569.patch
>
>
> Currently, a decimal column is indexed as double in druid.
> This should not happen and either the user has to add an explicit cast or we 
> can add a flag to enable approximation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18586) Upgrade Derby to 10.14.1.0

2018-01-30 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani reassigned HIVE-18586:
--


> Upgrade Derby to 10.14.1.0
> --
>
> Key: HIVE-18586
> URL: https://issues.apache.org/jira/browse/HIVE-18586
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18542) Create tests to cover getTableMeta method

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345663#comment-16345663
 ] 

Hive QA commented on HIVE-18542:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908372/HIVE-18542.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 12851 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
 (batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testGetMetastoreUuid 
(batchId=208)
org.apache.hadoop.hive.metastore.client.TestGetListIndexes.testGetIndexEmptyTableName[Embedded]
 (batchId=206)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.TestBeeLineWithArgs.testEscapeCRLFInTSV2Output 
(batchId=231)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testTupleInBagInTupleInBag[1]
 (batchId=193)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8931/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8931/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8931/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908372 - PreCommit-HIVE-Build

> Create tests to cover getTableMeta method
> -
>
> Key: HIVE-18542
> URL: https://issues.apache.org/jira/browse/HIVE-18542
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-18542.0.patch, HIVE-18542.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17983) Make the standalone metastore generate tarballs etc.

2018-01-30 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-17983:
--
Attachment: HIVE-17983.3.patch

> Make the standalone metastore generate tarballs etc.
> 
>
> Key: HIVE-17983
> URL: https://issues.apache.org/jira/browse/HIVE-17983
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17983.2.patch, HIVE-17983.3.patch, HIVE-17983.patch
>
>
> In order to be separately installable, the standalone metastore needs its own 
> tarballs, startup scripts, etc.  All of the SQL installation and upgrade 
> scripts also need to move from metastore to standalone-metastore.
> I also plan to create Dockerfiles for different database types so that 
> developers can test the SQL installation and upgrade scripts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18585:

Status: Patch Available  (was: Open)

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.patch
>
>
> e.g., Calcite considers date and varchar incompatible types in its CASE type 
> system, while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18585:

Attachment: HIVE-18585.patch

> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18585.patch
>
>
> e.g., Calcite considers date and varchar incompatible types in its CASE type 
> system, while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18585) Return type for udfs should be determined using Hive inference rules instead of Calcite

2018-01-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-18585:
---


> Return type for udfs should be determined using Hive inference rules instead 
> of Calcite
> ---
>
> Key: HIVE-18585
> URL: https://issues.apache.org/jira/browse/HIVE-18585
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
>
> e.g., Calcite considers date and varchar incompatible types in its CASE type 
> system, while Hive doesn't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18516) load data should rename files consistent with insert statements for ACID Tables

2018-01-30 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18516:
--
Attachment: HIVE-18516.5.patch

> load data should rename files consistent with insert statements for ACID 
> Tables
> ---
>
> Key: HIVE-18516
> URL: https://issues.apache.org/jira/browse/HIVE-18516
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18516.1.patch, HIVE-18516.2.patch, 
> HIVE-18516.3.patch, HIVE-18516.4.patch, HIVE-18516.5.patch
>
>
> h1. load data should rename files consistent with insert statements for ACID 
> Tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18542) Create tests to cover getTableMeta method

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345587#comment-16345587
 ] 

Hive QA commented on HIVE-18542:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 9a9f7de |
| Default Java | 1.8.0_111 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8931/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create tests to cover getTableMeta method
> -
>
> Key: HIVE-18542
> URL: https://issues.apache.org/jira/browse/HIVE-18542
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-18542.0.patch, HIVE-18542.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345556#comment-16345556
 ] 

Hive QA commented on HIVE-18449:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908374/HIVE-18449.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 12862 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.TestAcidTableSetup.testTransactionalValidation 
(batchId=223)
org.apache.hadoop.hive.metastore.client.TestDropPartitions.testDropPartition[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.createTable (batchId=293)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8930/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8930/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8930/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 26 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908374 - PreCommit-HIVE-Build

> Add configurable policy for choosing the HMS URI from hive.metastore.uris
> -
>
> Key: HIVE-18449
> URL: https://issues.apache.org/jira/browse/HIVE-18449
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Sahil Takiar
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-18449.1.patch, HIVE-18449.2.patch
>
>
> HIVE-10815 added logic to randomly choose a HMS URI from 
> {{hive.metastore.uris}}. It would be nice if there was a configurable policy 
> that determined how a URI is chosen from this list - e.g. one option can be 
> to randomly pick a URI, another option can be to choose the first URI in the 
> list (which was the behavior prior to HIVE-10815).
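
A hypothetical sketch of such a policy; the enum values and class name below 
are illustrative, not what the actual patch uses:

{code:java}
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class MetastoreUriSelector {
  public enum Policy { RANDOM, SEQUENTIAL }

  public static String choose(List<String> uris, Policy policy) {
    switch (policy) {
      case RANDOM:     // post-HIVE-10815 behavior: pick any URI
        return uris.get(ThreadLocalRandom.current().nextInt(uris.size()));
      case SEQUENTIAL: // pre-HIVE-10815 behavior: always the first URI
      default:
        return uris.get(0);
    }
  }
}
{code}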



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17626) Query reoptimization using cached runtime statistics

2018-01-30 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-17626:
---

Assignee: Zoltan Haindrich

> Query reoptimization using cached runtime statistics
> 
>
> Key: HIVE-17626
> URL: https://issues.apache.org/jira/browse/HIVE-17626
> Project: Hive
>  Issue Type: New Feature
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: runtimestats.patch
>
>
> Something similar to "EXPLAIN ANALYZE" where we annotate explain plan with 
> actual and estimated statistics. The runtime stats can be cached at query 
> level and subsequent execution of the same query can make use of the cached 
> statistics from the previous run for better optimization. 
> Some use cases,
> 1) re-planning join query (mapjoin failures can be converted to shuffle joins)
> 2) better statistics for table scan operator if dynamic partition pruning is 
> involved
> 3) Better estimates for bloom filter initialization (setting expected entries 
> during merge)
> This can be extended to support wider queries by caching fragments of 
> operator plans that scan the same table(s) or match some operator sequences.
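
A toy model of the query-level cache described above; all names are 
illustrative:

{code:java}
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// After a query runs, record actual row counts per operator; when the same
// query is re-planned, the optimizer can prefer these actuals over its
// estimates (e.g. to demote a failed map join to a shuffle join).
public class RuntimeStatsCache {
  private final Map<String, Map<String, Long>> actualRowsByQuery =
      new ConcurrentHashMap<>();

  public void record(String queryFingerprint, String operatorId, long rows) {
    actualRowsByQuery
        .computeIfAbsent(queryFingerprint, q -> new ConcurrentHashMap<>())
        .put(operatorId, rows);
  }

  public Optional<Long> lookup(String queryFingerprint, String operatorId) {
    return Optional.ofNullable(actualRowsByQuery.get(queryFingerprint))
        .map(m -> m.get(operatorId));
  }
}
{code}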



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris

2018-01-30 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345494#comment-16345494
 ] 

Vihang Karajgaonkar commented on HIVE-18449:


+1 LGTM

> Add configurable policy for choosing the HMS URI from hive.metastore.uris
> -
>
> Key: HIVE-18449
> URL: https://issues.apache.org/jira/browse/HIVE-18449
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Sahil Takiar
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-18449.1.patch, HIVE-18449.2.patch
>
>
> HIVE-10815 added logic to randomly choose a HMS URI from 
> {{hive.metastore.uris}}. It would be nice if there was a configurable policy 
> that determined how a URI is chosen from this list - e.g. one option can be 
> to randomly pick a URI, another option can be to choose the first URI in the 
> list (which was the behavior prior to HIVE-10815).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18582) MSCK REPAIR TABLE Throw MetaException

2018-01-30 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345491#comment-16345491
 ] 

Vihang Karajgaonkar commented on HIVE-18582:


This is interesting. It looks like when we set {{hive.msck.path.validation}} to 
{{skip}} or {{throw}}, msck will throw an exception when there are empty 
partition directories. Moving "AbstractList<String> vals = null" after "while 
(iter.hasNext())" as suggested above may not help, since if vals is null, 
{{Warehouse.makeValsFromName}} will initialize it.

This behavior may be by design, although I am not 100% sure. [~sershe] Do you 
know if this is intended behavior or a bug? Based on the description of the 
config, it looks like it should only check for "invalid" characters in 
partition names, but it looks like it is throwing for empty partitions as well.

>  MSCK REPAIR TABLE Throw MetaException
> --
>
> Key: HIVE-18582
> URL: https://issues.apache.org/jira/browse/HIVE-18582
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.1.1
>Reporter: liubangchen
>Priority: Major
>
> while executing query MSCK REPAIR TABLE tablename I got Exception:
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Expected 1 components, got 2 
> (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1847)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:402)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> --
> Caused by: MetaException(message:Expected 1 components, got 2 
> (log_date=2015121309/vgameid=lyjt))
> at 
> org.apache.hadoop.hive.metastore.Warehouse.makeValsFromName(Warehouse.java:385)
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1845)
> {code}
> table PARTITIONED by (log_date,vgameid)
> The data file on HDFS is:
>  
> {code:java}
> /usr/hive/warehouse/a.db/tablename/log_date=2015063023
> drwxr-xr-x - root supergroup 0 2018-01-26 09:41 
> /usr/hive/warehouse/a.db/tablename/log_date=2015121309/vgameid=lyjt
> {code}
> The subdirectory log_date=2015063023 is empty.
> If I set hive.msck.path.validation=ignore, then MSCK REPAIR TABLE executes 
> OK.
> Then I found code like this:
> {code:java}
> private int msck(Hive db, MsckDesc msckDesc) {
>   CheckResult result = new CheckResult();
> List<String> repairOutput = new ArrayList<String>();
>   try {
> HiveMetaStoreChecker checker = new HiveMetaStoreChecker(db);
> String[] names = Utilities.getDbTableName(msckDesc.getTableName());
> checker.checkMetastore(names[0], names[1], msckDesc.getPartSpecs(), 
> result);
> List<CheckResult.PartitionResult> partsNotInMs = 
> result.getPartitionsNotInMs();
> if (msckDesc.isRepairPartitions() && !partsNotInMs.isEmpty()) {
>  // I think the bug is here
>   AbstractList<String> vals = null;
>   String settingStr = HiveConf.getVar(conf, 
> HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION);
>   boolean doValidate = !("ignore".equals(settingStr));
>   boolean doSkip = doValidate && "skip".equals(settingStr);
>   // The default setting is "throw"; assume doValidate && !doSkip means 
> throw.
>   if (doValidate) {
> // Validate that we can add partition without escaping. Escaping was 
> originally intended
> // to avoid creating invalid HDFS paths; however, if we escape the 
> HDFS path (that we
> // deem invalid but HDFS actually supports - it is possible to create 
> HDFS paths with
> // unprintable characters like ASCII 7), metastore will create 
> another directory instead
> // of the one we are trying to "repair" here.
> Iterator<CheckResult.PartitionResult> iter = partsNotInMs.iterator();
> while (iter.hasNext()) {
>   CheckResult.PartitionResult part = iter.next();
>   try {
> vals = Warehouse.makeValsFromName(part.getPartitionName(), vals);
>   } catch (MetaException ex) {
> throw new HiveException(ex);
>   }
>   for (String val : vals) {
> String escapedPath = FileUtils.escapePathName(val);
> assert escapedPath != null;
> if (escapedPath.equals(val)) continue;
> String errorMsg = "Repair: Cannot add partition " + 
> msckDesc.getTableName()
> + ':' + part.getPartitionName() + " due to 

[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345471#comment-16345471
 ] 

Hive QA commented on HIVE-18449:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 9a9f7de |
| Default Java | 1.8.0_111 |
| modules | C: common standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8930/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add configurable policy for choosing the HMS URI from hive.metastore.uris
> -
>
> Key: HIVE-18449
> URL: https://issues.apache.org/jira/browse/HIVE-18449
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Sahil Takiar
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-18449.1.patch, HIVE-18449.2.patch
>
>
> HIVE-10815 added logic to randomly choose a HMS URI from 
> {{hive.metastore.uris}}. It would be nice if there was a configurable policy 
> that determined how a URI is chosen from this list - e.g. one option can be 
> to randomly pick a URI, another option can be to choose the first URI in the 
> list (which was the behavior prior to HIVE-10815).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

