[jira] [Updated] (HIVE-24650) hiveserver2 memory usage is extremely high, GC unable to recycle

2021-01-17 Thread zhaojk (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaojk updated HIVE-24650:
--
Description: 
HDP's HiveServer2 is using 80 GB of memory (the heap is configured at 74 GB). 
When the memory is full there are frequent Full GCs, but the memory cannot be 
reclaimed, resulting in a service exception. We analyzed the memory usage.

GC config:

export HADOOP_OPTS="$HADOOP_OPTS 
-Xloggc:\{{hive_log_dir}}/hiveserver2-gc-%t.log -XX:ConcGCThreads=30 
-XX:ParallelGCThreads=30 -XX:+UseG1GC -XX:G1HeapRegionSize=8M 
-XX:+UseStringDeduplication -XX:MaxGCPauseMillis=1000 
-XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=15 
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCCause 
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M 
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/hive/hs2_heapdump.hprof 
-Dhive.log.dir=\{{hive_log_dir}} -Dhive.log.file=hiveserver2.log"

The details at

[https://blog.csdn.net/Small_codeing/article/details/112601226]
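As a cross-check on the reported symptom, heap pressure of this kind can also be observed from inside the JVM via the standard `MemoryMXBean`, which exposes the same used/max numbers the GC log reports. This is an illustrative sketch only (the class name is hypothetical, not part of Hive):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapProbe {
    // Ratio of used heap to the configured maximum (0.0 .. 1.0).
    // A ratio persistently near 1.0 together with frequent Full GCs
    // means the heap really is exhausted, matching the symptom above.
    static double heapUsedRatio() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return (double) heap.getUsed() / heap.getMax();
    }

    public static void main(String[] args) {
        System.out.printf("heap used: %.1f%%%n", 100 * heapUsedRatio());
    }
}
```

Sampling this periodically (or via JMX remotely) distinguishes a genuinely full heap from one that merely looks full between collections.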

  was:
HDP's HiveServer2 is using 80GB of memory (HEAP is configured with 74GB), and 
when the memory is full, there will be frequent Full GC, and then the memory 
cannot be recycled, resulting in a service exception.Analyze memory usage.The 
details at

https://blog.csdn.net/Small_codeing/article/details/112601226


> hiveserver2 memory usage is extremely high, GC unable to recycle
> 
>
> Key: HIVE-24650
> URL: https://issues.apache.org/jira/browse/HIVE-24650
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
> Environment: hive3.1.0
>Reporter: zhaojk
>Priority: Major
> Attachments: 1.png, 2.png, 3.png
>
>
> HDP's HiveServer2 is using 80 GB of memory (the heap is configured at 74 GB). 
> When the memory is full there are frequent Full GCs, but the memory cannot be 
> reclaimed, resulting in a service exception. We analyzed the memory usage.
> GC config:
> export HADOOP_OPTS="$HADOOP_OPTS 
> -Xloggc:\{{hive_log_dir}}/hiveserver2-gc-%t.log -XX:ConcGCThreads=30 
> -XX:ParallelGCThreads=30 -XX:+UseG1GC -XX:G1HeapRegionSize=8M 
> -XX:+UseStringDeduplication -XX:MaxGCPauseMillis=1000 
> -XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=15 
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCCause 
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M 
> -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/home/hive/hs2_heapdump.hprof 
> -Dhive.log.dir=\{{hive_log_dir}} -Dhive.log.file=hiveserver2.log"
> The details at
> [https://blog.csdn.net/Small_codeing/article/details/112601226]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-24486) Enhance operator merge logic to also consider going thru RS operators

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24486?focusedWorklogId=537239&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537239
 ]

ASF GitHub Bot logged work on HIVE-24486:
-

Author: ASF GitHub Bot
Created on: 18/Jan/21 06:14
Start Date: 18/Jan/21 06:14
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #1840:
URL: https://github.com/apache/hive/pull/1840#discussion_r557526970



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/ParallelEdgeFixer.java
##
@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.optimizer;
+
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map.Entry;
+import org.apache.calcite.util.Pair;
+import org.apache.commons.collections4.ListValuedMap;
+import org.apache.commons.collections4.multimap.ArrayListValuedHashMap;
+import org.apache.hadoop.hive.ql.exec.ColumnInfo;
+import org.apache.hadoop.hive.ql.exec.MapJoinOperator;
+import org.apache.hadoop.hive.ql.exec.Operator;
+import org.apache.hadoop.hive.ql.exec.OperatorFactory;
+import org.apache.hadoop.hive.ql.exec.RowSchema;
+import org.apache.hadoop.hive.ql.exec.TableScanOperator;
+import org.apache.hadoop.hive.ql.optimizer.graph.OperatorGraph;
+import org.apache.hadoop.hive.ql.optimizer.graph.OperatorGraph.Cluster;
+import org.apache.hadoop.hive.ql.parse.ParseContext;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+import org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc;
+import org.apache.hadoop.hive.ql.plan.ExprNodeConstantDesc;
+import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;
+import org.apache.hadoop.hive.ql.plan.OperatorDesc;
+import org.apache.hadoop.hive.ql.plan.ReduceSinkDesc;
+import org.apache.hadoop.hive.ql.plan.SelectDesc;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.Lists;
+
+/**
+ * Inserts an extrea RS to avoid parallel edges.

Review comment:
   typo. `extrea` -> `extra`

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/ParallelEdgeFixer.java
##
@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.optimizer;
+
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map.Entry;
+import org.apache.calcite.util.Pair;
+import org.apache.commons.collections4.ListValuedMap;
+import org.apache.commons.collections4.multimap.ArrayListValuedHashMap;
+import org.apache.hadoop.hive.ql.exec.ColumnInfo;
+import org.apache.hadoop.hive.ql.exec.MapJoinOperator;
+import org.apache.hadoop.hive.ql.exec.Operator;
+import org.apache.hadoop.hive.ql.exec.OperatorFactory;
+import org.apache.hadoop.hive.ql.exec.RowSchema;
+import org.apache.hadoop.hive.ql.exec.TableScanOperator;
+import org.apache.hadoop.hive.ql.optimizer.graph.OperatorGraph;
+import org.apache.hadoop.hive.ql.optimizer.graph.OperatorGraph.Cluster;
+import org.apache.hadoop.hive.ql.parse.ParseContext;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+import org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc;
+import org.apache.hadoop.hive.ql.plan.ExprNodeConstantDesc;
+import 

[jira] [Updated] (HIVE-24624) Repl Load should detect the compatible staging dir

2021-01-17 Thread Pratyushotpal Madhukar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pratyushotpal Madhukar updated HIVE-24624:
--
Attachment: HIVE-24624.patch

> Repl Load should detect the compatible staging dir
> --
>
> Key: HIVE-24624
> URL: https://issues.apache.org/jira/browse/HIVE-24624
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pratyushotpal Madhukar
>Assignee: Pratyushotpal Madhukar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-24624.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Repl Load in CDP, when pointed to a staging dir, should be able to detect 
> whether the staging dir has the dump structure in a compatible format or not.



--


[jira] [Updated] (HIVE-24624) Repl Load should detect the compatible staging dir

2021-01-17 Thread Pratyushotpal Madhukar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pratyushotpal Madhukar updated HIVE-24624:
--
Attachment: (was: HIVE-24624.patch)

> Repl Load should detect the compatible staging dir
> --
>
> Key: HIVE-24624
> URL: https://issues.apache.org/jira/browse/HIVE-24624
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pratyushotpal Madhukar
>Assignee: Pratyushotpal Madhukar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-24624.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Repl Load in CDP, when pointed to a staging dir, should be able to detect 
> whether the staging dir has the dump structure in a compatible format or not.



--


[jira] [Commented] (HIVE-24649) Optimise Hive::addWriteNotificationLog for large data inserts

2021-01-17 Thread Anishek Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17267006#comment-17267006
 ] 

Anishek Agarwal commented on HIVE-24649:


The partitions add additional data (catalog name, etc.) in the HMS call that is 
not present on the HS2 side, and other things may be added internally later, 
which is difficult to predict. At most I think we can probably prevent 
reloading of the table object, but this also crosses the HS2/HMS boundary; it 
would be better if metadata caching were enabled on HMS so the round trip to 
the RDBMS stays small. Another possible way is to return the list of partitions 
from {{addPartitionsToMetastore}}, though that is a lot of network round trips 
from HMS to HS2 and back to HMS in addWriteNotificationLog.

cc [~aasha]/[~pkumarsinha]

> Optimise Hive::addWriteNotificationLog for large data inserts
> -
>
> Key: HIVE-24649
> URL: https://issues.apache.org/jira/browse/HIVE-24649
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Rajesh Balamohan
>Priority: Major
>  Labels: performance
>
> When loading a dynamic partition with a large dataset, it spends a lot of 
> time in "Hive::loadDynamicPartitions --> addWriteNotificationLog".
> Even though it is the same table, it ends up loading table and partition 
> details for every partition and writes to the notification log.
> Also, "Partition" details may already be present in the {{PartitionDetails}} 
> object in {{Hive::loadDynamicPartitions}}. This is unnecessarily recomputed 
> in {{HiveMetaStore::add_write_notification_log}}.
>  
> Lines of interest:
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L3028
> https://github.com/apache/hive/blob/89073a94354f0cc14ec4ae0a43e05aae29276b4d/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L8500
>  



--


[jira] [Work logged] (HIVE-24313) Optimise stats collection for file sizes on cloud storage

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24313?focusedWorklogId=537200&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537200
 ]

ASF GitHub Bot logged work on HIVE-24313:
-

Author: ASF GitHub Bot
Created on: 18/Jan/21 01:30
Start Date: 18/Jan/21 01:30
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on pull request #1636:
URL: https://github.com/apache/hive/pull/1636#issuecomment-761924352


   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 537200)
Time Spent: 40m  (was: 0.5h)

> Optimise stats collection for file sizes on cloud storage
> -
>
> Key: HIVE-24313
> URL: https://issues.apache.org/jira/browse/HIVE-24313
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Rajesh Balamohan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When stats information is not present (e.g. external tables), RelOptHiveTable 
> computes basic stats at runtime.
> Following is the codepath.
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/RelOptHiveTable.java#L598]
> {code:java}
> Statistics stats = StatsUtils.collectStatistics(hiveConf, partitionList,
> hiveTblMetadata, hiveNonPartitionCols, 
> nonPartColNamesThatRqrStats, colStatsCached,
> nonPartColNamesThatRqrStats, true);
>  {code}
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L322]
> {code:java}
> for (Partition p : partList.getNotDeniedPartns()) {
> BasicStats basicStats = 
> basicStatsFactory.build(Partish.buildFor(table, p));
> partStats.add(basicStats);
>   }
>  {code}
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/stats/BasicStats.java#L205]
>  
> {code:java}
> try {
> ds = getFileSizeForPath(path);
>   } catch (IOException e) {
> ds = 0L;
>   }
>  {code}
>  
> For a table and query with a large number of partitions, this takes a long 
> time to compute statistics and increases compilation time. It would be good 
> to fix it with a "ForkJoinPool" ( 
> partList.getNotDeniedPartns().parallelStream().forEach((p) )
>  
>  



--


[jira] [Work logged] (HIVE-24366) changeMarker value sent to atlas export API is set to 0 in the 2nd repl dump call

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24366?focusedWorklogId=537201&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537201
 ]

ASF GitHub Bot logged work on HIVE-24366:
-

Author: ASF GitHub Bot
Created on: 18/Jan/21 01:30
Start Date: 18/Jan/21 01:30
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] closed pull request #1659:
URL: https://github.com/apache/hive/pull/1659


   





Issue Time Tracking
---

Worklog Id: (was: 537201)
Time Spent: 0.5h  (was: 20m)

> changeMarker value sent to atlas export API is set to 0 in the 2nd repl dump 
> call
> -
>
> Key: HIVE-24366
> URL: https://issues.apache.org/jira/browse/HIVE-24366
> Project: Hive
>  Issue Type: Bug
>Reporter: Arko Sharma
>Assignee: Arko Sharma
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-24366.01.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--


[jira] [Commented] (HIVE-22550) Result of hive.query.string in job xml contains encoded string

2021-01-17 Thread zhangbutao (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17266950#comment-17266950
 ] 

zhangbutao commented on HIVE-22550:
---

Hi [~ayushtkn], finally I reverted some of the encoding code from HIVE-11483 
and HIVE-18166 to resolve the problem.

> Result of hive.query.string in job xml contains  encoded string
> ---
>
> Key: HIVE-22550
> URL: https://issues.apache.org/jira/browse/HIVE-22550
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-branch-3.1.patch, job xml.JPG
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Repro:
> Query:  *insert into test values(1)*
> The job xml will display *insert+into+test+values%281%29*
>  
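For illustration (a minimal sketch, not Hive code): the value shown in the job xml is simply the query after form-URL-encoding, and decoding it with the JDK's `URLDecoder` recovers the original statement:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class QueryStringDecode {
    // hive.query.string appears form-URL-encoded in the job xml, so '+'
    // stands for a space and %28/%29 for parentheses; decoding restores
    // the original query text.
    static String decode(String encoded) {
        return URLDecoder.decode(encoded, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(decode("insert+into+test+values%281%29"));
        // prints: insert into test values(1)
    }
}
```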



--


[jira] [Updated] (HIVE-22550) Result of hive.query.string in job xml contains encoded string

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22550:
--
Labels: pull-request-available  (was: )

> Result of hive.query.string in job xml contains  encoded string
> ---
>
> Key: HIVE-22550
> URL: https://issues.apache.org/jira/browse/HIVE-22550
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-branch-3.1.patch, job xml.JPG
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Repro:
> Query:  *insert into test values(1)*
> The job xml will display *insert+into+test+values%281%29*
>  



--


[jira] [Work logged] (HIVE-22550) Result of hive.query.string in job xml contains encoded string

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22550?focusedWorklogId=537153&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537153
 ]

ASF GitHub Bot logged work on HIVE-22550:
-

Author: ASF GitHub Bot
Created on: 17/Jan/21 20:24
Start Date: 17/Jan/21 20:24
Worklog Time Spent: 10m 
  Work Description: ayushtkn opened a new pull request #1879:
URL: https://github.com/apache/hive/pull/1879


   https://issues.apache.org/jira/browse/HIVE-22550





Issue Time Tracking
---

Worklog Id: (was: 537153)
Remaining Estimate: 0h
Time Spent: 10m

> Result of hive.query.string in job xml contains  encoded string
> ---
>
> Key: HIVE-22550
> URL: https://issues.apache.org/jira/browse/HIVE-22550
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
> Attachments: HIVE-branch-3.1.patch, job xml.JPG
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Repro:
> Query:  *insert into test values(1)*
> The job xml will display *insert+into+test+values%281%29*
>  



--


[jira] [Work logged] (HIVE-24633) Support CTE with column labels

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24633?focusedWorklogId=537094&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537094
 ]

ASF GitHub Bot logged work on HIVE-24633:
-

Author: ASF GitHub Bot
Created on: 17/Jan/21 16:11
Start Date: 17/Jan/21 16:11
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #1865:
URL: https://github.com/apache/hive/pull/1865#discussion_r559202460



##
File path: ql/src/test/queries/clientpositive/cte_8.q
##
@@ -0,0 +1,34 @@
+set hive.cli.print.header=true;
+
+create table t1(int_col int, bigint_col bigint);
+
+insert into t1 values(1, 2), (3, 4);
+
+explain cbo
+with cte1(a, b) as (select int_col x, bigint_col y from t1)
+select a, b from cte1;
+
+with cte1(a, b) as (select int_col x, bigint_col y from t1)
+select a, b from cte1;
+
+with cte1(a) as (select int_col x, bigint_col y from t1)

Review comment:
   What happens for the following query?
   ```
   with cte1(a) as (select int_col x, bigint_col a from t1)
   ...
   ```
   Basically the `a` alias will clash. What is our behavior compared to other 
RDBMSs that allow this, such as PG? Can we add a test?
   







Issue Time Tracking
---

Worklog Id: (was: 537094)
Time Spent: 20m  (was: 10m)

> Support CTE with column labels
> --
>
> Key: HIVE-24633
> URL: https://issues.apache.org/jira/browse/HIVE-24633
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> with cte1(a, b) as (select int_col x, bigint_col y from t1)
> select a, b from cte1{code}
> {code}
> a b
> 1 2
> 3 4
> {code}
> {code}
> <query expression> ::=
>   [ <with clause> ] <query expression body>
>   [ <order by clause> ] [ <result offset clause> ] [ <fetch first clause> ]
> <with clause> ::=
>   WITH [ RECURSIVE ] <with list>
> <with list> ::=
>   <with list element> [ { <comma> <with list element> }... ]
> <with list element> ::=
>   <query name> [ <left paren> <with column list> <right paren> ]
>   AS <table subquery> [ <search or cycle clause> ]
> <with column list> ::=
>   <column name list>
> {code}



--


[jira] [Work logged] (HIVE-24470) Separate HiveMetastore Thrift and Driver logic

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24470?focusedWorklogId=537082&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537082
 ]

ASF GitHub Bot logged work on HIVE-24470:
-

Author: ASF GitHub Bot
Created on: 17/Jan/21 14:49
Start Date: 17/Jan/21 14:49
Worklog Time Spent: 10m 
  Work Description: dataproc-metastore commented on pull request #1787:
URL: https://github.com/apache/hive/pull/1787#issuecomment-761824300


   @vihangk1 WDYT now? If you have any suggestions regarding comments on 
`HMSHandler`, they will be more than welcome.





Issue Time Tracking
---

Worklog Id: (was: 537082)
Time Spent: 5h 40m  (was: 5.5h)

> Separate HiveMetastore Thrift and Driver logic
> --
>
> Key: HIVE-24470
> URL: https://issues.apache.org/jira/browse/HIVE-24470
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Cameron Moberg
>Assignee: Cameron Moberg
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> In the file HiveMetastore.java, the majority of the code is the Thrift 
> interface rather than the actual logic behind starting the Hive metastore; 
> the interface should be moved out into a separate file to clean it up.



--


[jira] [Work logged] (HIVE-24470) Separate HiveMetastore Thrift and Driver logic

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24470?focusedWorklogId=537081&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537081
 ]

ASF GitHub Bot logged work on HIVE-24470:
-

Author: ASF GitHub Bot
Created on: 17/Jan/21 14:34
Start Date: 17/Jan/21 14:34
Worklog Time Spent: 10m 
  Work Description: dataproc-metastore commented on a change in pull 
request #1787:
URL: https://github.com/apache/hive/pull/1787#discussion_r559190061



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
##
@@ -0,0 +1,10155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import com.codahale.metrics.Counter;
+import com.codahale.metrics.Timer;
+import com.facebook.fb303.FacebookBase;
+import com.facebook.fb303.fb_status;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Splitter;
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
+import com.google.common.collect.Lists;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.AcidConstants;
+import org.apache.hadoop.hive.common.AcidMetaDataFile;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.common.TableName;
+import org.apache.hadoop.hive.common.ValidReaderWriteIdList;
+import org.apache.hadoop.hive.common.ValidWriteIdList;
+import org.apache.hadoop.hive.common.repl.ReplConst;
+import org.apache.hadoop.hive.metastore.api.*;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.events.AbortTxnEvent;
+import org.apache.hadoop.hive.metastore.events.AcidWriteEvent;
+import org.apache.hadoop.hive.metastore.events.AddCheckConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddDefaultConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddForeignKeyEvent;
+import org.apache.hadoop.hive.metastore.events.AddNotNullConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.AddPrimaryKeyEvent;
+import org.apache.hadoop.hive.metastore.events.AddSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.AddUniqueConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AllocWriteIdEvent;
+import org.apache.hadoop.hive.metastore.events.AlterCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.AlterDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.AlterISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.AlterPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.AlterSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.AlterTableEvent;
+import org.apache.hadoop.hive.metastore.events.CommitTxnEvent;
+import org.apache.hadoop.hive.metastore.events.ConfigChangeEvent;
+import org.apache.hadoop.hive.metastore.events.CreateCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.CreateDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.CreateFunctionEvent;
+import org.apache.hadoop.hive.metastore.events.CreateISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.CreateTableEvent;
+import org.apache.hadoop.hive.metastore.events.DeletePartitionColumnStatEvent;
+import org.apache.hadoop.hive.metastore.events.DeleteTableColumnStatEvent;
+import org.apache.hadoop.hive.metastore.events.DropCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.DropConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.DropDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.DropFunctionEvent;
+import org.apache.hadoop.hive.metastore.events.DropISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.DropPartitionEvent;
+import 

[jira] [Work logged] (HIVE-24470) Separate HiveMetastore Thrift and Driver logic

2021-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24470?focusedWorklogId=537079&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537079
 ]

ASF GitHub Bot logged work on HIVE-24470:
-

Author: ASF GitHub Bot
Created on: 17/Jan/21 14:29
Start Date: 17/Jan/21 14:29
Worklog Time Spent: 10m 
  Work Description: dataproc-metastore commented on a change in pull 
request #1787:
URL: https://github.com/apache/hive/pull/1787#discussion_r559189559



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
##
@@ -0,0 +1,10155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import com.codahale.metrics.Counter;
+import com.codahale.metrics.Timer;
+import com.facebook.fb303.FacebookBase;
+import com.facebook.fb303.fb_status;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Splitter;
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
+import com.google.common.collect.Lists;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.AcidConstants;
+import org.apache.hadoop.hive.common.AcidMetaDataFile;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.common.TableName;
+import org.apache.hadoop.hive.common.ValidReaderWriteIdList;
+import org.apache.hadoop.hive.common.ValidWriteIdList;
+import org.apache.hadoop.hive.common.repl.ReplConst;
+import org.apache.hadoop.hive.metastore.api.*;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.events.AbortTxnEvent;
+import org.apache.hadoop.hive.metastore.events.AcidWriteEvent;
+import org.apache.hadoop.hive.metastore.events.AddCheckConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddDefaultConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddForeignKeyEvent;
+import org.apache.hadoop.hive.metastore.events.AddNotNullConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.AddPrimaryKeyEvent;
+import org.apache.hadoop.hive.metastore.events.AddSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.AddUniqueConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AllocWriteIdEvent;
+import org.apache.hadoop.hive.metastore.events.AlterCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.AlterDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.AlterISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.AlterPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.AlterSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.AlterTableEvent;
+import org.apache.hadoop.hive.metastore.events.CommitTxnEvent;
+import org.apache.hadoop.hive.metastore.events.ConfigChangeEvent;
+import org.apache.hadoop.hive.metastore.events.CreateCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.CreateDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.CreateFunctionEvent;
+import org.apache.hadoop.hive.metastore.events.CreateISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.CreateTableEvent;
+import org.apache.hadoop.hive.metastore.events.DeletePartitionColumnStatEvent;
+import org.apache.hadoop.hive.metastore.events.DeleteTableColumnStatEvent;
+import org.apache.hadoop.hive.metastore.events.DropCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.DropConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.DropDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.DropFunctionEvent;
+import org.apache.hadoop.hive.metastore.events.DropISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.DropPartitionEvent;
+import