Re: Review Request 53845: 'like any' and 'like all' operators in hive

2017-02-13 Thread Simanchal Das

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53845/
---

(Updated Feb. 14, 2017, 7:28 a.m.)


Review request for hive, Carl Steinbach and Vineet Garg.


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-15229


In Teradata, the 'like any' and 'like all' operators are commonly used to 
match a text field against a number of patterns.
'like any' and 'like all' are equivalent to a chain of multiple 'like' 
conditions, as in the examples below.
--like any
select col1 from table1 where col2 like any ('%accountant%', '%accounting%', 
'%retail%', '%bank%', '%insurance%');

--Can be written as multiple like conditions joined with OR
select col1 from table1 where col2 like '%accountant%' or col2 like 
'%accounting%' or col2 like '%retail%' or col2 like '%bank%' or col2 like 
'%insurance%' ;

--like all
select col1 from table1 where col2 like all ('%accountant%', '%accounting%', 
'%retail%', '%bank%', '%insurance%');

--Can be written as multiple like conditions joined with AND
select col1 from table1 where col2 like '%accountant%' and col2 like 
'%accounting%' and col2 like '%retail%' and col2 like '%bank%' and col2 like 
'%insurance%' ;
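
For illustration, a minimal Java sketch of the OR/AND folding semantics behind 
these operators (the likeMatch helper and its %/_ translation are illustrative, 
not the actual GenericUDFLikeAny/GenericUDFLikeAll code):

{code}
import java.util.regex.Pattern;

public class LikeAnyAllSketch {
  // Translate a SQL LIKE pattern into a Java regex: % -> .*, _ -> .
  static boolean likeMatch(String value, String pattern) {
    String regex = Pattern.quote(pattern).replace("%", "\\E.*\\Q").replace("_", "\\E.\\Q");
    return value.matches(regex);
  }

  // LIKE ANY: true if the value matches at least one pattern (OR semantics).
  static boolean likeAny(String value, String... patterns) {
    for (String p : patterns) {
      if (likeMatch(value, p)) { return true; }
    }
    return false;
  }

  // LIKE ALL: true only if the value matches every pattern (AND semantics).
  static boolean likeAll(String value, String... patterns) {
    for (String p : patterns) {
      if (!likeMatch(value, p)) { return false; }
    }
    return true;
  }
}
{code}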

Problem statement:

Nowadays many data warehouse projects are being migrated from Teradata to 
Hive, and data engineers and business analysts regularly look for these two 
operators. Introducing them in Hive would let many scripts migrate smoothly 
instead of being rewritten with multiple like conditions.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java 0f05160 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g f80642b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g eb81393 
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 81efadc 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java f979c14 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLikeAll.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLikeAny.java 
PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFLikeAll.java 
PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFLikeAny.java 
PRE-CREATION 
  ql/src/test/queries/clientnegative/udf_likeall_wrong1.q PRE-CREATION 
  ql/src/test/queries/clientnegative/udf_likeany_wrong1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/udf_likeall.q PRE-CREATION 
  ql/src/test/queries/clientpositive/udf_likeany.q PRE-CREATION 
  ql/src/test/results/clientnegative/udf_likeall_wrong1.q.out PRE-CREATION 
  ql/src/test/results/clientnegative/udf_likeany_wrong1.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/udf_likeall.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/udf_likeany.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/53845/diff/


Testing
---

JUnit test cases and query .q files are attached


Thanks,

Simanchal Das



[jira] [Created] (HIVE-15908) OperationLog's LogFile writer should have autoFlush turned on

2017-02-13 Thread Harsh J (JIRA)
Harsh J created HIVE-15908:
--

 Summary: OperationLog's LogFile writer should have autoFlush 
turned on
 Key: HIVE-15908
 URL: https://issues.apache.org/jira/browse/HIVE-15908
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor


HS2 offers an API to fetch Operation Log results from the maintained 
OperationLog file. The reader inside the OperationLog$LogFile class reads its 
input stream line by line, picking up whatever lines are available from the 
OS's file-input perspective.

The writer inside the same class uses a PrintStream to write to the file in 
parallel. However, the PrintStream constructor used leaves PrintStream's 
{{autoFlush}} feature OFF. This causes the BufferedWriter used by PrintStream 
to accumulate up to 8k worth of bytes in memory before flushing the writes to 
disk, which slows down the logs streamed back to the client. Ideally, every 
line should be flushed as it is written, for a smoother experience.

I suggest changing the line inside {{OperationLog$LogFile}} that appears as 
below:

{code}
out = new PrintStream(new FileOutputStream(file));
{code}

Into:

{code}
out = new PrintStream(new FileOutputStream(file), true);
{code}

This will make it use the described autoFlush feature of PrintStream and give 
a better log-streaming experience to readers: 
https://docs.oracle.com/javase/7/docs/api/java/io/PrintStream.html#PrintStream(java.io.OutputStream,%20boolean)
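
For illustration, a minimal demo of the difference (the file name and contents 
are illustrative): with autoFlush enabled, each println is flushed immediately 
and is visible to a concurrent reader instead of sitting in the buffer.

{code}
import java.io.FileOutputStream;
import java.io.PrintStream;

public class AutoFlushDemo {
  public static void main(String[] args) throws Exception {
    // autoFlush=true: flushes on println and on newline writes, so a tailing
    // reader sees each line as soon as it is written.
    try (PrintStream out = new PrintStream(new FileOutputStream("operation.log"), true)) {
      out.println("visible to the log reader immediately");
    }
  }
}
{code}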





[jira] [Created] (HIVE-15907) hive-shell couldn't get new table columns when perform alter ...add columns

2017-02-13 Thread Saijin Huang (JIRA)
Saijin Huang created HIVE-15907:
---

 Summary: hive-shell couldn't get new table columns when perform 
alter ...add columns
 Key: HIVE-15907
 URL: https://issues.apache.org/jira/browse/HIVE-15907
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Saijin Huang


A cluster has 4 Hive nodes. On node1 I ran 'alter table t_test add 
columns (t_text int)' via Beeline. Then I ran 'desc t_test' from the Hive 
shell on node2 (which has a local metastore) and found that the result did not 
include the new column, even though the column information was present in 
MySQL.
After I restarted the metastore on node2, the problem was gone.





[GitHub] hive pull request #146: HIVE-15906 : Thrift code regeneration to include new ...

2017-02-13 Thread anishek
GitHub user anishek opened a pull request:

https://github.com/apache/hive/pull/146

HIVE-15906 : Thrift code regeneration to include new protocol version



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anishek/hive HIVE-15906

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/146.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #146


commit 80c1b962e782dc52ad8e71741571acb8d9944268
Author: Anishek Agarwal 
Date:   2017-02-14T05:22:15Z

HIVE-15906 : Thrift code regeneration to include new protocol version




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (HIVE-15906) thrift code regeneration after change in HIVE-15473

2017-02-13 Thread anishek (JIRA)
anishek created HIVE-15906:
--

 Summary: thrift code regeneration after change in HIVE-15473
 Key: HIVE-15906
 URL: https://issues.apache.org/jira/browse/HIVE-15906
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: anishek
Assignee: anishek
Priority: Blocker


HIVE-15473 changed the protocol version in the Thrift file. 





[jira] [Created] (HIVE-15905) Inefficient plan for correlated subqueries

2017-02-13 Thread Vineet Garg (JIRA)
Vineet Garg created HIVE-15905:
--

 Summary: Inefficient plan for correlated subqueries
 Key: HIVE-15905
 URL: https://issues.apache.org/jira/browse/HIVE-15905
 Project: Hive
  Issue Type: Sub-task
  Components: Query Planning
Reporter: Vineet Garg
Assignee: Vineet Garg


Currently Calcite produces an unnecessary join to generate correlated values 
for the inner query. More details are in CALCITE-1494.





[jira] [Created] (HIVE-15904) select query throwing Null Pointer Exception from org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan

2017-02-13 Thread Aswathy Chellammal Sreekumar (JIRA)
Aswathy Chellammal Sreekumar created HIVE-15904:
---

 Summary: select query throwing Null Pointer Exception from 
org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan
 Key: HIVE-15904
 URL: https://issues.apache.org/jira/browse/HIVE-15904
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Aswathy Chellammal Sreekumar
Assignee: Jason Dere


The following query fails with a NullPointerException from 
org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan.

Attaching the CREATE TABLE statements for table_1 and table_18.

Query:
SELECT
COALESCE(498, LEAD(COALESCE(-973, -684, 515)) OVER (PARTITION BY (t2.int_col_10 
+ t1.smallint_col_50) ORDER BY (t2.int_col_10 + t1.smallint_col_50), 
FLOOR(t1.double_col_16) DESC), 524) AS int_col,
(t2.int_col_10) + (t1.smallint_col_50) AS int_col_1,
FLOOR(t1.double_col_16) AS float_col,
COALESCE(SUM(COALESCE(62, -380, -435)) OVER (PARTITION BY (t2.int_col_10 + 
t1.smallint_col_50) ORDER BY (t2.int_col_10 + t1.smallint_col_50) DESC, 
FLOOR(t1.double_col_16) DESC ROWS BETWEEN UNBOUNDED PRECEDING AND 48 
FOLLOWING), 704) AS int_col_2
FROM table_1 t1
INNER JOIN table_18 t2 ON (((t2.tinyint_col_15) = (t1.bigint_col_7)) AND
((t2.decimal2709_col_9) = (t1.decimal2016_col_26))) AND
((t2.tinyint_col_20) = (t1.tinyint_col_3))
WHERE (t2.smallint_col_19) IN (SELECT
COALESCE(-92, -994) AS int_col
FROM table_1 tt1
INNER JOIN table_18 tt2 ON (tt2.decimal1911_col_16) = (tt1.decimal2612_col_77)
WHERE (t1.timestamp_col_9) = (tt2.timestamp_col_18));





[jira] [Created] (HIVE-15903) Compute table stats when user computes column stats

2017-02-13 Thread Pengcheng Xiong (JIRA)
Pengcheng Xiong created HIVE-15903:
--

 Summary: Compute table stats when user computes column stats
 Key: HIVE-15903
 URL: https://issues.apache.org/jira/browse/HIVE-15903
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong








[jira] [Created] (HIVE-15902) Select query involving date throwing Hive 2 Internal error: unsupported conversion from type: date

2017-02-13 Thread Aswathy Chellammal Sreekumar (JIRA)
Aswathy Chellammal Sreekumar created HIVE-15902:
---

 Summary: Select query involving date throwing Hive 2 Internal 
error: unsupported conversion from type: date
 Key: HIVE-15902
 URL: https://issues.apache.org/jira/browse/HIVE-15902
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 2.1.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Jason Dere


The following query throws "Hive 2 Internal error: unsupported conversion 
from type: date".

Query:

create table table_one (ts timestamp, dt date) stored as orc;
insert into table_one values ('2034-08-04 17:42:59','2038-07-01');
insert into table_one values ('2031-02-07 13:02:38','2072-10-19');


create table table_two (ts timestamp, dt date) stored as orc;
insert into table_two values ('2069-04-01 09:05:54','1990-10-12');
insert into table_two values ('2031-02-07 13:02:38','2072-10-19');

create table table_three as
select count(*) from table_one
group by ts,dt
having dt in (select dt from table_two);

Error while running task ( failure ) : 
attempt_1486991777989_0184_18_02_00_0:java.lang.RuntimeException: 
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
Hive Runtime Error while processing row
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1833)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at 
org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row
at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:95)
at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:70)
at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:420)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
... 15 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row
at 
org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:883)
at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:86)
... 18 more
Caused by: java.lang.RuntimeException: Hive 2 Internal error: unsupported 
conversion from type: date
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getLong(PrimitiveObjectInspectorUtils.java:770)
at 
org.apache.hadoop.hive.ql.exec.vector.expressions.gen.FilterLongColumnBetweenDynamicValue.evaluate(FilterLongColumnBetweenDynamicValue.java:82)
at 
org.apache.hadoop.hive.ql.exec.vector.expressions.FilterExprAndExpr.evaluate(FilterExprAndExpr.java:39)
at 
org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:112)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:883)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
at 
org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:783)
... 19 more





[jira] [Created] (HIVE-15901) LLAP: incorrect usage of gap cache

2017-02-13 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-15901:
---

 Summary: LLAP: incorrect usage of gap cache
 Key: HIVE-15901
 URL: https://issues.apache.org/jira/browse/HIVE-15901
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin








Re: Review Request 56140: Can't order by an unselected column

2017-02-13 Thread pengcheng xiong

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56140/
---

(Updated Feb. 14, 2017, 12:52 a.m.)


Review request for hive and Ashutosh Chauhan.


Repository: hive-git


Description
---

HIVE-15160


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java e7687be 
  ql/src/java/org/apache/hadoop/hive/ql/parse/RowResolver.java e14f1cf 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 03ab0c1 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java f979c14 
  ql/src/test/queries/clientpositive/explainuser_1.q a6fbb54 
  ql/src/test/queries/clientpositive/order_by_expr.q PRE-CREATION 
  ql/src/test/results/clientpositive/groupby_grouping_sets_grouping.q.out 
62f40cd 
  ql/src/test/results/clientpositive/llap/explainuser_1.q.out 21fd10c 
  ql/src/test/results/clientpositive/llap/vector_decimal_2.q.out 144356c 
  ql/src/test/results/clientpositive/llap/vector_decimal_round.q.out 134b008 
  ql/src/test/results/clientpositive/llap/vector_interval_arithmetic.q.out 
ee8aa0c 
  ql/src/test/results/clientpositive/order_by_expr.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/perf/query36.q.out b356628 
  ql/src/test/results/clientpositive/perf/query86.q.out 6377c43 
  ql/src/test/results/clientpositive/perf/query89.q.out 7bc8700 
  ql/src/test/results/clientpositive/vector_decimal_round.q.out d778f63 

Diff: https://reviews.apache.org/r/56140/diff/


Testing
---


Thanks,

pengcheng xiong



PreCommit is failing

2017-02-13 Thread Wei Zheng
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 8.934 s
[INFO] Finished at: 2017-02-13T23:48:40+00:00
[INFO] Final Memory: 45M/1447M
[INFO] 
+ local 
'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-1.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
+ java -cp 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-1.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
 org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
 --password '' --testHandle PreCommit-HIVE-Build-3524 --endpoint 
http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
http://104.198.109.242/logs/ --profile master-mr2 --patch 
https://issues.apache.org/jira/secure/attachment/12852440/HIVE-15891.1.patch 
--jira HIVE-15891
Build was aborted
ERROR: Step 'Publish JUnit test result report' failed: no workspace for 
PreCommit-HIVE-Build #3524
ERROR: H24 is offline; 
cannot locate JDK 1.7 (latest)
[description-setter] Description 
set:
 HIVE-15891  /   master-mr2
ERROR: H24 is offline; cannot locate JDK 1.7 (latest)
Finished: ABORTED



Thanks,
Wei


[jira] [Created] (HIVE-15900) Tez job progress included in stdout instead of stderr

2017-02-13 Thread Aswathy Chellammal Sreekumar (JIRA)
Aswathy Chellammal Sreekumar created HIVE-15900:
---

 Summary: Tez job progress included in stdout instead of stderr
 Key: HIVE-15900
 URL: https://issues.apache.org/jira/browse/HIVE-15900
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 2.1.0
Reporter: Aswathy Chellammal Sreekumar


Tez job progress messages are being written to stdout instead of stderr.

Attaching the output file for the command below, with the Tez job status printed:

/usr/hdp/current/hive-server2-hive2/bin/beeline -n  -p  -u 
" stdout





[jira] [Created] (HIVE-15899) check CTAS over acid table

2017-02-13 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-15899:
-

 Summary: check CTAS over acid table 
 Key: HIVE-15899
 URL: https://issues.apache.org/jira/browse/HIVE-15899
 Project: Hive
  Issue Type: Task
Reporter: Eugene Koifman
Assignee: Eugene Koifman








[jira] [Created] (HIVE-15898) add Type2 SCD merge tests

2017-02-13 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-15898:
-

 Summary: add Type2 SCD merge tests
 Key: HIVE-15898
 URL: https://issues.apache.org/jira/browse/HIVE-15898
 Project: Hive
  Issue Type: Test
  Components: Transactions
Reporter: Eugene Koifman
Assignee: Eugene Koifman








[jira] [Created] (HIVE-15897) Add tests for partitioned acid tables with schema evolution to UTs

2017-02-13 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-15897:
-

 Summary: Add tests for partitioned acid tables with schema 
evolution to UTs
 Key: HIVE-15897
 URL: https://issues.apache.org/jira/browse/HIVE-15897
 Project: Hive
  Issue Type: Test
  Components: Transactions
Reporter: Eugene Koifman
Assignee: Eugene Koifman








[jira] [Created] (HIVE-15896) LLAP: improved failures when security is set up incorrectly

2017-02-13 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-15896:
---

 Summary: LLAP: improved failures when security is set up 
incorrectly
 Key: HIVE-15896
 URL: https://issues.apache.org/jira/browse/HIVE-15896
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin


Right now it may fail in the ACL check. We can fail earlier and also improve 
the message.





[jira] [Created] (HIVE-15895) Use HDFS for stats collection temp dir on blob storage

2017-02-13 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-15895:
---

 Summary: Use HDFS for stats collection temp dir on blob storage
 Key: HIVE-15895
 URL: https://issues.apache.org/jira/browse/HIVE-15895
 Project: Hive
  Issue Type: Improvement
Reporter: Ashutosh Chauhan








[jira] [Created] (HIVE-15894) Add logical semijoin config in sqlstd safe list

2017-02-13 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-15894:
---

 Summary: Add logical semijoin config in sqlstd safe list 
 Key: HIVE-15894
 URL: https://issues.apache.org/jira/browse/HIVE-15894
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan








[jira] [Created] (HIVE-15893) Followup on HIVE-15671

2017-02-13 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-15893:
--

 Summary: Followup on HIVE-15671
 Key: HIVE-15893
 URL: https://issues.apache.org/jira/browse/HIVE-15893
 Project: Hive
  Issue Type: Improvement
  Components: Spark
Affects Versions: 2.2.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang


In HIVE-15671, we fixed a typo where server.connect.timeout was used in the 
place of client.connect.timeout. This might solve some potential problems, but 
the original problem reported in HIVE-15671 might still exist. (Not sure if 
HIVE-15860 helps.) Here is the proposal suggested by Marcelo:
{quote}
bq. server detecting a driver problem after it has connected back to the server.

Hmm. That is definitely not any of the "connect" timeouts, which probably means 
it isn't configured and is just using netty's default (which is probably no 
timeout?). Would probably need something using 
io.netty.handler.timeout.IdleStateHandler, and also some periodic "ping" so 
that the connection isn't torn down without reason.
{quote}

We will use this JIRA to track the issue.
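
For reference, a minimal Netty keepalive sketch along the lines Marcelo 
suggests (IdleStateHandler and IdleStateEvent are real io.netty classes; the 
PingMessage type and the wiring are illustrative):

{code}
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;

// Illustrative keepalive: ping when write-idle, close when read-idle too long.
class KeepAliveHandler extends ChannelDuplexHandler {
  static final class PingMessage {}  // hypothetical ping payload

  @Override
  public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    if (evt instanceof IdleStateEvent) {
      IdleState state = ((IdleStateEvent) evt).state();
      if (state == IdleState.WRITER_IDLE) {
        ctx.writeAndFlush(new PingMessage());  // keep the connection warm
      } else if (state == IdleState.READER_IDLE) {
        ctx.close();                           // peer silent too long; tear down
      }
    } else {
      super.userEventTriggered(ctx, evt);
    }
  }
}
// Wiring (in the ChannelInitializer), e.g. read-idle 60s, write-idle 30s:
//   pipeline.addLast(new IdleStateHandler(60, 30, 0), new KeepAliveHandler());
{code}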





[jira] [Created] (HIVE-15892) Vectorization: Fast Hash tables need to do bounds checking during expand

2017-02-13 Thread Matt McCline (JIRA)
Matt McCline created HIVE-15892:
---

 Summary: Vectorization: Fast Hash tables need to do bounds 
checking during expand
 Key: HIVE-15892
 URL: https://issues.apache.org/jira/browse/HIVE-15892
 Project: Hive
  Issue Type: Bug
  Components: Hive
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Critical


VectorMapJoinFastLongHashTable line 165 gets a NegativeArraySizeException:
{code}
long[] newSlotPairs = new long[newSlotPairArraySize];
{code}

We need to add a size check: the int multiplication here wrapped around to a 
negative value:
{code}
int newSlotPairArraySize = newLogicalHashBucketCount * 2;
{code}
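
A minimal sketch of the kind of bounds check needed (illustrative, not the 
actual patch; method and names are hypothetical): widen to long before 
multiplying so the overflow becomes detectable.

{code}
public class SlotArraySizeSketch {
  static int doubledSlotPairArraySize(int newLogicalHashBucketCount) {
    long candidate = (long) newLogicalHashBucketCount * 2L;  // widen first; no int wrap-around
    if (candidate > Integer.MAX_VALUE - 8) {                 // headroom for VM array-size limits
      throw new RuntimeException("hash table slot array would exceed max array size");
    }
    return (int) candidate;
  }
}
{code}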







[jira] [Created] (HIVE-15891) Detect query rewrite scenario for UPDATE/DELETE/MERGE and fail fast

2017-02-13 Thread Wei Zheng (JIRA)
Wei Zheng created HIVE-15891:


 Summary: Detect query rewrite scenario for UPDATE/DELETE/MERGE and 
fail fast
 Key: HIVE-15891
 URL: https://issues.apache.org/jira/browse/HIVE-15891
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 2.2.0
Reporter: Wei Zheng
Assignee: Wei Zheng


Currently the ACID UpdateDeleteSemanticAnalyzer directly manipulates the AST 
tree. This differs from the general approach of modifying the token stream, 
and thus causes an AST-tree mismatch if any rewrite happens after 
UpdateDeleteSemanticAnalyzer.

The long-term solution is to rewrite the AST-handling logic in 
UpdateDeleteSemanticAnalyzer to make it consistent with the general approach.

For now, this ticket will detect the error-prone cases and fail early. 





Re: Review Request 56334: HIVE-12767: Implement table property to address Parquet int96 timestamp bug

2017-02-13 Thread Zoltan Ivanfi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56334/#review165334
---


Ship it!




Thanks!

- Zoltan Ivanfi


On Feb. 13, 2017, 3:21 p.m., Barna Zsombor Klara wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56334/
> ---
> 
> (Updated Feb. 13, 2017, 3:21 p.m.)
> 
> 
> Review request for hive, Ryan Blue and Sergio Pena.
> 
> 
> Bugs: HIVE-12767
> https://issues.apache.org/jira/browse/HIVE-12767
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a followup on this review request: https://reviews.apache.org/r/41821
> The following exit criteria are addressed in this patch:
> 
> - Hive will read Parquet MR int96 timestamp data and adjust values using a 
> time zone from a table property, if set, or using the local time zone if it 
> is absent. No adjustment will be applied to data written by Impala.
> - Hive will write Parquet int96 timestamps using a time zone adjustment from 
> the same table property, if set, or using the local time zone if it is 
> absent. This keeps the data in the table consistent.
> - New tables created by Hive will set the table property to UTC if the global 
> option to set the property for new tables is enabled.
> - Tables created using CREATE TABLE and CREATE TABLE LIKE FILE will not set 
> the property unless the global setting to do so is enabled.
> - Tables created using CREATE TABLE LIKE  will copy the property 
> of the table that is copied.
> 
> To set the timezone table property, use this:
>   create table tbl1 (ts timestamp) stored as parquet tblproperties 
> ('parquet.mr.int96.write.zone'='PST');
> 
> To set UTC as default timezone table property on new tables created, use 
> this: 
>   set parquet.mr.int96.enable.utc.write.zone=true;
>   create table tbl2 (ts timestamp) stored as parquet;
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
> 0e4f1f6610d2cdf543f106061a21ab465899737d 
>   data/files/impala_int96_timestamp.parq PRE-CREATION 
>   
> itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java
>  a14b7900afb00a7d304b0dc4f6482a2b87716919 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
> adabe70fa8f0fe1b990c6ac578a14ff5af06fc93 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java
>  379a9135d9c631b2f473976b00f3dc87f9fec0c4 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java 
> 167f9b6516ac093fa30091daf6965de25e3eccb3 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
> 76d93b8e02a98c95da8a534f2820cd3e77b4bb43 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
>  604cbbcc2a9daa8594397e315cc4fd8064cc5005 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java
>  ac430a67682d3dcbddee89ce132fc0c1b421e368 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
> PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/timestamp/NanoTimeUtils.java 
> 3fd75d24f3fda36967e4957e650aec19050b22f8 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
>  b6a1a7a64db6db0bf06d2eea70a308b88f06156e 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
>  3d5c6e6a092dd6a0303fadc6a244dad2e31cd853 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
>  f4621e5dbb81e8d58c4572c901ec9d1a7ca8c012 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
>  6b7b50a25e553629f0f492e964cc4913417cb500 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 
> 934ae9f255d0c4ccaa422054fcc9e725873810d4 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestVectorizedColumnReader.java
>  670bfa609704d3001dd171b703b657f57fbd4c74 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/VectorizedColumnReaderTestBase.java
>  f537ceee505c5f41d513df3c89b63453012c9979 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
>  PRE-CREATION 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampUtils.java
>  ec6def5b9ac5f12e6a7cb24c4f4998a6ca6b4a8e 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/timestamp/TestNanoTimeUtils.java
>  PRE-CREATION 
>   ql/src/test/queries/clientpositive/parquet_int96_timestamp.q PRE-CREATION 
>   ql/src/test/queries/clientpositive/parquet_timestamp_conversion.q 
> PRE-CREATION 
>   ql/src/test/results/clientpositive/parquet_int96_timestamp.q.out 
> PRE-CREATION 
>

Re: Review Request 56334: HIVE-12767: Implement table property to address Parquet int96 timestamp bug

2017-02-13 Thread Barna Zsombor Klara

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56334/
---

(Updated Feb. 13, 2017, 3:21 p.m.)


Review request for hive, Ryan Blue and Sergio Pena.


Changes
---

Updated the comment according to Zoltan's comment. :)


Bugs: HIVE-12767
https://issues.apache.org/jira/browse/HIVE-12767


Repository: hive-git


Description
---

This is a followup on this review request: https://reviews.apache.org/r/41821
The following exit criteria are addressed in this patch:

- Hive will read Parquet MR int96 timestamp data and adjust values using a time 
zone from a table property, if set, or using the local time zone if it is 
absent. No adjustment will be applied to data written by Impala.
- Hive will write Parquet int96 timestamps using a time zone adjustment from 
the same table property, if set, or using the local time zone if it is absent. 
This keeps the data in the table consistent.
- New tables created by Hive will set the table property to UTC if the global 
option to set the property for new tables is enabled.
- Tables created using CREATE TABLE and CREATE TABLE LIKE FILE will not set the 
property unless the global setting to do so is enabled.
- Tables created using CREATE TABLE LIKE  will copy the property 
of the table that is copied.

To set the timezone table property, use this:
  create table tbl1 (ts timestamp) stored as parquet tblproperties 
('parquet.mr.int96.write.zone'='PST');

To set UTC as default timezone table property on new tables created, use this: 
  set parquet.mr.int96.enable.utc.write.zone=true;
  create table tbl2 (ts timestamp) stored as parquet;


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
0e4f1f6610d2cdf543f106061a21ab465899737d 
  data/files/impala_int96_timestamp.parq PRE-CREATION 
  
itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java
 a14b7900afb00a7d304b0dc4f6482a2b87716919 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
adabe70fa8f0fe1b990c6ac578a14ff5af06fc93 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java 
379a9135d9c631b2f473976b00f3dc87f9fec0c4 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java 
167f9b6516ac093fa30091daf6965de25e3eccb3 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
76d93b8e02a98c95da8a534f2820cd3e77b4bb43 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
 604cbbcc2a9daa8594397e315cc4fd8064cc5005 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java
 ac430a67682d3dcbddee89ce132fc0c1b421e368 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/timestamp/NanoTimeUtils.java 
3fd75d24f3fda36967e4957e650aec19050b22f8 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
 b6a1a7a64db6db0bf06d2eea70a308b88f06156e 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
 3d5c6e6a092dd6a0303fadc6a244dad2e31cd853 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
 f4621e5dbb81e8d58c4572c901ec9d1a7ca8c012 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java 
6b7b50a25e553629f0f492e964cc4913417cb500 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 
934ae9f255d0c4ccaa422054fcc9e725873810d4 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestVectorizedColumnReader.java
 670bfa609704d3001dd171b703b657f57fbd4c74 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/VectorizedColumnReaderTestBase.java
 f537ceee505c5f41d513df3c89b63453012c9979 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
 PRE-CREATION 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampUtils.java
 ec6def5b9ac5f12e6a7cb24c4f4998a6ca6b4a8e 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/timestamp/TestNanoTimeUtils.java
 PRE-CREATION 
  ql/src/test/queries/clientpositive/parquet_int96_timestamp.q PRE-CREATION 
  ql/src/test/queries/clientpositive/parquet_timestamp_conversion.q 
PRE-CREATION 
  ql/src/test/results/clientpositive/parquet_int96_timestamp.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/parquet_timestamp_conversion.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/56334/diff/


Testing
---

qtest and unit tests added.


Thanks,

Barna Zsombor Klara



Re: Review Request 56118: DROP TABLE in hive doesn't Throw Error

2017-02-13 Thread Adam Szita


> On Feb. 10, 2017, 7:32 p.m., Vihang Karajgaonkar wrote:
> > metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java, 
> > line 1679
> > 
> >
> > I agree with Aihua here. As long as the table metadata is dropped, from 
> > the client's point of view the table does not exist. The filesystem will 
> > have stale data because it could not be deleted successfully, but that 
> > stale data is unusable anyway without the metadata. If we want to notify 
> > such cases to the client, I think it should be a warning at best and not 
> > an error.

Let's consider the following situation:
-The user creates a table, fills it with some data, then drops it (which fails 
silently, leaving data behind on disk).
-Then the user decides to recreate the table with a different serde, e.g. Avro 
format (or even another user could create a table with the same name).
-A simple _select * from table_ will fail with the following: _"Error: 
java.io.IOException: java.io.IOException: Not a data file. (state=,code=0)"_
-The user will get quite confused, since they don't know that a previous drop 
table failure caused this.

So as I see it, we should either:
-Remove the table from HMS and throw back a very simple exception, e.g. "Table 
definition is deleted, but some data files remained on disk, please clean up 
manually" (we either succeed or throw an exception; to signal a warning the 
Thrift contract would have to be amended, which is overkill for this issue :) 
)
-Leave this functionality as is, but add a feature to one of the existing tools 
(e.g. schematool, metatool) that can detect orphaned data leftovers on disk by 
comparing HDFS content with HMS (obviously this should go to a separate jira)

I would choose the first option since it's really simple - just a notification 
to the user - and we could even make it configurable, e.g. 
hive.metastore.droptable.verbose=true/false


- Adam


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56118/#review165150
---


On Feb. 3, 2017, 2:22 p.m., Adam Szita wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56118/
> ---
> 
> (Updated Feb. 3, 2017, 2:22 p.m.)
> 
> 
> Review request for hive, Aihua Xu, Peter Vary, and Sergio Pena.
> 
> 
> Bugs: HIVE-14181
> https://issues.apache.org/jira/browse/HIVE-14181
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> A failure during table drop doesn't throw errors and results in success - 
> sometimes data remains in the warehouse while the table (metadata) is 
> removed from the metastore, resulting in inconsistency
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
> 53b9b0c6962c9b1cd2eef1cb71687ec0245cfac3 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
>  af125c38236582ba532f5e3de3d2ba724f38b101 
>   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
> f8c3c4e48db0df9d6c18801bcd61f9e5dc6eb7c2 
> 
> Diff: https://reviews.apache.org/r/56118/diff/
> 
> 
> Testing
> ---
> 
> -Added test case
> -Tested on cluster
> 
> 
> Thanks,
> 
> Adam Szita
> 
>



[jira] [Created] (HIVE-15890) hive permission problem

2017-02-13 Thread Vladimir Tselm (JIRA)
Vladimir Tselm created HIVE-15890:
-

 Summary: hive permission problem
 Key: HIVE-15890
 URL: https://issues.apache.org/jira/browse/HIVE-15890
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 2.1.1
Reporter: Vladimir Tselm


I run Hive with LDAP authentication. 
User hadoop created database "greenh" and table "test".
User hadoop_ro does not have permission to drop this table.
I checked it: 

EXPLAIN  AUTHORIZATION  drop table test;

 Permission denied: Principal [name=hadoop_ro, type=USER] does not have 
following privileges for operation DROPTABLE [[OBJECT OWNERSHIP] on Object 
[type=TABLE_OR_VIEW, name=greenh.test]]  |

but user hadoop can drop this table:
drop table test; !!!
Please help: is this a bug or an error in my configuration?






Re: Review Request 56334: HIVE-12767: Implement table property to address Parquet int96 timestamp bug

2017-02-13 Thread Zoltan Ivanfi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56334/#review165327
---




ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
(line 21)


You wrote in your last update: "This is not a TimeZone we convert into and 
print out, rather a delta, an adjustment we use, or more precisely the lack of 
an adjustment." This is such a good description that it should be added as a 
comment.


- Zoltan Ivanfi


On Feb. 13, 2017, 1:59 p.m., Barna Zsombor Klara wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56334/
> ---
> 
> (Updated Feb. 13, 2017, 1:59 p.m.)
> 
> 
> Review request for hive, Ryan Blue and Sergio Pena.
> 
> 
> Bugs: HIVE-12767
> https://issues.apache.org/jira/browse/HIVE-12767
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a followup on this review request: https://reviews.apache.org/r/41821
> The following exit criteria are addressed in this patch:
> 
> - Hive will read Parquet MR int96 timestamp data and adjust values using a 
> time zone from a table property, if set, or using the local time zone if it 
> is absent. No adjustment will be applied to data written by Impala.
> - Hive will write Parquet int96 timestamps using a time zone adjustment from 
> the same table property, if set, or using the local time zone if it is 
> absent. This keeps the data in the table consistent.
> - New tables created by Hive will set the table property to UTC if the global 
> option to set the property for new tables is enabled.
> - Tables created using CREATE TABLE and CREATE TABLE LIKE FILE will not set 
> the property unless the global setting to do so is enabled.
> - Tables created using CREATE TABLE LIKE  will copy the property 
> of the table that is copied.
> 
> To set the timezone table property, use this:
>   create table tbl1 (ts timestamp) stored as parquet tblproperties 
> ('parquet.mr.int96.write.zone'='PST');
> 
> To set UTC as default timezone table property on new tables created, use 
> this: 
>   set parquet.mr.int96.enable.utc.write.zone=true;
>   create table tbl2 (ts timestamp) stored as parquet;
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
> 0e4f1f6610d2cdf543f106061a21ab465899737d 
>   data/files/impala_int96_timestamp.parq PRE-CREATION 
>   
> itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java
>  a14b7900afb00a7d304b0dc4f6482a2b87716919 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
> adabe70fa8f0fe1b990c6ac578a14ff5af06fc93 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java
>  379a9135d9c631b2f473976b00f3dc87f9fec0c4 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java 
> 167f9b6516ac093fa30091daf6965de25e3eccb3 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
> 76d93b8e02a98c95da8a534f2820cd3e77b4bb43 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
>  604cbbcc2a9daa8594397e315cc4fd8064cc5005 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java
>  ac430a67682d3dcbddee89ce132fc0c1b421e368 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
> PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/timestamp/NanoTimeUtils.java 
> 3fd75d24f3fda36967e4957e650aec19050b22f8 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
>  b6a1a7a64db6db0bf06d2eea70a308b88f06156e 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
>  3d5c6e6a092dd6a0303fadc6a244dad2e31cd853 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
>  f4621e5dbb81e8d58c4572c901ec9d1a7ca8c012 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
>  6b7b50a25e553629f0f492e964cc4913417cb500 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 
> 934ae9f255d0c4ccaa422054fcc9e725873810d4 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestVectorizedColumnReader.java
>  670bfa609704d3001dd171b703b657f57fbd4c74 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/VectorizedColumnReaderTestBase.java
>  f537ceee505c5f41d513df3c89b63453012c9979 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
>  PRE-CREATION 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampUtils.java
>  ec6def5b9ac5f12e6a7cb24c4f4998

Re: Review Request 56334: HIVE-12767: Implement table property to address Parquet int96 timestamp bug

2017-02-13 Thread Barna Zsombor Klara

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56334/
---

(Updated Feb. 13, 2017, 1:59 p.m.)


Review request for hive, Ryan Blue and Sergio Pena.


Changes
---

Renamed the ParquetTableUtils.PARQUET_INT96_DEFAULT_WRITE_ZONE constant to make 
its purpose clearer. This is not a TimeZone we convert into and print out, but 
rather a delta, an adjustment we use, or more precisely the lack of an 
adjustment.


Bugs: HIVE-12767
https://issues.apache.org/jira/browse/HIVE-12767


Repository: hive-git


Description
---

This is a followup on this review request: https://reviews.apache.org/r/41821
The following exit criteria are addressed in this patch:

- Hive will read Parquet MR int96 timestamp data and adjust values using a time 
zone from a table property, if set, or using the local time zone if it is 
absent. No adjustment will be applied to data written by Impala.
- Hive will write Parquet int96 timestamps using a time zone adjustment from 
the same table property, if set, or using the local time zone if it is absent. 
This keeps the data in the table consistent.
- New tables created by Hive will set the table property to UTC if the global 
option to set the property for new tables is enabled.
- Tables created using CREATE TABLE and CREATE TABLE LIKE FILE will not set the 
property unless the global setting to do so is enabled.
- Tables created using CREATE TABLE LIKE  will copy the property 
of the table that is copied.

To set the timezone table property, use this:
  create table tbl1 (ts timestamp) stored as parquet tblproperties 
('parquet.mr.int96.write.zone'='PST');

To set UTC as default timezone table property on new tables created, use this: 
  set parquet.mr.int96.enable.utc.write.zone=true;
  create table tbl2 (ts timestamp) stored as parquet;


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
0e4f1f6610d2cdf543f106061a21ab465899737d 
  data/files/impala_int96_timestamp.parq PRE-CREATION 
  
itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java
 a14b7900afb00a7d304b0dc4f6482a2b87716919 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
adabe70fa8f0fe1b990c6ac578a14ff5af06fc93 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java 
379a9135d9c631b2f473976b00f3dc87f9fec0c4 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java 
167f9b6516ac093fa30091daf6965de25e3eccb3 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
76d93b8e02a98c95da8a534f2820cd3e77b4bb43 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
 604cbbcc2a9daa8594397e315cc4fd8064cc5005 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java
 ac430a67682d3dcbddee89ce132fc0c1b421e368 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/timestamp/NanoTimeUtils.java 
3fd75d24f3fda36967e4957e650aec19050b22f8 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
 b6a1a7a64db6db0bf06d2eea70a308b88f06156e 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
 3d5c6e6a092dd6a0303fadc6a244dad2e31cd853 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
 f4621e5dbb81e8d58c4572c901ec9d1a7ca8c012 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java 
6b7b50a25e553629f0f492e964cc4913417cb500 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 
934ae9f255d0c4ccaa422054fcc9e725873810d4 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestVectorizedColumnReader.java
 670bfa609704d3001dd171b703b657f57fbd4c74 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/VectorizedColumnReaderTestBase.java
 f537ceee505c5f41d513df3c89b63453012c9979 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
 PRE-CREATION 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampUtils.java
 ec6def5b9ac5f12e6a7cb24c4f4998a6ca6b4a8e 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/timestamp/TestNanoTimeUtils.java
 PRE-CREATION 
  ql/src/test/queries/clientpositive/parquet_int96_timestamp.q PRE-CREATION 
  ql/src/test/queries/clientpositive/parquet_timestamp_conversion.q 
PRE-CREATION 
  ql/src/test/results/clientpositive/parquet_int96_timestamp.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/parquet_timestamp_conversion.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/56334/diff/


Testing
---

qtest and unit tests added.


Thanks,

Barna Zsombor Klara



Re: Review Request 56334: HIVE-12767: Implement table property to address Parquet int96 timestamp bug

2017-02-13 Thread Barna Zsombor Klara

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56334/
---

(Updated Feb. 13, 2017, 12:59 p.m.)


Review request for hive, Ryan Blue and Sergio Pena.


Changes
---

Clarified comments in NanoTimeUtils and addressed Zoltan's comment about the 
getUTCCalendar method.


Bugs: HIVE-12767
https://issues.apache.org/jira/browse/HIVE-12767


Repository: hive-git


Description
---

This is a followup on this review request: https://reviews.apache.org/r/41821
The following exit criteria are addressed in this patch:

- Hive will read Parquet MR int96 timestamp data and adjust values using a time 
zone from a table property, if set, or using the local time zone if it is 
absent. No adjustment will be applied to data written by Impala.
- Hive will write Parquet int96 timestamps using a time zone adjustment from 
the same table property, if set, or using the local time zone if it is absent. 
This keeps the data in the table consistent.
- New tables created by Hive will set the table property to UTC if the global 
option to set the property for new tables is enabled.
- Tables created using CREATE TABLE and CREATE TABLE LIKE FILE will not set the 
property unless the global setting to do so is enabled.
- Tables created using CREATE TABLE LIKE  will copy the property 
of the table that is copied.

To set the timezone table property, use this:
  create table tbl1 (ts timestamp) stored as parquet tblproperties 
('parquet.mr.int96.write.zone'='PST');

To set UTC as default timezone table property on new tables created, use this: 
  set parquet.mr.int96.enable.utc.write.zone=true;
  create table tbl2 (ts timestamp) stored as parquet;


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
0e4f1f6610d2cdf543f106061a21ab465899737d 
  data/files/impala_int96_timestamp.parq PRE-CREATION 
  
itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java
 a14b7900afb00a7d304b0dc4f6482a2b87716919 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
adabe70fa8f0fe1b990c6ac578a14ff5af06fc93 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java 
379a9135d9c631b2f473976b00f3dc87f9fec0c4 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java 
167f9b6516ac093fa30091daf6965de25e3eccb3 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
76d93b8e02a98c95da8a534f2820cd3e77b4bb43 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
 604cbbcc2a9daa8594397e315cc4fd8064cc5005 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java
 ac430a67682d3dcbddee89ce132fc0c1b421e368 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/timestamp/NanoTimeUtils.java 
3fd75d24f3fda36967e4957e650aec19050b22f8 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
 b6a1a7a64db6db0bf06d2eea70a308b88f06156e 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
 3d5c6e6a092dd6a0303fadc6a244dad2e31cd853 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
 f4621e5dbb81e8d58c4572c901ec9d1a7ca8c012 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java 
6b7b50a25e553629f0f492e964cc4913417cb500 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 
934ae9f255d0c4ccaa422054fcc9e725873810d4 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestVectorizedColumnReader.java
 670bfa609704d3001dd171b703b657f57fbd4c74 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/VectorizedColumnReaderTestBase.java
 f537ceee505c5f41d513df3c89b63453012c9979 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
 PRE-CREATION 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampUtils.java
 ec6def5b9ac5f12e6a7cb24c4f4998a6ca6b4a8e 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/timestamp/TestNanoTimeUtils.java
 PRE-CREATION 
  ql/src/test/queries/clientpositive/parquet_int96_timestamp.q PRE-CREATION 
  ql/src/test/queries/clientpositive/parquet_timestamp_conversion.q 
PRE-CREATION 
  ql/src/test/results/clientpositive/parquet_int96_timestamp.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/parquet_timestamp_conversion.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/56334/diff/


Testing
---

qtest and unit tests added.


Thanks,

Barna Zsombor Klara



Re: Review Request 56334: HIVE-12767: Implement table property to address Parquet int96 timestamp bug

2017-02-13 Thread Barna Zsombor Klara


> On Feb. 10, 2017, 4:56 p.m., Zoltan Ivanfi wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java,
> >  line 150
> > 
> >
> > This issue is scattered around in different parts of the code, but this 
> > is where I first noticed it: PARQUET_INT96_DEFAULT_WRITE_ZONE is set to UTC 
> > by default and the time zone adjustment is set to this value if not 
> > specified by a table property.
> > 
> > This does not match the exit criteria, which states that the local 
> > timezone must be used if the table property is missing. (There is a 
> > separate global switch controlling the default value of the table property 
> > to set when creating new tables, but that's a different thing.)

I think this setting is the correct one. If you check NanoTimeUtils, the 
calendar we pass in is used to adjust the default calendar (relative to UTC). 
If we want to keep the default behaviour we should pass in the zero 
adjustment, which is UTC/GMT.
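
For illustration only (not NanoTimeUtils itself, and the direction of the shift 
here is an assumption): the adjustment is the named zone's UTC offset, so 
passing UTC/GMT means a zero shift.

{code}
import java.util.TimeZone;

// Hypothetical helper showing why UTC acts as the "zero adjustment":
// the delta applied is the writer zone's offset from UTC, which is 0 for UTC.
public class ZeroAdjustmentSketch {
  static long adjustFromWriterZone(long utcMillis, TimeZone writerZone) {
    return utcMillis + writerZone.getOffset(utcMillis);  // 0 for "UTC"/"GMT"
  }
}
{code}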


- Barna Zsombor


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56334/#review165133
---


On Feb. 10, 2017, 1:41 p.m., Barna Zsombor Klara wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56334/
> ---
> 
> (Updated Feb. 10, 2017, 1:41 p.m.)
> 
> 
> Review request for hive, Ryan Blue and Sergio Pena.
> 
> 
> Bugs: HIVE-12767
> https://issues.apache.org/jira/browse/HIVE-12767
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a followup on this review request: https://reviews.apache.org/r/41821
> The following exit criteria are addressed in this patch:
> 
> - Hive will read Parquet MR int96 timestamp data and adjust values using a 
> time zone from a table property, if set, or using the local time zone if it 
> is absent. No adjustment will be applied to data written by Impala.
> - Hive will write Parquet int96 timestamps using a time zone adjustment from 
> the same table property, if set, or using the local time zone if it is 
> absent. This keeps the data in the table consistent.
> - New tables created by Hive will set the table property to UTC if the global 
> option to set the property for new tables is enabled.
> - Tables created using CREATE TABLE and CREATE TABLE LIKE FILE will not set 
> the property unless the global setting to do so is enabled.
> - Tables created using CREATE TABLE LIKE  will copy the property 
> of the table that is copied.
> 
> To set the timezone table property, use this:
>   create table tbl1 (ts timestamp) stored as parquet tblproperties 
> ('parquet.mr.int96.write.zone'='PST');
> 
> To set UTC as default timezone table property on new tables created, use 
> this: 
>   set parquet.mr.int96.enable.utc.write.zone=true;
>   create table tbl2 (ts timestamp) stored as parquet;
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
> b27b663b94f41a8250b79139ed9f7275b10cf9a3 
>   data/files/impala_int96_timestamp.parq PRE-CREATION 
>   
> itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java
>  a14b7900afb00a7d304b0dc4f6482a2b87716919 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
> adabe70fa8f0fe1b990c6ac578a14ff5af06fc93 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java
>  379a9135d9c631b2f473976b00f3dc87f9fec0c4 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java 
> 167f9b6516ac093fa30091daf6965de25e3eccb3 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
> 76d93b8e02a98c95da8a534f2820cd3e77b4bb43 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
>  604cbbcc2a9daa8594397e315cc4fd8064cc5005 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java
>  ac430a67682d3dcbddee89ce132fc0c1b421e368 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
> PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/timestamp/NanoTimeUtils.java 
> 3fd75d24f3fda36967e4957e650aec19050b22f8 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
>  b6a1a7a64db6db0bf06d2eea70a308b88f06156e 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
>  3d5c6e6a092dd6a0303fadc6a244dad2e31cd853 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
>  f4621e5dbb81e8d58c4572c901ec9d1a7ca8c012 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
>  6b7b50a2

[jira] [Created] (HIVE-15889) LLAP: Some tasks still run after hive cli is shutdown

2017-02-13 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HIVE-15889:
---

 Summary: LLAP: Some tasks still run after hive cli is shutdown
 Key: HIVE-15889
 URL: https://issues.apache.org/jira/browse/HIVE-15889
 Project: Hive
  Issue Type: Bug
Reporter: Rajesh Balamohan


E.g.: in the cross-product case, the tight loop in the merge join operator 
ignores any interrupt or abort-flag checks, causing the tasks to remain in the 
running state even after the client CLI has quit.

An intentionally written cross-product query to simulate this:
{noformat}
hive> select count(1) from lineitem, orders;
{noformat}

Even after the CLI is quit, LLAP continues executing the task for quite some 
time. Example stack trace:

{noformat}
"TezTaskRunner" #1945 daemon prio=5 os_prio=0 tid=0x7fe9e43a5000 nid=0x4c8 
runnable [0x7fc8d881b000]
   java.lang.Thread.State: RUNNABLE
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:453)
at 
org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.mergeJoinComputeKeys(CommonMergeJoinOperator.java:603)
at 
org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.process(CommonMergeJoinOperator.java:207)
at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:351)
at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:282)
at 
org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.fetchOneRow(CommonMergeJoinOperator.java:410)
at 
org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.fetchNextGroup(CommonMergeJoinOperator.java:381)
at 
org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.doFirstFetchIfNeeded(CommonMergeJoinOperator.java:491)
at 
org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.process(CommonMergeJoinOperator.java:209)
at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:351)
at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:282)
at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:319)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
{noformat}
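
For illustration, a minimal sketch of the kind of periodic abort-flag check 
such a tight loop needs (illustrative, not the actual CommonMergeJoinOperator 
code; names are hypothetical):

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: check a shared abort flag every few thousand rows so a
// cancelled query actually stops instead of spinning until completion.
class MergeLoopSketch {
  final AtomicBoolean abortFlag = new AtomicBoolean(false);

  void joinLoop() throws InterruptedException {
    long rows = 0;
    while (hasMoreRows()) {
      if ((rows++ & 0xFFF) == 0 && abortFlag.get()) {  // cheap periodic check
        throw new InterruptedException("query aborted; stopping merge join loop");
      }
      processRow();  // stands in for the per-row join work
    }
  }

  boolean hasMoreRows() { return true; }  // placeholder for the real row source
  void processRow() {}                    // placeholder for the real join work
}
{code}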






[jira] [Created] (HIVE-15888) PPD optimizer failed when query has the same alias with subquery

2017-02-13 Thread Walter Wu (JIRA)
Walter Wu created HIVE-15888:


 Summary: PPD optimizer failed when query has the same alias with 
subquery
 Key: HIVE-15888
 URL: https://issues.apache.org/jira/browse/HIVE-15888
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer
Affects Versions: 1.2.1
Reporter: Walter Wu


Example:
select * 
from dpdim_employee_org_d c 
join 
(
select a.* from dpmid_md_organization a
left outer join dpmid_md_organization b 
on a.organizationid = b.superiororganizationid and b.hisdate = '2016-10-05'
where a.hisdate = '2016-09-05'
and b.organizationid is null 
) b 
on c.org_id = b.organizationid 
and c.hp_cal_dt = '2016-09-05' limit 10;

Description:
When PPD optimization is enabled, this query returns an empty result. If we 
disable PPD optimization, or replace the subquery alias 'b' with 'b1', the 
query works correctly.
Explaining the query shows that after PPD optimization the Filter Operator 
predicate changed from 'predicate: superiororganizationid is not null (type: 
boolean)' to 'predicate: false (type: boolean)'.
The subquery has a filter predicate 'b.organizationid is null', where 
'b.organizationid' should resolve to 'b:b.organizationid'. The outer query has 
a filter predicate 'b.organizationid is not null', where 'b.organizationid' 
should resolve to 'b:a.organizationid'. But the row schema looks up the column 
info only by tabAlias 'b' and alias 'organizationid', so the PPD optimizer 
combines 'b.organizationid is not null' and 'b.organizationid is null' into 
constant false.





[hpl/sql] HIVE-15849 need your review and give suggestions, thanks

2017-02-13 Thread Hui Fei
hi all
I was testing hplsql and found a problem.
The test comes from http://www.hplsql.org/udf

The query is SELECT hello(name) FROM users;

I get an empty string in the result, which is not expected.

Starting pre-SQL statement
Starting pre-SQL statement
Starting pre-SQL statement
Starting pre-SQL statement
Starting pre-SQL statement
Starting query
Query executed successfully (2.30 sec)
Ln:8 SELECT completed successfully
Ln:8 Standalone SELECT executed: 1 columns in the result set
Hello, !
Hello, !

After fixing it, I get the correct result:

Starting pre-SQL statement
Starting pre-SQL statement
Starting pre-SQL statement
Starting pre-SQL statement
Starting pre-SQL statement
Starting query
Query executed successfully (2.35 sec)
Ln:8 SELECT completed successfully
Ln:8 Standalone SELECT executed: 1 columns in the result set
Hello, fei!
Hello, fei!

The JIRA link is https://issues.apache.org/jira/browse/HIVE-15849; there is a 
detailed description there.

Could anyone please review it and give suggestions, hplsql experts? Thanks