Re: Review Request 51435: HIVE-14426 Extensive logging on info level in WebHCat

2016-08-30 Thread Peter Vary


> On Aug. 27, 2016, 3:20 a.m., Chaoyu Tang wrote:
> > common/src/java/org/apache/hadoop/hive/conf/HiveConfUtil.java, line 72
> > 
> >
> > hive.conf.hidden.list is usually not specified by default in 
> > configurations other than HiveConf (e.g. those from hdfs-site.xml), so 
> > hiddenListStr would be returned as null for these configurations, and the 
> > properties listed in the HiveConf hidden list still could not be stripped 
> > from this Config, right?
> > I think the S3 credentials could be defined in the hdfs configuration, 
> > and since they are specified in the HiveConf hidden list, they should be 
> > stripped; have we tested them?
> 
> Peter Vary wrote:
> Thanks Chaoyu!
> 
> You are absolutely right!
> I missed this because when it was not defined in the actual 
> configuration XMLs, the default value was used, which removed the S3 
> credentials anyway.
> 
> Added it to the interestingPropNames, so it will be copied from the 
> HiveConf, if there is one. If it is neither in the following files nor in 
> the hiveConf, then we still use the default value.
> HCatalog specific config files: "core-default.xml", "core-site.xml", 
> "mapred-default.xml", "mapred-site.xml", "hdfs-site.xml", 
> "webhcat-default.xml", "webhcat-site.xml"
> 
> Thanks for the catch.
> 
> Any other ideas? I might have missed something else; HCatalog is new to 
> me.
> 
> Thanks,
> Peter
> 
> Chaoyu Tang wrote:
> Hi Peter,
> Here is how I understand the code:
> AppConf loads the configurations from the xml files specified in 
> HADOOP_CONF_FILENAMES and TEMPLETON_CONF_FILENAMES, which do not include 
> hive-site.xml, so it might contain properties such as the S3 credentials, 
> which are probably specified in hdfs-site.xml. Since AppConf does not define 
> the property hive.conf.hidden.list, the line
> String hiddenListStr = HiveConf.getVar(configuration, 
> HiveConf.ConfVars.HIVE_CONF_HIDDEN_LIST);
> returns null when HiveConfUtil.dumpConfig(this, sb) is called in 
> AppConf.dumpEnvironent and the passed-in configuration parameter is an 
> AppConf instance, as in this case. Therefore the S3 credentials in this 
> AppConf could not be stripped.
> The new change only adds (or appends) the hiveConf hive.conf.hidden.list 
> key/value pair to the value of the property HIVE_PROPS_NAME 
> (templeton.hive.properties), but HIVE_CONF_HIDDEN_LIST is still undefined 
> in AppConf, and properties like the S3 credentials are still not stripped 
> even if they have been specified in the hiveConf hidden list. Does it make 
> sense? So in order to strip the properties specified in 
> hive.conf.hidden.list, the property hive.conf.hidden.list should be added 
> to AppConf, right?

Hi Chaoyu,

When starting Main.java from IntelliJ in debug mode, I found the following:
- AppConf loads, as you wrote, the HADOOP_CONF_FILENAMES and 
TEMPLETON_CONF_FILENAMES - no hive-site.xml here
- If hive.conf.hidden.list is in any of the files above, then it is used 
(it is not there in the default case - probably not a good idea to have this 
value there anyway)
- If my understanding is correct, when HCatalog uses the Hive Metastore, it 
expects to have hive-site.xml on the classpath - see the comments of the 
AppConfig.handleHiveProperties method. If this is true, then we are OK: 
hive.conf.hidden.list is now set, the new patch (with the changes after your 
previous comment) copies the value into the configuration, and the 
credentials are removed
- If hive-site.xml is not on the classpath, then I am not sure what happens 
when we try to create a HiveConf object in the first line of the 
handleHiveProperties method, but if no exception is thrown, then 
HiveConf.getVar(configuration, HiveConf.ConfVars.HIVE_CONF_HIDDEN_LIST) will 
return the default value defined in the HiveConf object - at least this is 
what I saw during my tests.
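For what it is worth, the stripping behaviour under discussion boils down to something like the sketch below. This is plain Java over a Map rather than the real Hadoop Configuration/HiveConf API, and the property names and default hidden list are illustrative assumptions, not the actual HiveConfUtil code:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Illustrative sketch of stripping entries named in a comma-separated hidden list. */
public class HiddenListSketch {
  // Hypothetical default list; the real one lives in HiveConf.
  static final String DEFAULT_HIDDEN_LIST =
      "javax.jdo.option.ConnectionPassword,fs.s3a.secret.key";

  /** Uses the config's own hidden list if set, falling back to the default otherwise. */
  static void stripHidden(Map<String, String> conf) {
    String hiddenListStr = conf.getOrDefault("hive.conf.hidden.list", DEFAULT_HIDDEN_LIST);
    Set<String> hidden = new HashSet<>(Arrays.asList(hiddenListStr.split(",")));
    for (Map.Entry<String, String> e : conf.entrySet()) {
      if (hidden.contains(e.getKey())) {
        e.setValue("");  // blank the sensitive value, keep the key visible in the dump
      }
    }
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("fs.s3a.secret.key", "SECRET");
    conf.put("templeton.port", "50111");
    stripHidden(conf);
    System.out.println(conf.get("fs.s3a.secret.key").isEmpty()); // true
    System.out.println(conf.get("templeton.port"));              // 50111
  }
}
```

The whole question in this thread is which configuration object supplies hive.conf.hidden.list: if the dumped config does not define it, the list must either be copied in beforehand or taken from a default.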

You most probably know HCatalog better than I do: do you think it is a valid 
use case not to have a hive-site.xml on the classpath? If so, what should our 
solution be here? Adding the hive.conf.* variables to the webhcat-site.xml 
files? Adding a new Templeton-specific configuration variable?

Thanks for your time, and review,
Peter


- Peter


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51435/#review147068
---


On Aug. 30, 2016, 6:34 a.m., Peter Vary wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/51435/
> ---
> 
> (Updated Aug. 30, 2016, 6:34 a.m.)
> 
> 
> Review request for hive, Chaoyu Tang, Gabor Szadovszky, Sergio Pena, and 
> Barna Zsombor Klara.
> 
>

Re: Review Request 51435: HIVE-14426 Extensive logging on info level in WebHCat

2016-08-30 Thread Peter Vary


> On Aug. 31, 2016, 1:48 a.m., Chaoyu Tang wrote:
> > hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java,
> >  line 488
> > 
> >
> > I missed this in last review.
> > The Utilities class is from the ql package and packaged in hive-exec.jar; 
> > I am not currently sure whether this jar is mandatory for the WebHCat 
> > server at runtime. If not, will that mean we need an additional runtime 
> > jar (hive-exec.jar)?
> > Would it be better to refactor maskIfPassword from ql to the common package?

I did not have to add any new dependency to the pom.xml - Utilities was 
just there :)

I checked it now, and it comes in through hive-hcatalog-core, which depends on 
it directly.
- If nothing else, you are right that we should change the pom.xml of svr to 
reflect the direct dependency.
- Or we could refactor it from the ql package as you suggested.
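For context, a maskIfPassword-style helper presumably does something along the lines of the sketch below; this is a hypothetical re-implementation for illustration (the mask token and the key test are assumptions), not the actual ql Utilities code:

```java
/** Illustrative sketch of a maskIfPassword-style helper. */
public class MaskSketch {
  /** Returns a masked value when the key looks like a password key, the original value otherwise. */
  static String maskIfPassword(String key, String value) {
    if (key != null && value != null && key.toLowerCase().contains("password")) {
      return "###MASKED###";  // hypothetical mask token
    }
    return value;
  }

  public static void main(String[] args) {
    System.out.println(maskIfPassword("javax.jdo.option.ConnectionPassword", "hunter2"));
    System.out.println(maskIfPassword("templeton.port", "50111"));
  }
}
```

Because such a helper is pure string logic, moving it from ql to common, as suggested, should carry no extra dependencies.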

What is your suggestion? Which direction should we take?

Thanks,
Peter


- Peter


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51435/#review147400
---


On Aug. 30, 2016, 6:34 a.m., Peter Vary wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/51435/
> ---
> 
> (Updated Aug. 30, 2016, 6:34 a.m.)
> 
> 
> Review request for hive, Chaoyu Tang, Gabor Szadovszky, Sergio Pena, and 
> Barna Zsombor Klara.
> 
> 
> Bugs: HIVE-14426
> https://issues.apache.org/jira/browse/HIVE-14426
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Used HiveConf.stripHiddenConfigurations to remove sensitive information from 
> configuration dump. Had to refactor, so it could be applied to simple 
> Configuration objects too, like AppConfig.
> 
> Used Utilities.maskIfPassword to remove sensitive information from property 
> dump
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 14a538b 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConfUtil.java 073a978 
>   
> hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
>  dd1208b 
>   
> hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
>  201e647 
> 
> Diff: https://reviews.apache.org/r/51435/diff/
> 
> 
> Testing
> ---
> 
> Manually - checked both
> 
> 
> Thanks,
> 
> Peter Vary
> 
>



Re: Review Request 51435: HIVE-14426 Extensive logging on info level in WebHCat

2016-08-30 Thread Chaoyu Tang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51435/#review147400
---




hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
 (line 488)


I missed this in last review.
The Utilities class is from the ql package and packaged in hive-exec.jar; I am 
not currently sure whether this jar is mandatory for the WebHCat server at 
runtime. If not, will that mean we need an additional runtime jar (hive-exec.jar)?
Would it be better to refactor maskIfPassword from ql to the common package?


- Chaoyu Tang





[jira] [Created] (HIVE-14675) Ensure directories are cleaned up on test cleanup in QTestUtil

2016-08-30 Thread Siddharth Seth (JIRA)
Siddharth Seth created HIVE-14675:
-

 Summary: Ensure directories are cleaned up on test cleanup in 
QTestUtil
 Key: HIVE-14675
 URL: https://issues.apache.org/jira/browse/HIVE-14675
 Project: Hive
  Issue Type: Sub-task
Reporter: Siddharth Seth


Need to verify whether they are cleaned up or not. There are 4-5 different 
directories involved. If I'm not mistaken, they get cleaned up before each test 
invocation via mvn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 51435: HIVE-14426 Extensive logging on info level in WebHCat

2016-08-30 Thread Chaoyu Tang


> On Aug. 27, 2016, 3:20 a.m., Chaoyu Tang wrote:
> > common/src/java/org/apache/hadoop/hive/conf/HiveConfUtil.java, line 72
> > 
> >
> > hive.conf.hidden.list is usually not specified by default in 
> > configurations other than HiveConf (e.g. those from hdfs-site.xml), so 
> > hiddenListStr would be returned as null for these configurations, and the 
> > properties listed in the HiveConf hidden list are still could not be stripped 
> > from this Config, right?
> > I think the S3 credentials could be defined in the hdfs configuration, 
> > and since they are specified in the HiveConf hidden list, they should be 
> > stripped; have we tested them?
> 
> Peter Vary wrote:
> Thanks Chaoyu!
> 
> You are absolutely right!
> I missed this because when it was not defined in the actual 
> configuration XMLs, the default value was used, which removed the S3 
> credentials anyway.
> 
> Added it to the interestingPropNames, so it will be copied from the 
> HiveConf, if there is one. If it is neither in the following files nor in 
> the hiveConf, then we still use the default value.
> HCatalog specific config files: "core-default.xml", "core-site.xml", 
> "mapred-default.xml", "mapred-site.xml", "hdfs-site.xml", 
> "webhcat-default.xml", "webhcat-site.xml"
> 
> Thanks for the catch.
> 
> Any other ideas? I might have missed something else; HCatalog is new to 
> me.
> 
> Thanks,
> Peter

Hi Peter,
Here is how I understand the code:
AppConf loads the configurations from the xml files specified in 
HADOOP_CONF_FILENAMES and TEMPLETON_CONF_FILENAMES, which do not include 
hive-site.xml, so it might contain properties such as the S3 credentials, 
which are probably specified in hdfs-site.xml. Since AppConf does not define 
the property hive.conf.hidden.list, the line
String hiddenListStr = HiveConf.getVar(configuration, 
HiveConf.ConfVars.HIVE_CONF_HIDDEN_LIST);
returns null when HiveConfUtil.dumpConfig(this, sb) is called in 
AppConf.dumpEnvironent and the passed-in configuration parameter is an 
AppConf instance, as in this case. Therefore the S3 credentials in this 
AppConf could not be stripped.
The new change only adds (or appends) the hiveConf hive.conf.hidden.list 
key/value pair to the value of the property HIVE_PROPS_NAME 
(templeton.hive.properties), but HIVE_CONF_HIDDEN_LIST is still undefined 
in AppConf, and properties like the S3 credentials are still not stripped 
even if they have been specified in the hiveConf hidden list. Does it make 
sense? So in order to strip the properties specified in 
hive.conf.hidden.list, the property hive.conf.hidden.list should be added to 
AppConf, right?


- Chaoyu


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51435/#review147068
---





[jira] [Created] (HIVE-14674) Incorrect syntax near the keyword 'with' using MS SQL Server

2016-08-30 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-14674:
-

 Summary:  Incorrect syntax near the keyword 'with' using MS SQL 
Server
 Key: HIVE-14674
 URL: https://issues.apache.org/jira/browse/HIVE-14674
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Transactions
Affects Versions: 2.1.0, 1.3.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Critical


addForUpdateClause() in TxnHandler incorrectly handles queries with WHERE clause





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-14673) Orc orc_merge_diff_fs.q and orc_llap.q test improvement

2016-08-30 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-14673:


 Summary: Orc orc_merge_diff_fs.q and orc_llap.q test improvement
 Key: HIVE-14673
 URL: https://issues.apache.org/jira/browse/HIVE-14673
 Project: Hive
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 2.2.0
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran


orc_merge_diff_fs.q and orc_llap.q are slow (350.487s and 290.877s 
respectively). We can move orc_merge_diff_fs.q to MiniLlap as we are testing 
merge across filesystems and there are several orc merge tests for mr. 
orc_llap.q seems to be creating a lot of temp tables and running sum(hash(*)) 
against them, which seems to be slow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-14672) Add timestamps to startup message in hive scripts.

2016-08-30 Thread Naveen Gangam (JIRA)
Naveen Gangam created HIVE-14672:


 Summary: Add timestamps to startup message in hive scripts.
 Key: HIVE-14672
 URL: https://issues.apache.org/jira/browse/HIVE-14672
 Project: Hive
  Issue Type: Improvement
  Components: Hive
Affects Versions: 2.0.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam
Priority: Minor


If Hive services are started as background processes (almost certainly the 
case) with stdout and stderr redirected to a file, certain fatal errors, like 
OutOfMemoryErrors, are printed to stderr. Such errors often do not have a 
timestamp associated with them. So over time, after multiple restarts of the 
Hive processes, the stdout/err redirect log looks like this
{code}
Starting Hive Metastore Server
Starting Hive Metastore Server
Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
exceeds VM limit
Starting Hive Metastore Server
{code}

It would be useful to have a timestamp associated with the "Starting Hive 
Metastore Server" message, to help debug when such errors may have occurred, 
so we could look in the corresponding logs to narrow down the issue.
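The actual change would live in the startup shell scripts, but the idea is just prefixing the message with a timestamp; a sketch (the format and message text are assumptions):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

/** Illustrative sketch: prefix a startup message with a timestamp. */
public class TimestampedStartup {
  static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

  static String stamp(String message) {
    return LocalDateTime.now().format(FMT) + " " + message;
  }

  public static void main(String[] args) {
    // e.g. "2016-08-30 12:17:21 Starting Hive Metastore Server"
    System.out.println(stamp("Starting Hive Metastore Server"));
  }
}
```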
 
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 51312: HIVE-14589 add consistent node replacement to LLAP for splits

2016-08-30 Thread Siddharth Seth

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51312/#review147377
---



Unrelated to this patch - any idea why 'uniq' is static? (private static final 
UUID uniq = UUID.randomUUID();)

- Siddharth Seth


On Aug. 23, 2016, 1:41 a.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/51312/
> ---
> 
> (Updated Aug. 23, 2016, 1:41 a.m.)
> 
> 
> Review request for hive and Prasanth_J.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> see jira
> 
> 
> Diffs
> -
> 
>   
> llap-client/src/java/org/apache/hadoop/hive/llap/registry/ServiceInstanceSet.java
>  99ead9b 
>   
> llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapFixedRegistryImpl.java
>  e9456f2 
>   
> llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapZookeeperRegistryImpl.java
>  64d2617 
>   
> llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/SlotZnode.java 
> PRE-CREATION 
>   
> llap-client/src/java/org/apache/hadoop/hive/llap/security/LlapTokenClient.java
>  921e050 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapStatusServiceDriver.java
>  17ce69b 
>   
> llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskSchedulerService.java
>  efd774d 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HostAffinitySplitLocationProvider.java
>  c06499e 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/Utils.java 8a4fc08 
>   
> ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java
>  d98a5ff 
> 
> Diff: https://reviews.apache.org/r/51312/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



Re: Review Request 51312: HIVE-14589 add consistent node replacement to LLAP for splits

2016-08-30 Thread Siddharth Seth

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51312/#review147362
---



Possible to add a test (or a few)? (I don't see one in the patch attached to 
the jira.)

Question: How long does it take for a persistent node to go away - in case of a 
JVM crash / node crash. Is it possible that a new process starts up within the 
duration it takes for the old node-slot entry to be removed? (It gets added to 
the end as a result)

In terms of a permanent cluster size reduction - I think we should at least 
take some steps to handle this scenario, or an equivalent scenario where it 
takes a long time (minutes) for a node to come back. We have a force-locality 
scheduling mode; this will effectively cause new queries to start in a state 
where they cannot complete. Check for this and fail the query, or fall back to 
the next node for the split? The fallback to the next node is something that is 
going to come in soon anyway, to control locality when nodes are not available 
(currently it is always random, as opposed to 'random' being a configurable 
option :) )


llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapZookeeperRegistryImpl.java
 (line 104)


Move this into a separate path altogether, instead of listing and filtering 
at the same path each time?



llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapZookeeperRegistryImpl.java
 (line 110)


workersPath is no longer defined, so the comment can be deleted.



llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapZookeeperRegistryImpl.java
 (line 533)


There's a lot of similar code to read records within a loop, and a method to 
read records. Move it into a single method? This can be done in a separate 
jira as well.



llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapZookeeperRegistryImpl.java
 (line 547)


The size of this collection is used to determine the #knownWorkers in 
HostAffinitySplitLocationProvider. The size at the moment represents active 
nodes.

I don't think the intent is to return fewer nodes than previously existed if 
a node goes down?
1) Return n entries, where an entry can indicate that it is not active (and 
this will need to be acted upon by all clients), or
2) Add a getNumInstances interface, which can differ from the actual nodes 
available, along with a getNodeAtLocation interface?



llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapZookeeperRegistryImpl.java
 (line 554)


Here, as well as in other places, a node is only available once its slot has 
been registered. Otherwise it should not be visible to clients.



llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapZookeeperRegistryImpl.java
 (line 577)


Don't send back a node until its slot registration has completed. We don't 
know what position it will take. This logic puts such nodes at the end.



llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/SlotZnode.java 
(line 51)


Are there tests in curator for the said node, which we could borrow parts 
from?



llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/SlotZnode.java 
(line 128)


Nit: else {
  slots.add( ..)
}

Makes it a little more readable, and maintainable for future changes.



llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/SlotZnode.java 
(line 138)


Potential divide by 0? (new cluster)

What's the purpose of the 2.0f/approxWorkerCount ?



ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java
 (line 94)


This isn't part of the patch uploaded to jira?


- Siddharth Seth



[jira] [Created] (HIVE-14671) merge master into hive-14535

2016-08-30 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-14671:
---

 Summary: merge master into hive-14535
 Key: HIVE-14671
 URL: https://issues.apache.org/jira/browse/HIVE-14671
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 51542: HIVE-14652 incorrect results for not in on partition columns

2016-08-30 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51542/
---

(Updated Aug. 30, 2016, 9:56 p.m.)


Review request for hive and Jesús Camacho Rodríguez.


Repository: hive-git


Description
---

see jira


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java 
f9388e2 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java b2125ca 
  ql/src/test/queries/clientpositive/partition_condition_remover.q PRE-CREATION 
  ql/src/test/results/clientpositive/partition_condition_remover.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/pcs.q.out 0045c1c 

Diff: https://reviews.apache.org/r/51542/diff/


Testing
---


Thanks,

Sergey Shelukhin



Review Request 51542: HIVE-14652 incorrect results for not in on partition columns

2016-08-30 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51542/
---

Review request for hive and Jesús Camacho Rodríguez.


Summary (updated)
-

HIVE-14652  incorrect results for not in on partition columns


Repository: hive-git


Description (updated)
---

see jira


Diffs
-


Diff: https://reviews.apache.org/r/51542/diff/


Testing
---


Thanks,

Sergey Shelukhin



[jira] [Created] (HIVE-14670) org.apache.hadoop.hive.ql.TestMTQueries failure

2016-08-30 Thread Hari Sankar Sivarama Subramaniyan (JIRA)
Hari Sankar Sivarama Subramaniyan created HIVE-14670:


 Summary: org.apache.hadoop.hive.ql.TestMTQueries failure
 Key: HIVE-14670
 URL: https://issues.apache.org/jira/browse/HIVE-14670
 Project: Hive
  Issue Type: Sub-task
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan


Introduced by HIVE-14627. We used to have a common q_init file for MR and 
CliDriver tests until HIVE-14627 was committed. Now that the init files are 
separate and join1.q and groupby2.q are run as part of the minimr tests, we 
cannot use these tests to test multi-threaded queries with the same setup 
file, because they would result in different stats (due to the way the init 
scripts are written). The easy fix would be to substitute join1.q and 
groupby2.q with two files that actually run in CliDriver mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: More than one table created at the same location

2016-08-30 Thread Thejas Nair
Naveen,
Can you please verify if you create these tables as external tables the
results are correct ?
In the case of managed tables, the assumption is that there is a 1:1 mapping
between tables and locations, and that all updates to a table are made through
Hive. With that assumption, Hive relies on stats to return results for queries
like count(*).


On Tue, Aug 30, 2016 at 4:18 AM, Abhishek Somani  wrote:

> For the 2nd table (after both inserts are over), isn't the returned count
> expected to be 4? In that case, isn't the bug that the count was
> returned wrong (maybe from the stats, as mentioned) rather than the fact that
> another table was allowed to be created at the same location?
>
> I might be very wrong, so pardon my ignorance.
>
> On Tue, Aug 30, 2016 at 3:06 AM, Alan Gates  wrote:
>
> > Note that Hive doesn’t track individual files, just which directory a
> > table stores its files in.  So we wouldn’t expect this to work.  The bug
> is
> > more that Hive doesn’t detect that two tables are trying to use the same
> > directory.  I’m not sure we’re anxious to fix this since it would mean
> when
> > creating a table Hive would need to search all existing tables to make
> sure
> > none of them are using the directory the new table wants to use.
> >
> > Alan.
> >
> > > On Aug 30, 2016, at 04:17, Sergey Shelukhin 
> > wrote:
> > >
> > > This is a bug, or rather an unexpected usage. I suspect the correct
> count
> > > value is coming from statistics.
> > > Can you file a JIRA?
> > >
> > > On 16/8/29, 00:51, "naveen mahadevuni"  wrote:
> > >
> > >> Hi,
> > >>
> > >> Is the following behavior a bug? I believe at least one part of it is
> a
> > >> bug. I created two Hive tables at the same location and inserted rows
> in
> > >> two tables. count(*) returns the correct count for each individual
> > table,
> > >> but SELECT * on one tables reads the rows from other table files too.
> > >>
> > >> CREATE TABLE test1 (col1 INT, col2 INT)
> > >> stored as orc
> > >> LOCATION '/apps/hive/warehouse/test1';
> > >>
> > >> insert into test1 values(1,2);
> > >> insert into test1 values(3,4);
> > >>
> > >> hive> select count(*) from test1;
> > >> OK
> > >> 2
> > >> Time taken: 0.177 seconds, Fetched: 1 row(s)
> > >>
> > >>
> > >> CREATE TABLE test2 (col1 INT, col2 INT)
> > >> stored as orc
> > >> LOCATION '/apps/hive/warehouse/test1';
> > >>
> > >> insert into test2 values(1,2);
> > >> insert into test2 values(3,4);
> > >>
> > >> hive> select count(*) from test2;
> > >> OK
> > >> 2
> > >> Time taken: 2.683 seconds, Fetched: 1 row(s)
> > >>
> > >> -- SELECT * fetches 4 records where as COUNT(*) above returns count of
> > 2.
> > >>
> > >> hive> select * from test2;
> > >> OK
> > >> 1   2
> > >> 3   4
> > >> 1   2
> > >> 3   4
> > >> Time taken: 0.107 seconds, Fetched: 4 row(s)
> > >> hive> select * from test1;
> > >> OK
> > >> 1   2
> > >> 3   4
> > >> 1   2
> > >> 3   4
> > >> Time taken: 0.054 seconds, Fetched: 4 row(s)
> > >>
> > >> Thanks,
> > >> Naveen
> > >
> >
> >
>


[jira] [Created] (HIVE-14669) Have the actual error reported when a q test fails instead of having to go through the logs

2016-08-30 Thread Siddharth Seth (JIRA)
Siddharth Seth created HIVE-14669:
-

 Summary: Have the actual error reported when a q test fails 
instead of having to go through the logs
 Key: HIVE-14669
 URL: https://issues.apache.org/jira/browse/HIVE-14669
 Project: Hive
  Issue Type: Sub-task
Reporter: Siddharth Seth


QTest runs end up invoking CliDriver.processLine. This, in most cases, reports 
a numeric exit code: 0 for success, non-zero for various different error types 
(which are defined all over the code).

Internally, CliDriver does have more information via CommandResult, but a lot 
of this is not exposed. That's alright for the end-user CLI (a command line 
tool translating the error to a code and message). However, it makes debugging 
QTests very difficult, since the log needs to be consulted each time.

Errors generated by the actual backend execution are mostly available to the 
client, and information about these should show up as well. (If it doesn't, we 
have a usability issue to fix.)

cc [~ekoifman]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-14668) TestMiniLlapCliDriver. hybridgrace_hashjoin_1 - the first attempt for a couple of tasks fails

2016-08-30 Thread Siddharth Seth (JIRA)
Siddharth Seth created HIVE-14668:
-

 Summary: TestMiniLlapCliDriver. hybridgrace_hashjoin_1 - the first 
attempt for a couple of tasks fails
 Key: HIVE-14668
 URL: https://issues.apache.org/jira/browse/HIVE-14668
 Project: Hive
  Issue Type: Sub-task
Reporter: Siddharth Seth
Priority: Critical


https://issues.apache.org/jira/browse/HIVE-14651 tried changing the maximum 
number of task attempts to 1 (i.e. fail fast if a task attempt fails).

This caused TestMiniLlapCliDriver.hybridgrace_hashjoin_1 to fail.
It turns out there is a first attempt which fails when running some queries in 
this q file. The same task succeeds on the next attempt.

{code}
2016-08-30T12:17:21,430 ERROR [TezTaskRunner 
(1472584599354_0001_43_01_00_0)] tez.MapRecordSource: Failed to close the 
reader; ignoring
java.io.IOException: java.lang.AssertionError
  at 
org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.rethrowErrorIfAny(LlapInputFormat.java:383)
 ~[hive-llap-server-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.close(LlapInputFormat.java:374)
 ~[hive-llap-server-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.hive.ql.io.HiveRecordReader.doClose(HiveRecordReader.java:50) 
~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.close(HiveContextAwareRecordReader.java:104)
 ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.close(TezGroupedSplitsInputFormat.java:177)
 ~[tez-mapreduce-0.8.local.jar:0.8.local]
  at org.apache.tez.mapreduce.lib.MRReaderMapred.close(MRReaderMapred.java:99) 
~[tez-mapreduce-0.8.local.jar:0.8.local]
  at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.closeReader(MapRecordSource.java:109)
 ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:73)
 ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:391)
 ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
 ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168) 
~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
 ~[tez-runtime-internals-0.8.local.jar:0.8.local]
  at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
 ~[tez-runtime-internals-0.8.local.jar:0.8.local]
  at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
 ~[tez-runtime-internals-0.8.local.jar:0.8.local]
  at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_60]
  at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_60]
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
 ~[hadoop-common-2.7.2.jar:?]
  at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
 ~[tez-runtime-internals-0.8.local.jar:0.8.local]
  at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
 ~[tez-runtime-internals-0.8.local.jar:0.8.local]
  at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) 
~[tez-common-0.8.local.jar:0.8.local]
  at 
org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
 ~[hive-llap-server-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_60]
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_60]
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_60]
  at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
Caused by: java.lang.AssertionError
  at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:398)
 ~[hive-llap-server-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:227)
 ~[hive-llap-server-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:224)
 ~[hive-llap-server-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
  at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_60]
  at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_60]
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
 ~[hadoop-common-2

Re: Review Request 51046: Support explain analyze in Hive

2016-08-30 Thread pengcheng xiong


> On Aug. 26, 2016, 11:44 p.m., Gabor Szadovszky wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java, line 132
> > 
> >
> > nit: Seems to break the previous commenting structure by adding the two 
> > parameters in one line. Might be better to separate them into two lines and 
> > add some comment for config (if it makes sense).

Moved them to two lines and added "config, //explainConfig"


> On Aug. 26, 2016, 11:44 p.m., Gabor Szadovszky wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AnnotateRunTimeStatsOptimizer.java,
> >  line 57
> > 
> >
> > I would recommend having the Logger instance private as it is created 
> > specifically for the actual class. (BTW: .getName() is not required as 
> > LoggerFactory has a getLogger(Class) as well)

Changed it to "private static final Logger LOG = 
LoggerFactory.getLogger(AnnotateRunTimeStatsOptimizer.class);"


- pengcheng


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51046/#review147033
---


On Aug. 26, 2016, 8:28 p.m., pengcheng xiong wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/51046/
> ---
> 
> (Updated Aug. 26, 2016, 8:28 p.m.)
> 
> 
> Review request for hive and Ashutosh Chauhan.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-14362
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/common/StatsSetupConst.java 559fffc 
>   itests/src/test/resources/testconfiguration.properties dfde5e2 
>   ql/src/java/org/apache/hadoop/hive/ql/Context.java 3785b1e 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 183ed82 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsTask.java a183b9b 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java 43231af 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java a59b781 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java ad48091 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java b0c3d3f 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java 47b5793 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/JoinOperator.java 08cc4b4 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/LimitOperator.java 9676d70 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ListSinkOperator.java 9bf363c 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java 546919b 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java eaf4792 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java ba71a1e 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/SerializationUtilities.java 
> 42c1003 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/StatsTask.java e1f7bd9 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java 6afe957 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/UDTFOperator.java a75b52a 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java 742edc8 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java 5ee54b9 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AnnotateRunTimeStatsOptimizer.java
>  PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/PhysicalOptimizer.java
>  49706b1 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsAutoGatherContext.java 
> 15a47dc 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java 
> d3aef41 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/ExplainConfiguration.java 
> PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/ExplainSQRewriteSemanticAnalyzer.java
>  8d7fd92 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/ExplainSemanticAnalyzer.java 
> 75753b0 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java 6715dbf 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g c411f5e 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/MapReduceCompiler.java 5b08ed2 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 66589fe 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryDiagnostic.java 57f9432 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java 114fa2f 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java 66a8322 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/UpdateDeleteSemanticAnalyzer.java 
> 33fbffe 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 
> 08278de 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/AbstractOperatorDesc.java 
> adec5c7 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/ExplainWork.java a213c83 
>   ql/src/java/org/apache/hadoop/h

ApacheCon Seville CFP closes September 9th

2016-08-30 Thread Rich Bowen
It's traditional. We wait for the last minute to get our talk proposals
in for conferences.

Well, the last minute has arrived. The CFP for ApacheCon Seville closes
on September 9th, which is less than 2 weeks away. It's time to get your
talks in, so that we can make this the best ApacheCon yet.

It's also time to discuss with your developer and user community whether
there's a track of talks that you might want to propose, so that you
have more complete coverage of your project than a talk or two.

For Apache Big Data, the relevant URLs are:
Event details:
http://events.linuxfoundation.org/events/apache-big-data-europe
CFP:
http://events.linuxfoundation.org/events/apache-big-data-europe/program/cfp

For ApacheCon Europe, the relevant URLs are:
Event details: http://events.linuxfoundation.org/events/apachecon-europe
CFP: http://events.linuxfoundation.org/events/apachecon-europe/program/cfp

This year, we'll be reviewing papers "blind" - that is, looking at the
abstracts without knowing who the speaker is. This has been shown to
eliminate the "me and my buddies" nature of many tech conferences,
producing more diversity, and more new speakers. So make sure your
abstracts clearly explain what you'll be talking about.

For further updates about ApacheCon, follow us on Twitter, @ApacheCon,
or drop by our IRC channel, #apachecon on the Freenode IRC network.

-- 
Rich Bowen
WWW: http://apachecon.com/
Twitter: @ApacheCon


Re: More than one table created at the same location

2016-08-30 Thread Abhishek Somani
For the 2nd table (after both inserts are over), isn't the returned count
expected to be 4? In that case, isn't the bug that the count was
returned wrong (maybe from the stats as mentioned) rather than the fact that
another table was allowed to be created at the same location?

I might be very wrong, so pardon my ignorance.
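
For what it's worth, one way to check whether the count is coming from table
statistics rather than an actual scan is to disable the stats-based answer
path and rerun the query (a sketch, assuming the test2 table from the repro
below and the standard hive.compute.query.using.stats setting):

```sql
-- Force Hive to compute count(*) by scanning the files in the table's
-- directory instead of answering from stored table statistics.
set hive.compute.query.using.stats=false;
select count(*) from test2;
```

If the count changes after disabling the setting, the original result was
served from metastore statistics rather than the files on disk.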

On Tue, Aug 30, 2016 at 3:06 AM, Alan Gates  wrote:

> Note that Hive doesn’t track individual files, just which directory a
> table stores its files in.  So we wouldn’t expect this to work.  The bug is
> more that Hive doesn’t detect that two tables are trying to use the same
> directory.  I’m not sure we’re anxious to fix this since it would mean when
> creating a table Hive would need to search all existing tables to make sure
> none of them are using the directory the new table wants to use.
>
> Alan.
>
> > On Aug 30, 2016, at 04:17, Sergey Shelukhin 
> wrote:
> >
> > This is a bug, or rather an unexpected usage. I suspect the correct count
> > value is coming from statistics.
> > Can you file a JIRA?
> >
> > On 16/8/29, 00:51, "naveen mahadevuni"  wrote:
> >
> >> Hi,
> >>
> >> Is the following behavior a bug? I believe at least one part of it is a
> >> bug. I created two Hive tables at the same location and inserted rows into
> >> both tables. count(*) returns the correct count for each individual
> table,
> >> but SELECT * on one table reads the rows from the other table's files too.
> >>
> >> CREATE TABLE test1 (col1 INT, col2 INT)
> >> stored as orc
> >> LOCATION '/apps/hive/warehouse/test1';
> >>
> >> insert into test1 values(1,2);
> >> insert into test1 values(3,4);
> >>
> >> hive> select count(*) from test1;
> >> OK
> >> 2
> >> Time taken: 0.177 seconds, Fetched: 1 row(s)
> >>
> >>
> >> CREATE TABLE test2 (col1 INT, col2 INT)
> >> stored as orc
> >> LOCATION '/apps/hive/warehouse/test1';
> >>
> >> insert into test2 values(1,2);
> >> insert into test2 values(3,4);
> >>
> >> hive> select count(*) from test2;
> >> OK
> >> 2
> >> Time taken: 2.683 seconds, Fetched: 1 row(s)
> >>
> >> -- SELECT * fetches 4 records whereas COUNT(*) above returns a count of
> 2.
> >>
> >> hive> select * from test2;
> >> OK
> >> 1   2
> >> 3   4
> >> 1   2
> >> 3   4
> >> Time taken: 0.107 seconds, Fetched: 4 row(s)
> >> hive> select * from test1;
> >> OK
> >> 1   2
> >> 3   4
> >> 1   2
> >> 3   4
> >> Time taken: 0.054 seconds, Fetched: 4 row(s)
> >>
> >> Thanks,
> >> Naveen
> >
>
>