[jira] [Commented] (HIVE-6437) DefaultHiveAuthorizationProvider should not initialize a new HiveConf

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075924#comment-14075924
 ] 

Lefty Leverenz commented on HIVE-6437:
--

*hive.metastore.force.reload.conf* was introduced in Hive 0.6.0 (default false) 
but isn't documented in the wiki.  Even though it's going to be removed, it 
should probably be documented with version information -- I suggest the Test 
Properties section, not the Metastore section even though it's a 
hive.metastore.* property.

* [Configuration Properties -- Test Properties | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-TestProperties]
* [Configuration Properties -- Metastore | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-MetaStore]

 DefaultHiveAuthorizationProvider should not initialize a new HiveConf
 -

 Key: HIVE-6437
 URL: https://issues.apache.org/jira/browse/HIVE-6437
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.13.0
Reporter: Harsh J
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6437.1.patch.txt, HIVE-6437.2.patch.txt, 
 HIVE-6437.3.patch.txt


 During a HS2 connection, every SessionState initializes a new 
 DefaultHiveAuthorizationProvider object (on stock configs).
 In turn, DefaultHiveAuthorizationProvider carries a {{new HiveConf(…)}} that 
 may prove too expensive and unnecessary, since SessionState itself 
 sends in a fully applied HiveConf to it in the first place.
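 A minimal sketch of the idea (illustrative only, not the committed patch): reuse the HiveConf 
 that SessionState already passes in, and only fall back to constructing one when a plain 
 Configuration is supplied.
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.conf.HiveConf;
 
 // Hypothetical provider name; shows reusing the session's HiveConf instead of
 // paying for `new HiveConf(...)` on every HiveServer2 connection.
 public class ReuseConfAuthorizationProvider {
   private HiveConf conf;
 
   public void setConf(Configuration conf) {
     this.conf = (conf instanceof HiveConf)
         ? (HiveConf) conf   // fully applied conf handed in by SessionState
         : new HiveConf(conf, ReuseConfAuthorizationProvider.class);
   }
 
   public HiveConf getConf() {
     return conf;
   }
 }
 {code}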



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7507) Altering columns in hive results in classcast exceptions

2014-07-28 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075926#comment-14075926
 ] 

Navis commented on HIVE-7507:
-

Any idea how to fix this? IMO, we can start by preventing renaming a column of a 
partition. The current semantic analyzer uses the last modified serde to acquire column 
names/types, and that does not hold in cases like the example above.

 Altering columns in hive results in classcast exceptions
 

 Key: HIVE-7507
 URL: https://issues.apache.org/jira/browse/HIVE-7507
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Affects Versions: 0.13.1
Reporter: Vikram Dixit K

 {code}
 set hive.enforce.bucketing=true;
 set hive.enforce.sorting = true;
 set hive.optimize.bucketingsorting=false;
 set hive.auto.convert.join.noconditionaltask.size=1;
 create table test (key int, value string) partitioned by (p int) clustered by 
 (key) into 2 buckets stored as textfile;
 create table test1 (key int, value string) stored as textfile;
 insert into table test partition (p=1) select * from src;
 alter table test set fileformat orc;
 insert into table test partition (p=2) select * from src;
 insert into table test1 select * from src;
 alter table test CHANGE key k1 int after value;
 insert into table test partition (p=3) select value, key from src;
 set hive.auto.convert.join = true;
 set hive.auto.convert.join.noconditionaltask = true;
 explain
 select test.k1, test.value from test join test1 on (test.k1 = test1.key) 
 order by test.k1;
 select test.k1, test.value from test join test1 on (test.k1 = test1.key) 
 order by test.k1;
 {code}
 {code}
 java.lang.Exception: java.io.IOException: java.io.IOException: 
 java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
 to org.apache.hadoop.io.Text
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
 to org.apache.hadoop.io.Text
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:255)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:170)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:198)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:184)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:695)
 Caused by: java.io.IOException: java.lang.ClassCastException: 
 org.apache.hadoop.io.IntWritable cannot be cast to org.apache.hadoop.io.Text
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:344)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:122)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:253)
   ... 13 more
 Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
 cannot be cast to org.apache.hadoop.io.Text
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.next(RecordReaderImpl.java:1596)
   at 
 

[jira] [Commented] (HIVE-6437) DefaultHiveAuthorizationProvider should not initialize a new HiveConf

2014-07-28 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075931#comment-14075931
 ] 

Navis commented on HIVE-6437:
-

It was needed only for a test case (url_hook.q), which does not seem useful and 
should not be used in any other cases. I think it should be removed from the wiki 
rather than given a new description.

 DefaultHiveAuthorizationProvider should not initialize a new HiveConf
 -

 Key: HIVE-6437
 URL: https://issues.apache.org/jira/browse/HIVE-6437
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.13.0
Reporter: Harsh J
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6437.1.patch.txt, HIVE-6437.2.patch.txt, 
 HIVE-6437.3.patch.txt


 During a HS2 connection, every SessionState initializes a new 
 DefaultHiveAuthorizationProvider object (on stock configs).
 In turn, DefaultHiveAuthorizationProvider carries a {{new HiveConf(…)}} that 
 may prove too expensive and unnecessary, since SessionState itself 
 sends in a fully applied HiveConf to it in the first place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075938#comment-14075938
 ] 

Lefty Leverenz commented on HIVE-6037:
--

See HIVE-7496.

 Synchronize HiveConf with hive-default.xml.template and support show conf
 -

 Key: HIVE-6037
 URL: https://issues.apache.org/jira/browse/HIVE-6037
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Reporter: Navis
Assignee: Navis
Priority: Minor
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: CHIVE-6037.3.patch.txt, HIVE-6037-0.13.0, 
 HIVE-6037.1.patch.txt, HIVE-6037.10.patch.txt, HIVE-6037.11.patch.txt, 
 HIVE-6037.12.patch.txt, HIVE-6037.14.patch.txt, HIVE-6037.15.patch.txt, 
 HIVE-6037.16.patch.txt, HIVE-6037.17.patch, HIVE-6037.18.patch.txt, 
 HIVE-6037.19.patch.txt, HIVE-6037.19.patch.txt, HIVE-6037.2.patch.txt, 
 HIVE-6037.20.patch.txt, HIVE-6037.4.patch.txt, HIVE-6037.5.patch.txt, 
 HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, HIVE-6037.8.patch.txt, 
 HIVE-6037.9.patch.txt, HIVE-6037.patch


 see HIVE-5879



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7496) Exclude conf/hive-default.xml.template in version control and include it dist profile

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075939#comment-14075939
 ] 

Lefty Leverenz commented on HIVE-7496:
--

What does this mean for users and administrators?  Will they find 
hive-default.xml.template in the usual place in each released branch?

The wiki discusses the template file in Configuring Hive, and I think we should 
have a version note explaining to developers that they won't find it in trunk 
anymore.  (If we put the information in the developer docs, it's less likely to 
get noticed because there's no context for it.)  Configuring Hive will be 
updated anyway for HIVE-6037 to say that HiveConf.java holds all the parameter 
descriptions starting with 0.14.0, and the template file will be generated from 
HiveConf.java.

* [Configuring Hive | 
https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration#AdminManualConfiguration-ConfiguringHive]

 Exclude conf/hive-default.xml.template in version control and include it dist 
 profile
 -

 Key: HIVE-7496
 URL: https://issues.apache.org/jira/browse/HIVE-7496
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-7496.1.patch.txt, HIVE-7496.2.patch.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7496) Exclude conf/hive-default.xml.template in version control and include it dist profile

2014-07-28 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-7496:
-

Labels: TODOC14  (was: )

 Exclude conf/hive-default.xml.template in version control and include it dist 
 profile
 -

 Key: HIVE-7496
 URL: https://issues.apache.org/jira/browse/HIVE-7496
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
Priority: Minor
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7496.1.patch.txt, HIVE-7496.2.patch.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6437) DefaultHiveAuthorizationProvider should not initialize a new HiveConf

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075946#comment-14075946
 ] 

Lefty Leverenz commented on HIVE-6437:
--

Okay, it never got documented and it will stay undocumented.  Thanks, [~navis].

 DefaultHiveAuthorizationProvider should not initialize a new HiveConf
 -

 Key: HIVE-6437
 URL: https://issues.apache.org/jira/browse/HIVE-6437
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.13.0
Reporter: Harsh J
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6437.1.patch.txt, HIVE-6437.2.patch.txt, 
 HIVE-6437.3.patch.txt


 During a HS2 connection, every SessionState initializes a new 
 DefaultHiveAuthorizationProvider object (on stock configs).
 In turn, DefaultHiveAuthorizationProvider carries a {{new HiveConf(…)}} that 
 may prove too expensive and unnecessary, since SessionState itself 
 sends in a fully applied HiveConf to it in the first place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7254) Enhance Ptest framework config to auto-pick up list of MiniXXXDriver's test

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075979#comment-14075979
 ] 

Lefty Leverenz commented on HIVE-7254:
--

The links didn't break and the text display was changed too.  Sweet.

However, the Hive Developer FAQ page puts a doc link under the heading/question 
"How do I add a new MiniMR test?" -- perhaps that should be changed to "How do 
I add a new MiniDriver test?"  And if that isn't clear enough, the text could 
be expanded from "See MiniDriver Tests" to "See MiniDriver Tests for 
information about adding MiniMR, MiniTez, and Beeline driver tests" (or some 
such).

Similarly, a link on the Home page to the FAQ doc says "add MiniMR test" which 
should probably say "add MiniDriver test", and a link on the umbrella page for 
Developer Docs says "Adding a MiniMR test", so that should be "Adding a 
MiniDriver test" -- right?

Another one:  PreCommit Patch Testing says "Read MiniDriver Tests before adding 
new minimr tests" -- make that "... before adding new miniMR, miniTez, or 
Beeline driver tests"?

I'll make these changes unless you suggest alternatives, [~szehon].  Here are 
the links:

* [Hive Developer FAQ -- How do I add a new MiniMR test? | 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27823747#HiveDeveloperFAQ-HowdoIaddanewMiniMRtest?]
* [Home -- Resources for Contributors (see Hive Developer Docs section, FAQ 
bullet) | 
https://cwiki.apache.org/confluence/display/Hive/Home#Home-ResourcesforContributors]
* [Developer Docs (umbrella page) | 
https://cwiki.apache.org/confluence/display/Hive/DeveloperDocs]
* [Hive PreCommit Patch Testing -- Short Version | 
https://cwiki.apache.org/confluence/display/Hive/Hive+PreCommit+Patch+Testing]



 Enhance Ptest framework config to auto-pick up list of MiniXXXDriver's test
 ---

 Key: HIVE-7254
 URL: https://issues.apache.org/jira/browse/HIVE-7254
 Project: Hive
  Issue Type: Test
  Components: Testing Infrastructure
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: trunk-mr2.properties


 Today, the Hive PTest infrastructure has a test-driver configuration called 
 "directory", so it will run all the qfiles under that directory for that 
 driver.  For example, CLIDriver is configured with directory 
 ql/src/test/queries/clientpositive.
 However, the configuration for the miniXXXDrivers (miniMRDriver, 
 miniMRDriverNegative, miniTezDriver) runs only a select number of tests under 
 that directory.  So we have to use the "include" configuration to hard-code a 
 list of tests for it to run.  This duplicates the list of each 
 miniDriver's tests already in the /itests/qtest pom file, and can get out of 
 date.
 It would be nice if both got their information the same way.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5718) Support direct fetch for lateral views, sub queries, etc.

2014-07-28 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075985#comment-14075985
 ] 

Navis commented on HIVE-5718:
-

[~gopalv] Could you review this?

 Support direct fetch for lateral views, sub queries, etc.
 -

 Key: HIVE-5718
 URL: https://issues.apache.org/jira/browse/HIVE-5718
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: D13857.1.patch, D13857.2.patch, D13857.3.patch, 
 HIVE-5718.4.patch.txt, HIVE-5718.5.patch.txt, HIVE-5718.6.patch.txt


 Extend HIVE-2925 with LV and SubQ.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5718) Support direct fetch for lateral views, sub queries, etc.

2014-07-28 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5718:


Attachment: HIVE-5718.6.patch.txt

 Support direct fetch for lateral views, sub queries, etc.
 -

 Key: HIVE-5718
 URL: https://issues.apache.org/jira/browse/HIVE-5718
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: D13857.1.patch, D13857.2.patch, D13857.3.patch, 
 HIVE-5718.4.patch.txt, HIVE-5718.5.patch.txt, HIVE-5718.6.patch.txt


 Extend HIVE-2925 with LV and SubQ.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6152) insert query fails on hdfs federation + viewfs

2014-07-28 Thread John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John updated HIVE-6152:
---

Summary: insert query fails on hdfs federation + viewfs  (was: insert 
query fails on hdfs federation + viewfs still exists)

 insert query fails on hdfs federation + viewfs
 --

 Key: HIVE-6152
 URL: https://issues.apache.org/jira/browse/HIVE-6152
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6152.1.patch, HIVE-6152.2.patch, HIVE-6152.3.patch, 
 HIVE-6152.4.patch, HIVE-6152.5.patch


 This is because Hive first writes data to /tmp/ and then moves it from /tmp to 
 the final destination. In federated HDFS the recommendation is to mount /tmp on a 
 separate nameservice, which is usually different from /user. Since renames 
 across different mount points are not supported, this fails. 
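 A minimal sketch of the underlying idea, assuming the fix is to keep the temporary directory on 
 the same file system/mount point as the destination (illustrative only, not the committed 
 patch):
 {code}
 import java.util.UUID;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
 // Hypothetical helper: place the scratch directory under the destination
 // directory so the final rename never crosses a viewfs mount point.
 public class ScratchDirSketch {
   public static Path scratchFor(Path finalDest, Configuration conf) throws Exception {
     FileSystem fs = finalDest.getFileSystem(conf);
     Path tmp = new Path(finalDest, ".hive-staging_" + UUID.randomUUID());
     fs.mkdirs(tmp);   // same nameservice as the destination
     return tmp;       // rename(tmp/..., finalDest/...) stays within one mount
   }
 }
 {code}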



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6152) insert query fails on hdfs federation + viewfs still exists

2014-07-28 Thread John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John updated HIVE-6152:
---

Affects Version/s: 0.13.1
  Summary: insert query fails on hdfs federation + viewfs still 
exists  (was: insert query fails on hdfs federation + viewfs)

https://issues.apache.org/jira/browse/HIVE-6152

 insert query fails on hdfs federation + viewfs still exists
 -

 Key: HIVE-6152
 URL: https://issues.apache.org/jira/browse/HIVE-6152
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6152.1.patch, HIVE-6152.2.patch, HIVE-6152.3.patch, 
 HIVE-6152.4.patch, HIVE-6152.5.patch


 This is because Hive first writes data to /tmp/ and then moves it from /tmp to 
 the final destination. In federated HDFS the recommendation is to mount /tmp on a 
 separate nameservice, which is usually different from /user. Since renames 
 across different mount points are not supported, this fails. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HIVE-6586) Add new parameters to HiveConf.java after commit HIVE-6037 (also fix typos)

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14053403#comment-14053403
 ] 

Lefty Leverenz edited comment on HIVE-6586 at 7/28/14 7:26 AM:
---

HIVE-7231 adds hive.exec.orc.default.block.size & 
hive.exec.orc.block.padding.tolerance in 0.14.0 with descriptions in 
hive-default.xml.template.  It also changes the default for 
hive.exec.orc.default.stripe.size to 64L * 1024 * 1024 (HiveConf.java) or 
67108864 (template, same value).

Note:  The description of hive.exec.orc.block.padding.tolerance is slightly 
inaccurate -- instead of saying "as a percentage of stripe size" it should say 
"as a decimal fraction of stripe size".

Update 28/Jul/14:  HIVE-7490 changed the default of 
hive.exec.orc.default.stripe.size, so that doesn't have to be done here.


was (Author: le...@hortonworks.com):
HIVE-7231 adds hive.exec.orc.default.block.size & 
hive.exec.orc.block.padding.tolerance in 0.14.0 with descriptions in 
hive-default.xml.template.  It also changes the default for 
hive.exec.orc.default.stripe.size to 64L * 1024 * 1024 (HiveConf.java) or 
67108864 (template, same value).

Note:  The description of hive.exec.orc.block.padding.tolerance is slightly 
inaccurate -- instead of saying "as a percentage of stripe size" it should say 
"as a decimal fraction of stripe size".


 Add new parameters to HiveConf.java after commit HIVE-6037 (also fix typos)
 ---

 Key: HIVE-6586
 URL: https://issues.apache.org/jira/browse/HIVE-6586
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Lefty Leverenz
  Labels: TODOC14

 HIVE-6037 puts the definitions of configuration parameters into the 
 HiveConf.java file, but several recent jiras for release 0.13.0 introduce new 
 parameters that aren't in HiveConf.java yet and some parameter definitions 
 need to be altered for 0.13.0.  This jira will patch HiveConf.java after 
 HIVE-6037 gets committed.
 Also, four typos patched in HIVE-6582 need to be fixed in the new 
 HiveConf.java.
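 For reference, a schematic sketch of the pattern this refers to: after HIVE-6037 each 
 parameter's default value and description live in the ConfVars enum in HiveConf.java, from 
 which the template file is generated (enum layout and wording below are assumptions, not 
 verbatim source):
 {code}
 // Schematic illustration only, not verbatim Hive source.
 public enum ConfVarsSketch {
   ORC_DEFAULT_STRIPE_SIZE("hive.exec.orc.default.stripe.size",
       64L * 1024 * 1024, "Default ORC stripe size, in bytes.");
 
   public final String varname;
   public final Object defaultVal;
   public final String description;
 
   ConfVarsSketch(String varname, Object defaultVal, String description) {
     this.varname = varname;
     this.defaultVal = defaultVal;
     this.description = description;
   }
 }
 {code}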



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6152) insert query fails on hdfs federation + viewfs

2014-07-28 Thread John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John updated HIVE-6152:
---

Affects Version/s: (was: 0.13.1)

 insert query fails on hdfs federation + viewfs
 --

 Key: HIVE-6152
 URL: https://issues.apache.org/jira/browse/HIVE-6152
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6152.1.patch, HIVE-6152.2.patch, HIVE-6152.3.patch, 
 HIVE-6152.4.patch, HIVE-6152.5.patch


 This is because Hive first writes data to /tmp/ and then moves it from /tmp to 
 the final destination. In federated HDFS the recommendation is to mount /tmp on a 
 separate nameservice, which is usually different from /user. Since renames 
 across different mount points are not supported, this fails. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7529) insert query fails on hdfs federation + viewfs still exists

2014-07-28 Thread John (JIRA)
John created HIVE-7529:
--

 Summary: insert query fails on hdfs federation + viewfs still 
exists
 Key: HIVE-7529
 URL: https://issues.apache.org/jira/browse/HIVE-7529
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
Reporter: John


https://issues.apache.org/jira/browse/HIVE-6152



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7490) Revert ORC stripe size

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075988#comment-14075988
 ] 

Lefty Leverenz commented on HIVE-7490:
--

Okay, thanks [~prasanth_j].  I updated the comment on HIVE-6586 with a pointer 
to this JIRA.

* [updated comment | 
https://issues.apache.org/jira/browse/HIVE-6586?focusedCommentId=14053403&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14053403]

 Revert ORC stripe size
 --

 Key: HIVE-7490
 URL: https://issues.apache.org/jira/browse/HIVE-7490
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Trivial
  Labels: orcfile
 Fix For: 0.14.0

 Attachments: HIVE-7490.1.patch


 HIVE-6037 reverted the changes to ORC stripe size introduced by HIVE-7231.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5718) Support direct fetch for lateral views, sub queries, etc.

2014-07-28 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075992#comment-14075992
 ] 

Gopal V commented on HIVE-5718:
---

Sure [~navis]. 

Traveling at the moment, will do this mid-week when I'm back.

 Support direct fetch for lateral views, sub queries, etc.
 -

 Key: HIVE-5718
 URL: https://issues.apache.org/jira/browse/HIVE-5718
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: D13857.1.patch, D13857.2.patch, D13857.3.patch, 
 HIVE-5718.4.patch.txt, HIVE-5718.5.patch.txt, HIVE-5718.6.patch.txt


 Extend HIVE-2925 with LV and SubQ.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7488) pass column names being used for inputs to authorization api

2014-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075991#comment-14075991
 ] 

Hive QA commented on HIVE-7488:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658057/HIVE-7488.3.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5773 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/73/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/73/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-73/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12658057

 pass column names being used for inputs to authorization api
 

 Key: HIVE-7488
 URL: https://issues.apache.org/jira/browse/HIVE-7488
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7488.1.patch, HIVE-7488.2.patch, 
 HIVE-7488.3.patch.txt


 HivePrivilegeObject in the authorization api has support for columns, but the 
 columns being used are not being populated for non grant-revoke queries.
 This is for enabling any implementation of the api to use this column 
 information for its authorization decisions.
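 A rough sketch of how an authorization-plugin implementation might consume that column 
 information once it is populated (getter names assumed; the checkSelectPrivilege helper is 
 hypothetical):
 {code}
 import java.util.List;
 import org.apache.hadoop.hive.ql.security.authorization.plugin.HivePrivilegeObject;
 
 // Illustrative only: authorize against the columns the query actually
 // references instead of the whole table.
 public class ColumnAwareCheckSketch {
   void checkInputs(String user, List<HivePrivilegeObject> inputObjs) {
     for (HivePrivilegeObject obj : inputObjs) {
       List<String> cols = obj.getColumns();   // populated for non grant-revoke queries by this patch
       checkSelectPrivilege(user, obj.getDbname(), obj.getObjectName(), cols);
     }
   }
 
   // Hypothetical helper standing in for a plugin's own policy decision.
   void checkSelectPrivilege(String user, String db, String table, List<String> cols) {
     // implementation-specific logic would go here
   }
 }
 {code}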



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7490) Revert ORC stripe size

2014-07-28 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-7490:
-

Labels: TODOC14 orcfile  (was: orcfile)

 Revert ORC stripe size
 --

 Key: HIVE-7490
 URL: https://issues.apache.org/jira/browse/HIVE-7490
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Trivial
  Labels: TODOC14, orcfile
 Fix For: 0.14.0

 Attachments: HIVE-7490.1.patch


 HIVE-6037 reverted the changes to ORC stripe size introduced by HIVE-7231.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6437) DefaultHiveAuthorizationProvider should not initialize a new HiveConf

2014-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075996#comment-14075996
 ] 

Hive QA commented on HIVE-6437:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658073/HIVE-6437.3.patch.txt

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/74/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/74/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-74/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-74/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'conf/hive-default.xml.template'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/Driver.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target packaging/target 
hbase-handler/target testutils/target jdbc/target metastore/target 
itests/target itests/hcatalog-unit/target itests/test-serde/target 
itests/qtest/target itests/hive-unit-hadoop2/target itests/hive-minikdc/target 
itests/hive-unit/target 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/authorization 
itests/custom-serde/target itests/util/target hcatalog/target 
hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target 
hwi/target common/target common/src/gen contrib/target service/target 
serde/target beeline/target odbc/target cli/target 
ql/dependency-reduced-pom.xml ql/target
+ svn update
U    .gitignore
D    conf/hive-default.xml.template
U    common/pom.xml

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1613903.

Updated to revision 1613903.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12658073

 DefaultHiveAuthorizationProvider should not initialize a new HiveConf
 -

 Key: HIVE-6437
 URL: https://issues.apache.org/jira/browse/HIVE-6437
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.13.0
Reporter: Harsh J
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6437.1.patch.txt, HIVE-6437.2.patch.txt, 
 HIVE-6437.3.patch.txt


 During a HS2 connection, every SessionState initializes a new 
 DefaultHiveAuthorizationProvider object (on stock configs).
 In turn, DefaultHiveAuthorizationProvider carries a {{new HiveConf(…)}} 

[jira] [Updated] (HIVE-6601) alter database commands should support schema synonym keyword

2014-07-28 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6601:


Attachment: HIVE-6601.1.patch.txt

 alter database commands should support schema synonym keyword
 -

 Key: HIVE-6601
 URL: https://issues.apache.org/jira/browse/HIVE-6601
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
 Attachments: HIVE-6601.1.patch.txt


 It should be possible to use "alter schema" as an alternative to "alter 
 database".  But the syntax is not currently supported.
 {code}
 alter schema db1 set owner user x;  
 NoViableAltException(215@[])
 FAILED: ParseException line 1:6 cannot recognize input near 'schema' 'db1' 
 'set' in alter statement
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6601) alter database commands should support schema synonym keyword

2014-07-28 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6601:


Status: Patch Available  (was: Open)

 alter database commands should support schema synonym keyword
 -

 Key: HIVE-6601
 URL: https://issues.apache.org/jira/browse/HIVE-6601
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
 Attachments: HIVE-6601.1.patch.txt


 It should be possible to use "alter schema" as an alternative to "alter 
 database".  But the syntax is not currently supported.
 {code}
 alter schema db1 set owner user x;  
 NoViableAltException(215@[])
 FAILED: ParseException line 1:6 cannot recognize input near 'schema' 'db1' 
 'set' in alter statement
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5425) Provide a configuration option to control the default stripe size for ORC

2014-07-28 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-5425:
-

Labels: TODOC13  (was: )

 Provide a configuration option to control the default stripe size for ORC
 -

 Key: HIVE-5425
 URL: https://issues.apache.org/jira/browse/HIVE-5425
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Owen O'Malley
Assignee: Owen O'Malley
  Labels: TODOC13
 Fix For: 0.13.0

 Attachments: D13233.1.patch


 We should provide a configuration option to control the default stripe size.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7488) pass column names being used for inputs to authorization api

2014-07-28 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075999#comment-14075999
 ] 

Navis commented on HIVE-7488:
-

+1

 pass column names being used for inputs to authorization api
 

 Key: HIVE-7488
 URL: https://issues.apache.org/jira/browse/HIVE-7488
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7488.1.patch, HIVE-7488.2.patch, 
 HIVE-7488.3.patch.txt


 HivePrivilegeObject in the authorization api has support for columns, but the 
 columns being used are not being populated for non grant-revoke queries.
 This is for enabling any implementation of the api to use this column 
 information for its authorization decisions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6437) DefaultHiveAuthorizationProvider should not initialize a new HiveConf

2014-07-28 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6437:


Attachment: HIVE-6437.4.patch.txt

Fixed conflict with HIVE-7496.

 DefaultHiveAuthorizationProvider should not initialize a new HiveConf
 -

 Key: HIVE-6437
 URL: https://issues.apache.org/jira/browse/HIVE-6437
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.13.0
Reporter: Harsh J
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6437.1.patch.txt, HIVE-6437.2.patch.txt, 
 HIVE-6437.3.patch.txt, HIVE-6437.4.patch.txt


 During a HS2 connection, every SessionState initializes a new 
 DefaultHiveAuthorizationProvider object (on stock configs).
 In turn, DefaultHiveAuthorizationProvider carries a {{new HiveConf(…)}} that 
 may prove too expensive and unnecessary, since SessionState itself 
 sends in a fully applied HiveConf to it in the first place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-6601) alter database commands should support schema synonym keyword

2014-07-28 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis reassigned HIVE-6601:
---

Assignee: Navis  (was: Abdelrahman Shettia)

 alter database commands should support schema synonym keyword
 -

 Key: HIVE-6601
 URL: https://issues.apache.org/jira/browse/HIVE-6601
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Navis
 Attachments: HIVE-6601.1.patch.txt


 It should be possible to use "alter schema" as an alternative to "alter 
 database".  But the syntax is not currently supported.
 {code}
 alter schema db1 set owner user x;  
 NoViableAltException(215@[])
 FAILED: ParseException line 1:6 cannot recognize input near 'schema' 'db1' 
 'set' in alter statement
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5425) Provide a configuration option to control the default stripe size for ORC

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076004#comment-14076004
 ] 

Lefty Leverenz commented on HIVE-5425:
--

This added *hive.exec.orc.default.stripe.size* to HiveConf.java so it needs to 
be documented in the wiki's Configuration Properties with the other ORC 
parameters.

* [Configuration Properties -- ORC | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.orc.splits.include.file.footer]

Also see the change of default value in HIVE-7490 (0.14.0).

 Provide a configuration option to control the default stripe size for ORC
 -

 Key: HIVE-5425
 URL: https://issues.apache.org/jira/browse/HIVE-5425
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Owen O'Malley
Assignee: Owen O'Malley
  Labels: TODOC13
 Fix For: 0.13.0

 Attachments: D13233.1.patch


 We should provide a configuration option to control the default stripe size.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7490) Revert ORC stripe size

2014-07-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076005#comment-14076005
 ] 

Lefty Leverenz commented on HIVE-7490:
--

This changes the default value of *hive.exec.orc.default.stripe.size* in 0.14.0 
so the wiki needs version information ... but first Configuration Properties 
needs to document the parameter with its original default value (0.13.0, 
HIVE-5425).

* [Configuration Properties | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties]

 Revert ORC stripe size
 --

 Key: HIVE-7490
 URL: https://issues.apache.org/jira/browse/HIVE-7490
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Trivial
  Labels: TODOC14, orcfile
 Fix For: 0.14.0

 Attachments: HIVE-7490.1.patch


 HIVE-6037 reverted the changes to ORC stripe size introduced by HIVE-7231.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HIVE-2009) Ctrl+D cause CLI throw NPE

2014-07-28 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis resolved HIVE-2009.
-

Resolution: Cannot Reproduce

 Ctrl+D cause CLI throw NPE
 ---

 Key: HIVE-2009
 URL: https://issues.apache.org/jira/browse/HIVE-2009
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.8.0
 Environment: HIVE TRUNK,0.8-snapshot
 Hadoop 0.20.1+169.113
 java version 1.6.0_22
 Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
 Java HotSpot(TM) 64-Bit Server VM (build 17.1-b03, mixed mode)
 linux 2.6.26-2-amd64
Reporter: zhaowei

 In the Hive CLI, entering Ctrl+D should exit the CLI, but it throws an NPE.
 hive> Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.cli.CliSessionState.close(CliSessionState.java:106)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:523)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HIVE-1986) partition pruner do not take effect for non-deterministic UDF

2014-07-28 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis resolved HIVE-1986.
-

   Resolution: Duplicate
Fix Version/s: 0.11.0

 partition pruner do not take effect for non-deterministic UDF
 -

 Key: HIVE-1986
 URL: https://issues.apache.org/jira/browse/HIVE-1986
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.4.1, 0.5.0, 0.6.0, 0.7.0
 Environment: trunk-src,hive default configure
Reporter: zhaowei
 Fix For: 0.11.0


 Hive UDFs can be deterministic or non-deterministic, but for non-deterministic 
 UDFs such as rand and unix_timestamp, PPR (partition pruning) does not take effect.
 And for unix_timestamp with a parameter, for example unix_timestamp('2010-01-01'), I 
 think it is deterministic.
 Case:
 hive -hiveconf hive.root.logger=DEBUG,console
 create table kv_part(key int,value string) partitioned by(ds string);
 alter table kv_part add partition (ds=2010) partition (ds=2011) partition 
 (ds=2012);
 create table kv2(key int,value string) partitioned by(ds string);
 alter table kv2 add partition (ds=2013) partition (ds=2014) partition 
 (ds=2015);
 explain select * from kv_part join kv2 on(kv_part.key=kv2.key) where 
 kv_part.ds=2011 and rand()  0.5
 rand() is non-deterministic, so kv_part.ds=2011 does not filter out the partitions 
 ds=2010, ds=2012
 .
 11/02/14 12:22:32 DEBUG lazy.LazySimpleSerDe: 
 org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with: 
 columnNames=[key, value] columnTypes=[int, string] separator=[[B@1ac9683] 
 nullstring=\N lastColumnTakesRest=false
 11/02/14 12:22:32 INFO hive.log: DDL: struct kv_part { i32 key, string value}
 11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path 
 hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2010
 11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path 
 hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2011
 11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path 
 hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2012
 11/02/14 12:22:32 INFO parse.SemanticAnalyzer: Completed plan generation
 .
 explain select * from kv_part join kv2 on(kv_part.key=kv2.key) where 
 kv_part.ds=2011 and sin(kv2.key)  0.5;
 sin() is deterministic, so PPR works OK
 .
 11/02/14 12:25:22 DEBUG optimizer.GenMapRedUtils: Information added for path 
 hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2011
 
 And users should get the deterministic info for a UDF from the wiki, or we should add 
 this info to "describe function".
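 For background, whether the pruner can fold a predicate hinges on the deterministic flag the 
 UDF declares; a minimal sketch (class name hypothetical) of how a UDF advertises it:
 {code}
 import org.apache.hadoop.hive.ql.exec.UDF;
 import org.apache.hadoop.hive.ql.udf.UDFType;
 
 // Illustrative UDF: the deterministic flag is what lets the partition pruner
 // evaluate a predicate at compile time. rand() and no-arg unix_timestamp()
 // are marked non-deterministic, so predicates using them cannot prune.
 @UDFType(deterministic = true)
 public class MyDeterministicUDF extends UDF {
   public int evaluate(int value) {
     return value * 2;   // same input always produces the same output
   }
 }
 {code}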



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-1986) partition pruner do not take effect for non-deterministic UDF

2014-07-28 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076012#comment-14076012
 ] 

Navis commented on HIVE-1986:
-

use to_unix_timestamp() instead of unix_timestamp()

 partition pruner do not take effect for non-deterministic UDF
 -

 Key: HIVE-1986
 URL: https://issues.apache.org/jira/browse/HIVE-1986
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.4.1, 0.5.0, 0.6.0, 0.7.0
 Environment: trunk-src,hive default configure
Reporter: zhaowei
 Fix For: 0.11.0


 Hive UDFs can be deterministic or non-deterministic, but for non-deterministic 
 UDFs such as rand and unix_timestamp, PPR (partition pruning) does not take effect.
 And for unix_timestamp with a parameter, for example unix_timestamp('2010-01-01'), I 
 think it is deterministic.
 Case:
 hive -hiveconf hive.root.logger=DEBUG,console
 create table kv_part(key int,value string) partitioned by(ds string);
 alter table kv_part add partition (ds=2010) partition (ds=2011) partition 
 (ds=2012);
 create table kv2(key int,value string) partitioned by(ds string);
 alter table kv2 add partition (ds=2013) partition (ds=2014) partition 
 (ds=2015);
 explain select * from kv_part join kv2 on(kv_part.key=kv2.key) where 
 kv_part.ds=2011 and rand()  0.5
 rand() is non-deterministic, so kv_part.ds=2011 does not filter out the partitions 
 ds=2010, ds=2012
 .
 11/02/14 12:22:32 DEBUG lazy.LazySimpleSerDe: 
 org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with: 
 columnNames=[key, value] columnTypes=[int, string] separator=[[B@1ac9683] 
 nullstring=\N lastColumnTakesRest=false
 11/02/14 12:22:32 INFO hive.log: DDL: struct kv_part { i32 key, string value}
 11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path 
 hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2010
 11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path 
 hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2011
 11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path 
 hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2012
 11/02/14 12:22:32 INFO parse.SemanticAnalyzer: Completed plan generation
 .
 explain select * from kv_part join kv2 on(kv_part.key=kv2.key) where 
 kv_part.ds=2011 and sin(kv2.key)  0.5;
 sin() is deterministic, so PPR works OK
 .
 11/02/14 12:25:22 DEBUG optimizer.GenMapRedUtils: Information added for path 
 hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2011
 
 And users should get the deterministic info for a UDF from the wiki, or we should add 
 this info to "describe function".



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7530) Go thru the common code to find references to HIVE_EXECUTION_ENGINE to make sure conditions work with Spark

2014-07-28 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-7530:
-

 Summary: Go thru the common code to find references to 
HIVE_EXECUTION_ENGINE to make sure conditions work with Spark
 Key: HIVE-7530
 URL: https://issues.apache.org/jira/browse/HIVE-7530
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Xuefu Zhang


In common code, such as Utilities.java, I found a lot of references to this 
conf variable and special handling for a specific engine, such as the following:
{code}
  if (!HiveConf.getVar(job,
      ConfVars.HIVE_EXECUTION_ENGINE).equals("tez")
      && isEmptyPath(job, path, ctx)) {
    path = createDummyFileForEmptyPartition(path, job, work,
        hiveScratchDir, alias, sequenceNumber++);
  }
{code}
We need to make sure the condition still holds after a new execution engine 
such as Spark is introduced.
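A hedged sketch of the kind of rewrite such an audit might produce: make the MapReduce-only 
intent explicit instead of expressing it as "not tez", so a newly added engine like Spark does 
not silently fall into the MapReduce branch (the explicit "mr" comparison is an assumption about 
the desired behavior, which is exactly what needs to be verified per call site):
{code}
// Illustrative only; isEmptyPath/createDummyFileForEmptyPartition are the
// helpers from the snippet above, and the "mr" check is an assumed rewrite.
String engine = HiveConf.getVar(job, ConfVars.HIVE_EXECUTION_ENGINE);
if ("mr".equals(engine) && isEmptyPath(job, path, ctx)) {
  path = createDummyFileForEmptyPartition(path, job, work,
      hiveScratchDir, alias, sequenceNumber++);
}
{code}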



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5718) Support direct fetch for lateral views, sub queries, etc.

2014-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076031#comment-14076031
 ] 

Hive QA commented on HIVE-5718:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658098/HIVE-5718.6.patch.txt

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5785 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/75/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/75/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-75/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12658098

 Support direct fetch for lateral views, sub queries, etc.
 -

 Key: HIVE-5718
 URL: https://issues.apache.org/jira/browse/HIVE-5718
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: D13857.1.patch, D13857.2.patch, D13857.3.patch, 
 HIVE-5718.4.patch.txt, HIVE-5718.5.patch.txt, HIVE-5718.6.patch.txt


 Extend HIVE-2925 with LV and SubQ.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7497) HIVE_GLOBAL_INIT_FILE_LOCATION should default to ${system:HIVE_CONF_DIR}

2014-07-28 Thread Dong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Chen updated HIVE-7497:


Attachment: HIVE-7497.1.patch

Updated patch:
rebased on the latest trunk with the HIVE-7496 code (removes hive-default).
Also changed the HIVEHWIWARFILE default value to use the env prefix in HiveConf.java.

 HIVE_GLOBAL_INIT_FILE_LOCATION should default to ${system:HIVE_CONF_DIR}
 

 Key: HIVE-7497
 URL: https://issues.apache.org/jira/browse/HIVE-7497
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Dong Chen
 Attachments: HIVE-7497.1.patch, HIVE-7497.patch


 HIVE-5160 resolves an env variable at runtime by calling System.getenv(). As 
 long as the variable is not defined when you run the build, null is returned 
 and the path is not placed in hive-default.template. However, if it is 
 defined, it will populate hive-default.template with a path that will be 
 different based on the user running the build. We should use 
 $\{system:HIVE_CONF_DIR\} instead.
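 A small sketch of the distinction being made, assuming the fix is simply to emit the literal 
 variable reference instead of the build machine's resolved value (names illustrative):
 {code}
 // Hypothetical illustration: resolving the variable while generating the
 // template bakes a build-machine path (or null) into the file, whereas the
 // literal reference defers resolution to runtime.
 public class TemplateValueSketch {
   public static void main(String[] args) {
     String resolvedAtBuildTime = System.getenv("HIVE_CONF_DIR"); // machine-specific, may be null
     String deferredToRuntime = "${system:HIVE_CONF_DIR}";        // what the template should contain
     System.out.println(resolvedAtBuildTime + " vs " + deferredToRuntime);
   }
 }
 {code}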



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7529) insert query fails on hdfs federation + viewfs still exists

2014-07-28 Thread John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John updated HIVE-7529:
---

Description: 
$ echo 111,222 > /tmp/testtable
$ sudo -u hive hive
hive> create table test (a int, b int) row format delimited fields terminated 
by ',' stored as textfile;
OK
Time taken: 2.355 seconds
hive> load data local inpath '/tmp/testtable' overwrite into table test;

  was:https://issues.apache.org/jira/browse/HIVE-6152


 insert query fails on hdfs federation + viewfs still exists
 -

 Key: HIVE-7529
 URL: https://issues.apache.org/jira/browse/HIVE-7529
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
Reporter: John

 $ echo 111,222 > /tmp/testtable
 $ sudo -u hive hive
 hive> create table test (a int, b int) row format delimited fields terminated 
 by ',' stored as textfile;
 OK
 Time taken: 2.355 seconds
 hive> load data local inpath '/tmp/testtable' overwrite into table test;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7529) insert query fails on hdfs federation + viewfs still exists

2014-07-28 Thread John (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076052#comment-14076052
 ] 

John commented on HIVE-7529:


In HIVE-6152, only GenMapRedUtils and SemanticAnalyzer use the new function 
getExtTmpPathRelTo. On the other hand, getExternalTmpPath is used by about 
8 classes. We may need to modify getExternalTmpPath.

 insert query fails on hdfs federation + viewfs still exists
 -

 Key: HIVE-7529
 URL: https://issues.apache.org/jira/browse/HIVE-7529
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
Reporter: John

 $ echo 111,222 > /tmp/testtable
 $ sudo -u hive hive
 hive> create table test (a int, b int) row format delimited fields terminated 
 by ',' stored as textfile;
 OK
 Time taken: 2.355 seconds
 hive> load data local inpath '/tmp/testtable' overwrite into table test;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7356) Table level stats collection fail for partitioned tables

2014-07-28 Thread Patrick Morton (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076093#comment-14076093
 ] 

Patrick Morton commented on HIVE-7356:
--


 Table level stats collection fail for partitioned tables
 

 Key: HIVE-7356
 URL: https://issues.apache.org/jira/browse/HIVE-7356
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.14.0

 Attachments: HIVE-7356.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7385) Optimize for empty relation scans

2014-07-28 Thread Patrick Morton (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076106#comment-14076106
 ] 

Patrick Morton commented on HIVE-7385:
--


 Optimize for empty relation scans
 -

 Key: HIVE-7385
 URL: https://issues.apache.org/jira/browse/HIVE-7385
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7385.1.patch, HIVE-7385.2.patch, HIVE-7385.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5872) Make UDAFs such as GenericUDAFSum report accurate precision/scale for decimal types

2014-07-28 Thread Patrick Morton (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076131#comment-14076131
 ] 

Patrick Morton commented on HIVE-5872:
--


 Make UDAFs such as GenericUDAFSum report accurate precision/scale for decimal 
 types
 ---

 Key: HIVE-5872
 URL: https://issues.apache.org/jira/browse/HIVE-5872
 Project: Hive
  Issue Type: Improvement
  Components: Types, UDF
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5872.1.patch, HIVE-5872.2.patch, HIVE-5872.3.patch, 
 HIVE-5872.4.patch, HIVE-5872.patch


 Currently UDAFs are still reporting system default precision/scale (38, 18) 
 for decimal results. Not only this is coarse, but also this can cause 
 problems in subsequent operators such as division, where the result is 
 dependent on the precision/scale of the input, which can go out of bound 
 (38,38). Thus, these UDAFs should correctly report the precision/scale of the 
 result.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6213) Hive 0.11.0 is not working with mr1-2.0.0-mr1-cdh4.2.1

2014-07-28 Thread Patrick Morton (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076133#comment-14076133
 ] 

Patrick Morton commented on HIVE-6213:
--

Etiology of the government near the blame can lead to historical behavior and 
some vulnerability. 
adderall urine drug test 
http://www.surveyanalytics.com//userimages/sub-2/2007589/3153260/29851518/7787451-29851518-stopadd34.html
 
Not, more often nxy-059, the mechanism legal of the axis cancer, is reported to 
be mainstream in treatment.

 Hive 0.11.0 is not working with mr1-2.0.0-mr1-cdh4.2.1
 --

 Key: HIVE-6213
 URL: https://issues.apache.org/jira/browse/HIVE-6213
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
 Environment: OS: Red Hat Enterprise Linux Server release 6.2
 HDFS: CDH-4.2.1
 MAPRED: CDH-4.2.1-mr1
Reporter: ruish li
  Labels: patch
 Attachments: HIVE-4619.D10971.1.patch, HIVE-6213.patch


 Diagnostic Messages for this Task:
 java.lang.RuntimeException: java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:230)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:395)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:333)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
 at org.apache.hadoop.mapred.Child.main(Child.java:262)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6601) alter database commands should support schema synonym keyword

2014-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076146#comment-14076146
 ] 

Hive QA commented on HIVE-6601:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658100/HIVE-6601.1.patch.txt

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5770 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/76/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/76/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-76/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12658100

 alter database commands should support schema synonym keyword
 -

 Key: HIVE-6601
 URL: https://issues.apache.org/jira/browse/HIVE-6601
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Navis
 Attachments: HIVE-6601.1.patch.txt


 It should be possible to use "alter schema" as an alternative to "alter 
 database".  But the syntax is not currently supported.
 {code}
 alter schema db1 set owner user x;  
 NoViableAltException(215@[])
 FAILED: ParseException line 1:6 cannot recognize input near 'schema' 'db1' 
 'set' in alter statement
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7531) --auxpath does not handle paths relative to current working directory.

2014-07-28 Thread Abhishek Agarwal (JIRA)
Abhishek Agarwal created HIVE-7531:
--

 Summary: --auxpath does not handle paths relative to current 
working directory. 
 Key: HIVE-7531
 URL: https://issues.apache.org/jira/browse/HIVE-7531
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.13.1
Reporter: Abhishek Agarwal


If I were to specify the auxpath value as a relative path
{noformat}
hive --auxpath lib
{noformat}
I get the following error
{noformat}
java.lang.IllegalArgumentException: Wrong FS: file://lib/Test.jar, expected: 
file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
at 
org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:69)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:464)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:380)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:231)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:183)
at 
org.apache.hadoop.mapred.JobClient.copyRemoteFiles(JobClient.java:715)
at 
org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:818)
at 
org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:743)
at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:174)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:960)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:919)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420){noformat}
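
For illustration (not the attached patch; file names are made up): the failure comes from the relative value landing in the URI's authority slot, and one possible fix direction is to resolve it against the current working directory first.

{code}
// Illustrative only; not the attached patch. File names are made up.
import java.io.File;
import java.nio.file.Paths;

public class AuxPathSketch {
  public static void main(String[] args) {
    String auxPath = "lib";   // relative value passed via --auxpath

    // Naive concatenation reproduces the failure: "lib" lands in the URI's
    // authority position, hence "Wrong FS: file://lib/...".
    System.out.println("file://" + auxPath + "/Test.jar");

    // One possible fix direction: resolve against the current working directory
    // before building the file: URI.
    String absolute = new File(auxPath).getAbsolutePath();
    System.out.println(Paths.get(absolute, "Test.jar").toUri());
    // e.g. file:///home/user/lib/Test.jar
  }
}
{code}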
 




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7531) auxpath parameter does not handle paths relative to current working directory.

2014-07-28 Thread Abhishek Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Agarwal updated HIVE-7531:
---

Summary: auxpath parameter does not handle paths relative to current 
working directory.   (was: --auxpath does not handle paths relative to current 
working directory. )

 auxpath parameter does not handle paths relative to current working 
 directory. 
 ---

 Key: HIVE-7531
 URL: https://issues.apache.org/jira/browse/HIVE-7531
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.13.1
Reporter: Abhishek Agarwal
 Attachments: HIVE-7531.patch


 If I were to specify the auxpath value as a relative path
 {noformat}
 hive --auxpath lib
 {noformat}
 I get the following error
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: file://lib/Test.jar, expected: 
 file:///
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:69)
   at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:464)
   at 
 org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:380)
   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:231)
   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:183)
   at 
 org.apache.hadoop.mapred.JobClient.copyRemoteFiles(JobClient.java:715)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:818)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:743)
   at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:174)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:960)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
   at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:919)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420){noformat}
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7531) --auxpath does not handle paths relative to current working directory.

2014-07-28 Thread Abhishek Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Agarwal updated HIVE-7531:
---

Attachment: HIVE-7531.patch

 --auxpath does not handle paths relative to current working directory. 
 ---

 Key: HIVE-7531
 URL: https://issues.apache.org/jira/browse/HIVE-7531
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.13.1
Reporter: Abhishek Agarwal
 Attachments: HIVE-7531.patch


 If I were to specify the auxpath value as a relative path
 {noformat}
 hive --auxpath lib
 {noformat}
 I get the following error
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: file://lib/Test.jar, expected: 
 file:///
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:69)
   at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:464)
   at 
 org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:380)
   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:231)
   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:183)
   at 
 org.apache.hadoop.mapred.JobClient.copyRemoteFiles(JobClient.java:715)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:818)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:743)
   at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:174)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:960)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
   at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:919)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420){noformat}
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7527) Support order by and sort by on Spark

2014-07-28 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076187#comment-14076187
 ] 

Rui Li commented on HIVE-7527:
--

Hi [~xuefuz], I tried to run order by queries using Spark's sortByKey 
transformation, but it seems the result is incorrect. I inserted the sortByKey 
between HiveMapFunction and HiveReduceFunction (as a substitute for partitionBy). 
Wondering if this is the right way to do it...

I detect the order by by looking at the parent ReduceSink when a ReduceWork is 
created and connected to a MapWork. It worked for my simple cases :)
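
For reference, a self-contained sketch using the plain Spark Java API (not the Hive-on-Spark plan generator; the key/value types are made up) of how sortByKey differs from partitionBy for a global order by:

{code}
// Illustrative only: plain Spark Java API, not Hive-on-Spark code.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class SortByKeySketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("sortByKeySketch").setMaster("local[2]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaPairRDD<Integer, String> shuffled = sc.parallelizePairs(Arrays.asList(
        new Tuple2<>(3, "c"), new Tuple2<>(1, "a"), new Tuple2<>(2, "b")));

    // partitionBy only routes keys to partitions; it does not sort them.
    // sortByKey range-partitions and sorts within partitions; with a single
    // partition that yields the total order an "order by" needs.
    JavaPairRDD<Integer, String> ordered = shuffled.sortByKey(true, 1);

    System.out.println(ordered.collect());   // [(1,a), (2,b), (3,c)]
    sc.stop();
  }
}
{code}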

 Support order by and sort by on Spark
 -

 Key: HIVE-7527
 URL: https://issues.apache.org/jira/browse/HIVE-7527
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang

 Currently Hive depends completely on MapReduce's sorting as part of shuffling 
 to achieve order by (global sort, one reducer) and sort by (local sort).
 Spark has a sort by transformation in different variations that can be used to 
 support Hive's order by and sort by. However, we still need to evaluate 
 whether Spark's sortBy can achieve the same functionality inherited from 
 MapReduce's shuffle sort.
 Currently Hive on Spark should be able to run a simple sort by or order by, by 
 changing the current partitionBy to sortBy. This is a way to verify the 
 theory. A complete solution will not be available until we have a complete 
 SparkPlanGenerator.
 There is also a question of how we determine that there is an order by or sort 
 by just by looking at the operator tree, from which the Spark task is created. 
 This is the responsibility of SparkPlanGenerator, but we need to have an idea.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (HIVE-5872) Make UDAFs such as GenericUDAFSum report accurate precision/scale for decimal types

2014-07-28 Thread Jake Farrell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Farrell updated HIVE-5872:
---

Comment: was deleted

(was: The companion revealed other doping, and time was raised that there may 
have been a similar morphine of doping involving other opportunities of the 
tour de france. 
adderall and xanax 
http://www.surveyanalytics.com//userimages/sub-2/2007589/3153260/29851519/7787433-29851519-stopadd8.html
 
Sentences with five-year to positive home experienced studies poor as world, 
public, name, level, lives, sister and new system.)

 Make UDAFs such as GenericUDAFSum report accurate precision/scale for decimal 
 types
 ---

 Key: HIVE-5872
 URL: https://issues.apache.org/jira/browse/HIVE-5872
 Project: Hive
  Issue Type: Improvement
  Components: Types, UDF
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5872.1.patch, HIVE-5872.2.patch, HIVE-5872.3.patch, 
 HIVE-5872.4.patch, HIVE-5872.patch


 Currently UDAFs are still reporting system default precision/scale (38, 18) 
 for decimal results. Not only this is coarse, but also this can cause 
 problems in subsequent operators such as division, where the result is 
 dependent on the precision/scale of the input, which can go out of bound 
 (38,38). Thus, these UDAFs should correctly report the precision/scale of the 
 result.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (HIVE-7385) Optimize for empty relation scans

2014-07-28 Thread Jake Farrell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Farrell updated HIVE-7385:
---

Comment: was deleted

(was: Some may have received pegs as system of their postsynaptic drug, and 
some may have received no fasciculations. 
order adderall 
http://www.surveyanalytics.com//userimages/sub-2/2007589/3153260/29851519/7787469-29851519-stopadd59.html
 
Likely members have been made in executive people of condition ad 1 0 adderall 
10 mg.)

 Optimize for empty relation scans
 -

 Key: HIVE-7385
 URL: https://issues.apache.org/jira/browse/HIVE-7385
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7385.1.patch, HIVE-7385.2.patch, HIVE-7385.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (HIVE-6213) Hive 0.11.0 is not working with mr1-2.0.0-mr1-cdh4.2.1

2014-07-28 Thread Jake Farrell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Farrell updated HIVE-6213:
---

Comment: was deleted

(was: Etiology of the government near the blame can lead to historical behavior 
and some vulnerability. 
adderall urine drug test 
http://www.surveyanalytics.com//userimages/sub-2/2007589/3153260/29851518/7787451-29851518-stopadd34.html
 
Not, more often nxy-059, the mechanism legal of the axis cancer, is reported to 
be mainstream in treatment.)

 Hive 0.11.0 is not working with mr1-2.0.0-mr1-cdh4.2.1
 --

 Key: HIVE-6213
 URL: https://issues.apache.org/jira/browse/HIVE-6213
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
 Environment: OS: Red Hat Enterprise Linux Server release 6.2
 HDFS: CDH-4.2.1
 MAPRED: CDH-4.2.1-mr1
Reporter: ruish li
  Labels: patch
 Attachments: HIVE-4619.D10971.1.patch, HIVE-6213.patch


 Diagnostic Messages for this Task:
 java.lang.RuntimeException: java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:230)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:395)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:333)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
 at org.apache.hadoop.mapred.Child.main(Child.java:262)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (HIVE-7356) Table level stats collection fail for partitioned tables

2014-07-28 Thread Jake Farrell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Farrell updated HIVE-7356:
---

Comment: was deleted

(was: The resulting plan is used to determine whether a electrocardiograph's 
controlling goal is romanian with his or her anabolic artist. 
http://www.surveyanalytics.com//userimages/sub-2/2007589/3153260/29851518/7787461-29851518-stopadd49.html
 
While putting out a liver, he is seen running into a fear painter and rescuing 
a due austerity, in which he receives programming when he takes the eight-story 
to dopaminergic.)

 Table level stats collection fail for partitioned tables
 

 Key: HIVE-7356
 URL: https://issues.apache.org/jira/browse/HIVE-7356
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.14.0

 Attachments: HIVE-7356.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6437) DefaultHiveAuthorizationProvider should not initialize a new HiveConf

2014-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076258#comment-14076258
 ] 

Hive QA commented on HIVE-6437:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658101/HIVE-6437.4.patch.txt

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 5770 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testDefaults
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testMetastoreVersion
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionMatching
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionMisMatch
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionRestriction
org.apache.hadoop.hive.ql.metadata.TestHive.testHiveRefreshOnConfChange
org.apache.hadoop.hive.ql.metadata.TestHiveRemote.testHiveRefreshOnConfChange
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/77/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/77/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-77/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12658101

 DefaultHiveAuthorizationProvider should not initialize a new HiveConf
 -

 Key: HIVE-6437
 URL: https://issues.apache.org/jira/browse/HIVE-6437
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.13.0
Reporter: Harsh J
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6437.1.patch.txt, HIVE-6437.2.patch.txt, 
 HIVE-6437.3.patch.txt, HIVE-6437.4.patch.txt


 During a HS2 connection, every SessionState got initializes a new 
 DefaultHiveAuthorizationProvider object (on stock configs).
 In turn, DefaultHiveAuthorizationProvider carries a {{new HiveConf(…)}} that 
 may prove too expensive, and unnecessary to do, since SessionState itself 
 sends in a fully applied HiveConf to it in the first place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7021) HiveServer2 memory leak on failed queries

2014-07-28 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-7021:


Attachment: HIVE-7021.1.patch

The uncommitted fix for HIVE-4629 introduces a memory leak. This fix is 
DEPENDENT on the fix from HIVE-4629. The initial implementation submitted as a 
patch for HIVE-4629 does not appear to be final and will likely undergo some 
changes soon. Hence this fix is subject to change based on the final 
implementation for HIVE-4629. This patch will NOT compile on the trunk because 
it depends on the fix for HIVE-4629. 

I will revise this fix when the final implementation for HIVE-4629 is complete. 
Thanks

 HiveServer2 memory leak on failed queries
 -

 Key: HIVE-7021
 URL: https://issues.apache.org/jira/browse/HIVE-7021
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam
 Attachments: HIVE-7021.1.patch


 The number of the following objects keeps increasing if a query causes an 
 exception:
 org.apache.hive.service.cli.HandleIdentifier
 org.apache.hive.service.cli.OperationHandle
 org.apache.hive.service.cli.log.LinkedStringBuffer
 org.apache.hive.service.cli.log.OperationLog
 The leak can be observed using a JDBC client that runs something like this:
   connection = 
 DriverManager.getConnection("jdbc:hive2://" + hostname + ":1/default", 
 "", "");
   statement   = connection.createStatement();
   statement.execute("CREATE TEMPORARY FUNCTION 
 dummy_function AS 'dummy.class.name'");
 The above SQL will fail if HS2 cannot load the dummy.class.name class. Each 
 iteration of such a query will result in a +1 increase in instance count for the 
 classes mentioned above.
 This will eventually cause OOM in the HS2 service.
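
A hypothetical standalone repro along the lines of the description (localhost, the default HiveServer2 port 10000, and the dummy class name are placeholders, not values taken from the patch) might look like this:

{code}
// Hypothetical repro sketch based on the description; localhost:10000 and the
// dummy class name are placeholders, not values taken from the patch.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class Hs2LeakReproSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    try (Connection connection =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "")) {
      for (int i = 0; i < 1000; i++) {
        try (Statement statement = connection.createStatement()) {
          // Fails because HS2 cannot load the class; each failed execution is
          // expected to leave another OperationHandle/OperationLog on the server.
          statement.execute("CREATE TEMPORARY FUNCTION dummy_function AS 'dummy.class.name'");
        } catch (SQLException expected) {
          // ignored on purpose: the point is to repeat the failing operation
        }
      }
    }
  }
}
{code}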



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7021) HiveServer2 memory leak on failed queries

2014-07-28 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-7021:


Fix Version/s: 0.14.0
   Status: Patch Available  (was: Open)

 HiveServer2 memory leak on failed queries
 -

 Key: HIVE-7021
 URL: https://issues.apache.org/jira/browse/HIVE-7021
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam
 Fix For: 0.14.0

 Attachments: HIVE-7021.1.patch


 The number of the following objects keeps increasing if a query causes an 
 exception:
 org.apache.hive.service.cli.HandleIdentifier
 org.apache.hive.service.cli.OperationHandle
 org.apache.hive.service.cli.log.LinkedStringBuffer
 org.apache.hive.service.cli.log.OperationLog
 The leak can be observed using a JDBC client that runs something like this:
   connection = 
 DriverManager.getConnection("jdbc:hive2://" + hostname + ":1/default", 
 "", "");
   statement   = connection.createStatement();
   statement.execute("CREATE TEMPORARY FUNCTION 
 dummy_function AS 'dummy.class.name'");
 The above SQL will fail if HS2 cannot load the dummy.class.name class. Each 
 iteration of such a query will result in a +1 increase in instance count for the 
 classes mentioned above.
 This will eventually cause OOM in the HS2 service.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7021) HiveServer2 memory leak on failed queries

2014-07-28 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-7021:


Attachment: HIVE-4629+HIVE-7021.1.patch

I am also attaching the full patch that contains the fixes from both HIVE-4629 
and HIVE-7021, built from the trunk today.

 HiveServer2 memory leak on failed queries
 -

 Key: HIVE-7021
 URL: https://issues.apache.org/jira/browse/HIVE-7021
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam
 Fix For: 0.14.0

 Attachments: HIVE-4629+HIVE-7021.1.patch, HIVE-7021.1.patch


 The number of the following objects keeps increasing if a query causes an 
 exception:
 org.apache.hive.service.cli.HandleIdentifier
 org.apache.hive.service.cli.OperationHandle
 org.apache.hive.service.cli.log.LinkedStringBuffer
 org.apache.hive.service.cli.log.OperationLog
 The leak can be observed using a JDBC client that runs something like this:
   connection = 
 DriverManager.getConnection("jdbc:hive2://" + hostname + ":1/default", 
 "", "");
   statement   = connection.createStatement();
   statement.execute("CREATE TEMPORARY FUNCTION 
 dummy_function AS 'dummy.class.name'");
 The above SQL will fail if HS2 cannot load the dummy.class.name class. Each 
 iteration of such a query will result in a +1 increase in instance count for the 
 classes mentioned above.
 This will eventually cause OOM in the HS2 service.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-07-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076279#comment-14076279
 ] 

Sergio Peña commented on HIVE-7373:
---

It will preserve the trailing zeros up to what the scale allows.

For instance:
  0 in decimal(5,4) would be 0
  0.0 in decimal(5,4) would be 0.0
  0.00 in decimal(5,4) would be 0.00
  0.0000 in decimal(5,4) would be 0.0000

Is that correct [~xuefuz] ?
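
To make the precision/scale bookkeeping concrete, here is a small java.math.BigDecimal illustration (this assumes HiveDecimal is backed by BigDecimal; the exact Hive behaviour is what this issue proposes to change):

{code}
// Plain java.math.BigDecimal illustration of the precision/scale bookkeeping.
import java.math.BigDecimal;

public class TrailingZerosSketch {
  public static void main(String[] args) {
    BigDecimal withZero = new BigDecimal("3.140");
    System.out.println(withZero.precision() + "," + withZero.scale());   // 4,3

    BigDecimal stripped = withZero.stripTrailingZeros();
    System.out.println(stripped.precision() + "," + stripped.scale());   // 3,2

    // For a decimal(1,1) column, 0.0 (precision 1, scale 1) fits; once the
    // trailing zero is stripped the value needs an integer digit and no longer fits.
    BigDecimal zero = new BigDecimal("0.0");
    System.out.println(zero.precision() + "," + zero.scale());           // 1,1
  }
}
{code}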

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In decimal context, number 3.140 has a different semantic meaning from 
 number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) as (1, 1). Hive removes trailing zeros, 
 and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1,1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat

2014-07-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076331#comment-14076331
 ] 

Nick Dimiduk commented on HIVE-6584:


Thanks for having a look, [~navis]. As it is, this patch requires HBASE-11137, 
which has not been back-ported to 0.96. There's no technical reason not to 
back-port it, simply that 0.96 is in maintenance mode only and we're 
encouraging folks to upgrade from 0.96.2 to 0.98.x.

 Add HiveHBaseTableSnapshotInputFormat
 -

 Key: HIVE-6584
 URL: https://issues.apache.org/jira/browse/HIVE-6584
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.14.0

 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, 
 HIVE-6584.10.patch, HIVE-6584.11.patch, HIVE-6584.12.patch, 
 HIVE-6584.2.patch, HIVE-6584.3.patch, HIVE-6584.4.patch, HIVE-6584.5.patch, 
 HIVE-6584.6.patch, HIVE-6584.7.patch, HIVE-6584.8.patch, HIVE-6584.9.patch


 HBASE-8369 provided mapreduce support for reading from HBase table snapshots. 
 This allows a MR job to consume a stable, read-only view of an HBase table 
 directly off of HDFS. Bypassing the online region server API provides a nice 
 performance boost for the full scan. HBASE-10642 is backporting that feature 
 to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's 
 available, we should add an input format. A follow-on patch could work out 
 how to integrate this functionality into the StorageHandler, similar to how 
 HIVE-6473 integrates the HFileOutputFormat into existing table definitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7496) Exclude conf/hive-default.xml.template in version control and include it dist profile

2014-07-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076335#comment-14076335
 ] 

Nick Dimiduk commented on HIVE-7496:


Hurray! :)

 Exclude conf/hive-default.xml.template in version control and include it dist 
 profile
 -

 Key: HIVE-7496
 URL: https://issues.apache.org/jira/browse/HIVE-7496
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
Priority: Minor
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7496.1.patch.txt, HIVE-7496.2.patch.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-07-28 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076356#comment-14076356
 ] 

Xuefu Zhang commented on HIVE-7373:
---

{quote}
What's the desired behavior then? If we have a field that says it holds values 
that are decimal(5,4), should someone including a 0.0 get a NULL or an implicit 
cast?
{quote}

decimal(5,4) is perfectly okay to hold 0.0, which has precision of 1 and scale 
of 1. Maybe I misunderstood the question.

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In decimal context, number 3.140 has a different semantic meaning from 
 number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) as (1, 1). Hive removes trailing zeros, 
 and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1,1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-07-28 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076364#comment-14076364
 ] 

Xuefu Zhang commented on HIVE-7373:
---

{quote}
It will preserve the trailing zeros up to what the scale allows.
{quote}

That's what I think is right. Please feel free to assign this to yourself, 
[~spena]. Removing the trim() method from HiveDecimal is pretty much what needs 
to be done.

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In decimal context, number 3.140 has a different semantic meaning from 
 number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) as (1, 1). Hive removes trailing zeros, 
 and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1,1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7497) Fix some default values in HiveConf

2014-07-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-7497:
---

Summary: Fix some default values in HiveConf  (was: 
HIVE_GLOBAL_INIT_FILE_LOCATION should default to ${system:HIVE_CONF_DIR})

 Fix some default values in HiveConf
 ---

 Key: HIVE-7497
 URL: https://issues.apache.org/jira/browse/HIVE-7497
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Dong Chen
 Attachments: HIVE-7497.1.patch, HIVE-7497.patch


 HIVE-5160 resolves an env variable at runtime via calling System.getenv(). As 
 long as the variable is not defined when you run the build, null is returned 
 and the path is not placed in hive-default.template. However, if it is 
 defined, it will populate hive-default.template with a path that will be 
 different based on the user running the build. We should use 
 $\{system:HIVE_CONF_DIR\} instead.
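
As a purely illustrative sketch (not the Hive build code; the variable handling below is only an example), this is the difference between capturing the env variable during the build and writing the placeholder for runtime substitution:

{code}
// Illustration only, not the Hive build code.
public class TemplateValueSketch {
  public static void main(String[] args) {
    // Captured while the template is generated: depends on (and may be null for)
    // whoever runs the build.
    String bakedAtBuildTime = System.getenv("HIVE_CONF_DIR");

    // Written literally into the template and substituted by Hive at runtime.
    String deferredToRuntime = "${system:HIVE_CONF_DIR}";

    System.out.println("value captured during the build: " + bakedAtBuildTime);
    System.out.println("value written to the template:   " + deferredToRuntime);
  }
}
{code}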



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7497) Fix some default values in HiveConf

2014-07-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076383#comment-14076383
 ] 

Brock Noland commented on HIVE-7497:


+1 pending tests

Thank you!!

 Fix some default values in HiveConf
 ---

 Key: HIVE-7497
 URL: https://issues.apache.org/jira/browse/HIVE-7497
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Dong Chen
 Attachments: HIVE-7497.1.patch, HIVE-7497.patch


 HIVE-5160 resolves an env variable at runtime via calling System.getenv(). As 
 long as the variable is not defined when you run the build, null is returned 
 and the path is not placed in hive-default.template. However, if it is 
 defined, it will populate hive-default.template with a path that will be 
 different based on the user running the build. We should use 
 $\{system:HIVE_CONF_DIR\} instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-07-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076394#comment-14076394
 ] 

Sergio Peña commented on HIVE-7373:
---

Thanks [~xuefuz].

Btw, I cannot assign it to myself. Could you assign it to me? 
How can I get permissions to assign tickets?

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In decimal context, number 3.140 has a different semantic meaning from 
 number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) as (1, 1). Hive removes trailing zeros, 
 and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1,1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7497) Fix some default values in HiveConf

2014-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076408#comment-14076408
 ] 

Hive QA commented on HIVE-7497:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658111/HIVE-7497.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5770 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/78/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/78/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-78/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12658111

 Fix some default values in HiveConf
 ---

 Key: HIVE-7497
 URL: https://issues.apache.org/jira/browse/HIVE-7497
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Dong Chen
 Attachments: HIVE-7497.1.patch, HIVE-7497.patch


 HIVE-5160 resolves an env variable at runtime via calling System.getenv(). As 
 long as the variable is not defined when you run the build, null is returned 
 and the path is not placed in hive-default.template. However, if it is 
 defined, it will populate hive-default.template with a path that will be 
 different based on the user running the build. We should use 
 $\{system:HIVE_CONF_DIR\} instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-6959) Remove vectorization related constant expression folding code once Constant propagation optimizer for Hive is committed

2014-07-28 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan reassigned HIVE-6959:
---

Assignee: Hari Sankar Sivarama Subramaniyan

 Remove vectorization related constant expression folding code once Constant 
 propagation optimizer for Hive is committed
 ---

 Key: HIVE-6959
 URL: https://issues.apache.org/jira/browse/HIVE-6959
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan

 HIVE-5771 covers Constant propagation optimizer for Hive. We should remove 
 any vectorization related code which duplicates this feature once HIVE-5771 
 is committed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6959) Remove vectorization related constant expression folding code once Constant propagation optimizer for Hive is committed

2014-07-28 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-6959:


Description: HIVE-5771 covers Constant propagation optimizer for Hive. Now 
that HIVE-5771 is committed, we should remove any vectorization related code 
which duplicates this feature. For example, a function to be cleaned up is 
VectorizationContext::foldConstantsForUnaryExprs(). In addition to this change, 
constant propagation should kick in when vectorization is enabled. i.e. we need 
to lift the HIVE_VECTORIZATION_ENABLED restriction inside 
ConstantPropagate::transform().  (was: HIVE-5771 covers Constant propagation 
optimizer for Hive. We should remove any vectorization related code which 
duplicates this feature once HIVE-5771 is committed.)

 Remove vectorization related constant expression folding code once Constant 
 propagation optimizer for Hive is committed
 ---

 Key: HIVE-6959
 URL: https://issues.apache.org/jira/browse/HIVE-6959
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan

 HIVE-5771 covers Constant propagation optimizer for Hive. Now that HIVE-5771 
 is committed, we should remove any vectorization related code which 
 duplicates this feature. For example, a function to be cleaned up is 
 VectorizationContext::foldConstantsForUnaryExprs(). In addition to this 
 change, constant propagation should kick in when vectorization is enabled. 
 i.e. we need to lift the HIVE_VECTORIZATION_ENABLED restriction inside 
 ConstantPropagate::transform().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7254) Enhance Ptest framework config to auto-pick up list of MiniXXXDriver's test

2014-07-28 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076475#comment-14076475
 ] 

Szehon Ho commented on HIVE-7254:
-

Yes, that sounds good to me, Lefty.  I think we can go with MiniDriver and 
Beeline tests in the links, to have less maintenance cost when we add more 
miniDriver tests, as we might have MiniSparkDriver later, if that is ok.  

Good point about the Beeline driver tests; it seems they are separate enough that 
they're not included in the miniDriver tests, and we should include them to be clear.  
Although now that you mention it, I'm not sure if the Beeline driver tests are properly 
documented anywhere.  Right now they are not run by default during the build (you need 
to specify a flag), and ideally we should have some mention of how to run them, 
even though I kind of doubt developers do that exercise normally.

 Enhance Ptest framework config to auto-pick up list of MiniXXXDriver's test
 ---

 Key: HIVE-7254
 URL: https://issues.apache.org/jira/browse/HIVE-7254
 Project: Hive
  Issue Type: Test
  Components: Testing Infrastructure
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: trunk-mr2.properties


 Today, the Hive PTest infrastructure has a test-driver configuration called 
 "directory", so it will run all the qfiles under that directory for that 
 driver.  For example, CLIDriver is configured with directory 
 ql/src/test/queries/clientpositive.
 However, the configuration for the miniXXXDrivers (miniMRDriver, 
 miniMRDriverNegative, miniTezDriver) runs only a select number of tests under 
 the directory.  So we have to use the "include" configuration to hard-code a 
 list of tests for it to run.  This is duplicating the list of each 
 miniDriver's tests already in the /itests/qtest pom file, and can get out of 
 date.
 It would be nice if both got their information the same way.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6959) Remove vectorization related constant expression folding code once Constant propagation optimizer for Hive is committed

2014-07-28 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-6959:


Attachment: HIVE-6959.1.patch

Initial patch to check if  the changes break any existing unit tests.

 Remove vectorization related constant expression folding code once Constant 
 propagation optimizer for Hive is committed
 ---

 Key: HIVE-6959
 URL: https://issues.apache.org/jira/browse/HIVE-6959
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-6959.1.patch


 HIVE-5771 covers Constant propagation optimizer for Hive. Now that HIVE-5771 
 is committed, we should remove any vectorization related code which 
 duplicates this feature. For example, a function to be cleaned up is 
 VectorizationContext::foldConstantsForUnaryExprs(). In addition to this 
 change, constant propagation should kick in when vectorization is enabled. 
 i.e. we need to lift the HIVE_VECTORIZATION_ENABLED restriction inside 
 ConstantPropagate::transform().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6959) Remove vectorization related constant expression folding code once Constant propagation optimizer for Hive is committed

2014-07-28 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-6959:


Status: Patch Available  (was: Open)

 Remove vectorization related constant expression folding code once Constant 
 propagation optimizer for Hive is committed
 ---

 Key: HIVE-6959
 URL: https://issues.apache.org/jira/browse/HIVE-6959
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-6959.1.patch


 HIVE-5771 covers Constant propagation optimizer for Hive. Now that HIVE-5771 
 is committed, we should remove any vectorization related code which 
 duplicates this feature. For example, a function to be cleaned up is 
 VectorizationContext::foldConstantsForUnaryExprs(). In addition to this 
 change, constant propagation should kick in when vectorization is enabled. 
 i.e. we need to lift the HIVE_VECTORIZATION_ENABLED restriction inside 
 ConstantPropagate::transform().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7532) allow disabling direct sql per query with external metastore

2014-07-28 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-7532:
--

 Summary: allow disabling direct sql per query with external 
metastore
 Key: HIVE-7532
 URL: https://issues.apache.org/jira/browse/HIVE-7532
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin


Currently with external metastore, direct sql can only be disabled via 
metastore config globally. Perhaps it makes sense to have the ability to 
propagate the setting per query from client to override the metastore setting, 
e.g. if one particular query causes it to fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6601) alter database commands should support schema synonym keyword

2014-07-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076515#comment-14076515
 ] 

Thejas M Nair commented on HIVE-6601:
-

+1

 alter database commands should support schema synonym keyword
 -

 Key: HIVE-6601
 URL: https://issues.apache.org/jira/browse/HIVE-6601
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Navis
 Attachments: HIVE-6601.1.patch.txt


 It should be possible to use "alter schema" as an alternative to "alter 
 database".  But the syntax is not currently supported.
 {code}
 alter schema db1 set owner user x;  
 NoViableAltException(215@[])
 FAILED: ParseException line 1:6 cannot recognize input near 'schema' 'db1' 
 'set' in alter statement
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5189) make batching in partition retrieval in metastore applicable to more methods

2014-07-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076520#comment-14076520
 ] 

Sergey Shelukhin commented on HIVE-5189:


Note: this issue may also help with some direct SQL issues where the underlying 
RDBMS may fail on a very large IN (...) query.

 make batching in partition retrieval in metastore applicable to more methods
 

 Key: HIVE-5189
 URL: https://issues.apache.org/jira/browse/HIVE-5189
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin

 As indicated in HIVE-5158, Metastore can OOM if retrieving a large number of 
 partitions. For client-side partition filtering, the client applies batching 
 (which would avoid that) by sending parts of the filtered name list in 
 separate requests according to configuration.
 The batching is not used on the filter pushdown path, nor when retrieving all 
 partitions (e.g. when the pruner expression is not useful in non-strict 
 mode). HIVE-4914 and pushdown improvements will make this problem somewhat 
 worse by allowing more requests to go to the server.
 There needs to be some batching scheme (ideally, a somewhat generic one) that 
 would be applicable to all these paths.
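
A minimal sketch of the kind of generic batching the description asks for (the fetch callback stands in for a metastore call and the batch size is arbitrary; none of this is the metastore API):

{code}
// Generic batching sketch; the fetch callback stands in for a metastore call
// and the batch size is arbitrary.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class PartitionBatchSketch {
  static <T, R> List<R> fetchInBatches(List<T> names, int batchSize,
                                       Function<List<T>, List<R>> fetchBatch) {
    List<R> results = new ArrayList<>();
    for (int i = 0; i < names.size(); i += batchSize) {
      List<T> batch = names.subList(i, Math.min(i + batchSize, names.size()));
      results.addAll(fetchBatch.apply(batch));   // one bounded request / IN (...) per batch
    }
    return results;
  }

  public static void main(String[] args) {
    List<String> partitionNames = new ArrayList<>();
    for (int i = 0; i < 2500; i++) {
      partitionNames.add("dt=" + i);
    }
    List<String> fetched = fetchInBatches(partitionNames, 1000, batch -> batch);
    System.out.println(fetched.size());   // 2500, retrieved in three bounded requests
  }
}
{code}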



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7286) Parameterize HCatMapReduceTest for testing against all Hive storage formats

2014-07-28 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076548#comment-14076548
 ] 

Szehon Ho commented on HIVE-7286:
-

+1

 Parameterize HCatMapReduceTest for testing against all Hive storage formats
 ---

 Key: HIVE-7286
 URL: https://issues.apache.org/jira/browse/HIVE-7286
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: David Chen
Assignee: David Chen
 Attachments: HIVE-7286.1.patch, HIVE-7286.10.patch, 
 HIVE-7286.11.patch, HIVE-7286.2.patch, HIVE-7286.3.patch, HIVE-7286.4.patch, 
 HIVE-7286.5.patch, HIVE-7286.6.patch, HIVE-7286.7.patch, HIVE-7286.8.patch, 
 HIVE-7286.9.patch


 Currently, HCatMapReduceTest is extended by the following test suites:
  * TestHCatDynamicPartitioned
  * TestHCatNonPartitioned
  * TestHCatPartitioned
  * TestHCatExternalDynamicPartitioned
  * TestHCatExternalNonPartitioned
  * TestHCatExternalPartitioned
  * TestHCatMutableDynamicPartitioned
  * TestHCatMutableNonPartitioned
  * TestHCatMutablePartitioned
 These tests run against RCFile. Currently, only TestHCatDynamicPartitioned is 
 run against any other storage format (ORC).
 Ideally, HCatalog should be tested against all storage formats supported by 
 Hive. The easiest way to accomplish this is to turn HCatMapReduceTest into a 
 parameterized test fixture that enumerates all Hive storage formats. Until 
 HIVE-5976 is implemented, we would need to manually create the mapping of 
 SerDe to InputFormat and OutputFormat. This way, we can explicitly keep track 
 of which storage formats currently work with HCatalog or which ones are 
 untested or have test failures. The test fixture should also use Reflection 
 to find all classes in the classpath that implements the SerDe interface and 
 raise a failure if any of them are not enumerated.
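
A rough sketch of the parameterization idea with JUnit 4 (the storage-format names and the test body are placeholders; the real fixture would live in HCatMapReduceTest):

{code}
// JUnit 4 parameterization sketch; format names and the test body are placeholders.
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import java.util.Arrays;
import java.util.Collection;

import static org.junit.Assert.assertNotNull;

@RunWith(Parameterized.class)
public class StorageFormatSketchTest {
  private final String storageFormat;

  public StorageFormatSketchTest(String storageFormat) {
    this.storageFormat = storageFormat;
  }

  @Parameters(name = "{0}")
  public static Collection<Object[]> formats() {
    // The real fixture would enumerate every SerDe with its InputFormat/OutputFormat.
    return Arrays.asList(new Object[][] {{"RCFILE"}, {"ORC"}, {"TEXTFILE"}, {"SEQUENCEFILE"}});
  }

  @Test
  public void readWriteRoundTrip() {
    // Placeholder for the HCatalog write/read cycle run once per storage format.
    assertNotNull(storageFormat);
  }
}
{code}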



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6806) CREATE TABLE should support STORED AS AVRO

2014-07-28 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076577#comment-14076577
 ] 

Ashish Kumar Singh commented on HIVE-6806:
--

[~leftylev] thanks for looking at this. I am not sure how documentation is 
handled. Could you please help me understand what needs to be done for the 
documentation here.

[~brocknoland] thanks!

 CREATE TABLE should support STORED AS AVRO
 --

 Key: HIVE-6806
 URL: https://issues.apache.org/jira/browse/HIVE-6806
 Project: Hive
  Issue Type: New Feature
  Components: Serializers/Deserializers
Affects Versions: 0.12.0
Reporter: Jeremy Beard
Assignee: Ashish Kumar Singh
Priority: Minor
  Labels: Avro, TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-6806.1.patch, HIVE-6806.2.patch, HIVE-6806.3.patch, 
 HIVE-6806.patch


 Avro is well established and widely used within Hive, however creating 
 Avro-backed tables requires the messy listing of the SerDe, InputFormat and 
 OutputFormat classes.
 Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had 
 native Avro support.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6934) PartitionPruner doesn't handle top level constant expression correctly

2014-07-28 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-6934:


Status: Patch Available  (was: Open)

 PartitionPruner doesn't handle top level constant expression correctly
 --

 Key: HIVE-6934
 URL: https://issues.apache.org/jira/browse/HIVE-6934
 Project: Hive
  Issue Type: Bug
Reporter: Harish Butani
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-6934.1.patch, HIVE-6934.2.patch, HIVE-6934.3.patch


 You hit this error indirectly, because how we handle invalid constant 
 comparisons. Consider:
 {code}
 create table x(key int, value string) partitioned by (dt int, ts string);
 -- both these queries hit this issue
 select * from x where key = 'abc';
 select * from x where dt = 'abc';
 -- the issue is the comparison get converted to the constant false
 -- and the PartitionPruner doesn't handle top level constant exprs correctly
 {code}
 Thanks to [~hsubramaniyan] for uncovering this as part of adding tests for 
 HIVE-5376



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6934) PartitionPruner doesn't handle top level constant expression correctly

2014-07-28 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-6934:


Attachment: HIVE-6934.3.patch

Attaching the fix after rebasing with the latest trunk.

 PartitionPruner doesn't handle top level constant expression correctly
 --

 Key: HIVE-6934
 URL: https://issues.apache.org/jira/browse/HIVE-6934
 Project: Hive
  Issue Type: Bug
Reporter: Harish Butani
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-6934.1.patch, HIVE-6934.2.patch, HIVE-6934.3.patch


 You hit this error indirectly, because of how we handle invalid constant 
 comparisons. Consider:
 {code}
 create table x(key int, value string) partitioned by (dt int, ts string);
 -- both these queries hit this issue
 select * from x where key = 'abc';
 select * from x where dt = 'abc';
 -- the issue is the comparison get converted to the constant false
 -- and the PartitionPruner doesn't handle top level constant exprs correctly
 {code}
 Thanks to [~hsubramaniyan] for uncovering this as part of adding tests for 
 HIVE-5376



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7496) Exclude conf/hive-default.xml.template in version control and include it dist profile

2014-07-28 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076608#comment-14076608
 ] 

Szehon Ho commented on HIVE-7496:
-

Users/admins should not be affected, as hive-default.xml will still be 
generated during the release and included in the tarball in the same 
location.  So I don't believe we need to change the Configuring Hive docs (which 
I assume are user-facing documentation) to mention HiveConf.java?

 Exclude conf/hive-default.xml.template in version control and include it dist 
 profile
 -

 Key: HIVE-7496
 URL: https://issues.apache.org/jira/browse/HIVE-7496
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
Priority: Minor
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7496.1.patch.txt, HIVE-7496.2.patch.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6959) Remove vectorization related constant expression folding code once Constant propagation optimizer for Hive is committed

2014-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076673#comment-14076673
 ] 

Hive QA commented on HIVE-6959:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658203/HIVE-6959.1.patch

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 5784 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_between_in
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_coalesce
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_expressions
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_mapjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_math_funcs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_elt
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_div0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_short_regress
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_math_funcs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_parquet
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.ql.exec.vector.TestVectorizationContext.testBetweenFilters
org.apache.hadoop.hive.ql.exec.vector.TestVectorizationContext.testInFiltersAndExprs
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/80/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/80/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-80/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12658203

 Remove vectorization related constant expression folding code once Constant 
 propagation optimizer for Hive is committed
 ---

 Key: HIVE-6959
 URL: https://issues.apache.org/jira/browse/HIVE-6959
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-6959.1.patch


 HIVE-5771 covers the constant propagation optimizer for Hive. Now that HIVE-5771 
 is committed, we should remove any vectorization-related code that 
 duplicates this feature. For example, a function to be cleaned up is 
 VectorizationContext::foldConstantsForUnaryExprs(). In addition to this 
 change, constant propagation should kick in when vectorization is enabled; 
 i.e. we need to lift the HIVE_VECTORIZATION_ENABLED restriction inside 
 ConstantPropagate::transform().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7488) pass column names being used for inputs to authorization api

2014-07-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076700#comment-14076700
 ] 

Thejas M Nair commented on HIVE-7488:
-

I just realized that partition column names are not being made available in the 
existing logic in trunk.  Looks like I will need to check the partition pruning 
expression to get that (i.e. examine the ExprNodeDesc returned by 
parseCtx.getOpToPartPruner().get(ts)). Let me know if you have any other 
suggestions for getting the proper table-to-columns-used mapping.
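
For reference, a hedged sketch (not the patch itself) of recursively walking an 
ExprNodeDesc to collect the column names it references, e.g. from the pruning 
expression mentioned above:

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc;
import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;

public class PartPrunerColumnSketch {

  /** Collect the column names referenced anywhere in the expression tree. */
  public static Set<String> collectColumns(ExprNodeDesc expr) {
    Set<String> cols = new HashSet<String>();
    collect(expr, cols);
    return cols;
  }

  private static void collect(ExprNodeDesc expr, Set<String> cols) {
    if (expr == null) {
      return;
    }
    if (expr instanceof ExprNodeColumnDesc) {
      cols.add(((ExprNodeColumnDesc) expr).getColumn());
    }
    List<ExprNodeDesc> children = expr.getChildren();
    if (children != null) {
      for (ExprNodeDesc child : children) {
        collect(child, cols);
      }
    }
  }
}
{code}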


 pass column names being used for inputs to authorization api
 

 Key: HIVE-7488
 URL: https://issues.apache.org/jira/browse/HIVE-7488
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7488.1.patch, HIVE-7488.2.patch, 
 HIVE-7488.3.patch.txt


 HivePrivilegeObject in the authorization API has support for columns, but the 
 columns being used are not being populated for non-grant/revoke queries.
 This is to enable any implementation of the API to use this column 
 information for its authorization decisions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7533) sql std auth - set authorization privileges for tables when created from hive cli

2014-07-28 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-7533:
---

 Summary: sql std auth - set authorization privileges for tables 
when created from hive cli
 Key: HIVE-7533
 URL: https://issues.apache.org/jira/browse/HIVE-7533
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair


As SQL standard authorization mode is not available from hive-cli, the default 
permissions on a table for the table owner are not being set when the table is 
created from hive-cli.

It should be possible to set the SQL standards based authorization as the 
authorizer for hive-cli, which would update the configuration appropriately. 

hive-cli data access is actually controlled by HDFS, not the authorization 
policy. As a result, using SQL std auth from hive-cli for authorization would 
lead to a false sense of security. To avoid this, hive-cli users will have to 
keep authorization disabled on hive-cli (in the case of SQL std auth). But 
this would affect only authorization checks, not configuration updates by the 
authorizer.






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat

2014-07-28 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076742#comment-14076742
 ] 

Sushanth Sowmyan commented on HIVE-6584:


+1 on the patch.

The one thing I'd change before committing is a word-wrap for the ASF header in 
conf/hive-default.xml.template, to retain old newline behaviour there. But 
otherwise, looks good to me. 

We'll need to update those TODOs in a bit once we upgrade to a newer version of 
HBase (0.98.5+) to pick up HBASE-11555. I would have suggested doing that in 
this patch itself, given that you're already bumping the version up to 0.98.3, 
except that I see it got resolved only recently, and I don't want to 
drag this patch out any further. Could you please open another jira to track 
that TODO?

 Add HiveHBaseTableSnapshotInputFormat
 -

 Key: HIVE-6584
 URL: https://issues.apache.org/jira/browse/HIVE-6584
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.14.0

 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, 
 HIVE-6584.10.patch, HIVE-6584.11.patch, HIVE-6584.12.patch, 
 HIVE-6584.2.patch, HIVE-6584.3.patch, HIVE-6584.4.patch, HIVE-6584.5.patch, 
 HIVE-6584.6.patch, HIVE-6584.7.patch, HIVE-6584.8.patch, HIVE-6584.9.patch


 HBASE-8369 provided mapreduce support for reading from HBase table snapshots. 
 This allows a MR job to consume a stable, read-only view of an HBase table 
 directly off of HDFS. Bypassing the online region server API provides a nice 
 performance boost for the full scan. HBASE-10642 is backporting that feature 
 to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's 
 available, we should add an input format. A follow-on patch could work out 
 how to integrate this functionality into the StorageHandler, similar to how 
 HIVE-6473 integrates the HFileOutputFormat into existing table definitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7354) windows:Need to set hbase jars in hadoop classpath explicitly

2014-07-28 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076774#comment-14076774
 ] 

Sushanth Sowmyan commented on HIVE-7354:


Looks good to me. +1

 windows:Need to set hbase jars in hadoop classpath explicitly
 -

 Key: HIVE-7354
 URL: https://issues.apache.org/jira/browse/HIVE-7354
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-7354.1.patch


 On Windows, when I run the following hive-hbase integration test without setting 
 the hbase jars in the hadoop classpath, it fails with a ClassNotFoundException:
 drop table if exists hbase_1;
 create table hbase_1(key string, age int) stored by 
 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' with serdeproperties ( 
 'hbase.columns.mapping' = 'info:age');
 insert overwrite table hbase_1 select name, SUM(age) from studenttab10k group 
 by name;
 However, on Linux this test works even if the jars are not explicitly added to 
 HADOOP_CLASSPATH.
 On Windows, the tests work fine if I add the necessary hbase jars to the classpath.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7534) remove reflection from HBaseSplit

2014-07-28 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HIVE-7534:
--

 Summary: remove reflection from HBaseSplit
 Key: HIVE-7534
 URL: https://issues.apache.org/jira/browse/HIVE-7534
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Priority: Minor


HIVE-6584 does some reflection voodoo to work around the lack of HBASE-11555 
in hbase-0.98.3. This ticket is to bump the hbase dependency version 
and clean up that code once hbase-0.98.5 is released.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat

2014-07-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076782#comment-14076782
 ] 

Nick Dimiduk commented on HIVE-6584:


Thanks for having a look, [~sushanth]!

bq. The one thing I'd change before committing is a word-wrap for the ASF 
header in conf/hive-default.xml.template, to retain old newline behaviour 
there. But otherwise, looks good to me.

I believe HIVE-7496 drops conf/hive-default.xml.template altogether.

bq. We'll need to update those TODOs in a bit once we upgrade to a newer 
version of HBase (0.98.5+) to pick up HBASE-11555.

I opened HIVE-7534 to track this.

 Add HiveHBaseTableSnapshotInputFormat
 -

 Key: HIVE-6584
 URL: https://issues.apache.org/jira/browse/HIVE-6584
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.14.0

 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, 
 HIVE-6584.10.patch, HIVE-6584.11.patch, HIVE-6584.12.patch, 
 HIVE-6584.2.patch, HIVE-6584.3.patch, HIVE-6584.4.patch, HIVE-6584.5.patch, 
 HIVE-6584.6.patch, HIVE-6584.7.patch, HIVE-6584.8.patch, HIVE-6584.9.patch


 HBASE-8369 provided mapreduce support for reading from HBase table snapshots. 
 This allows a MR job to consume a stable, read-only view of an HBase table 
 directly off of HDFS. Bypassing the online region server API provides a nice 
 performance boost for the full scan. HBASE-10642 is backporting that feature 
 to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's 
 available, we should add an input format. A follow-on patch could work out 
 how to integrate this functionality into the StorageHandler, similar to how 
 HIVE-6473 integrates the HFileOutputFormat into existing table definitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7534) remove reflection from HBaseSplit

2014-07-28 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HIVE-7534:
---

Affects Version/s: 0.14.0

 remove reflection from HBaseSplit
 -

 Key: HIVE-7534
 URL: https://issues.apache.org/jira/browse/HIVE-7534
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.14.0
Reporter: Nick Dimiduk
Priority: Minor

 HIVE-6584 does some reflection voodoo to work around the lack of HBASE-11555 
 in hbase-0.98.3. This ticket is to bump the hbase dependency version 
 and clean up that code once hbase-0.98.5 is released.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23953: HIVE-7519: Refactor QTestUtil to remove its duplication with QFileClient for qtest setup and teardown

2014-07-28 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23953/#review48802
---



data/scripts/q_test_init.sql
https://reviews.apache.org/r/23953/#comment85547

I wanted to have this too, but unfortunately the hive cli does not preprocess 
hiveconf variables, so at this time I do not see an easy way to achieve this. 
One way would be to parse each statement and replace the variable with the value 
obtained from the hive conf, but I do not consider that to be a good solution. Let 
me know if you think otherwise.



itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
https://reviews.apache.org/r/23953/#comment85675

This is now obtained from system properties; however, I have set the 
default value to what we have right now. Let me know if this is not what you 
were suggesting.
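
For reference, a minimal sketch of the pattern described here; the property 
name "qtest.script.dir" and its default are illustrative assumptions, not the 
names used in the patch:

{code}
public class QTestPropertySketch {
  public static void main(String[] args) {
    // Take the location from a system property, falling back to the
    // previously hard-coded default when the property is not set.
    String scriptDir = System.getProperty("qtest.script.dir", "data/scripts");
    System.out.println("using script dir: " + scriptDir);
  }
}
{code}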



itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
https://reviews.apache.org/r/23953/#comment85674

Done.



itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
https://reviews.apache.org/r/23953/#comment85677

ClientSessionState tells Hive where to get its input and where to log 
output and error messages. Redirecting the output and error streams to stdout 
sends the logs from the init scripts into the surefire report. Let me 
know if this doesn't answer your question.


- Ashish Singh


On July 25, 2014, 10:49 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23953/
 ---
 
 (Updated July 25, 2014, 10:49 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7519
 https://issues.apache.org/jira/browse/HIVE-7519
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-7519: Refactor QTestUtil to remove its duplication with QFileClient for 
 qtest setup and teardown
 
 
 Diffs
 -
 
   data/scripts/q_test_cleanup.sql 31bd7205d85916ea352f715f2fd1462efc788208 
   data/scripts/q_test_init.sql 12afdf391132e3fdd219aaa581e1f2e210d6dee2 
   itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 
 2fefa067791bd74412c0b4efb697dc0d8bb03cd7 
 
 Diff: https://reviews.apache.org/r/23953/diff/
 
 
 Testing
 ---
 
 qTests.
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Updated] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat

2014-07-28 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HIVE-6584:
---

Attachment: HIVE-6584.13.patch

Attaching lucky v13: rebased onto trunk.

 Add HiveHBaseTableSnapshotInputFormat
 -

 Key: HIVE-6584
 URL: https://issues.apache.org/jira/browse/HIVE-6584
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.14.0

 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, 
 HIVE-6584.10.patch, HIVE-6584.11.patch, HIVE-6584.12.patch, 
 HIVE-6584.13.patch, HIVE-6584.2.patch, HIVE-6584.3.patch, HIVE-6584.4.patch, 
 HIVE-6584.5.patch, HIVE-6584.6.patch, HIVE-6584.7.patch, HIVE-6584.8.patch, 
 HIVE-6584.9.patch


 HBASE-8369 provided mapreduce support for reading from HBase table snapshots. 
 This allows a MR job to consume a stable, read-only view of an HBase table 
 directly off of HDFS. Bypassing the online region server API provides a nice 
 performance boost for the full scan. HBASE-10642 is backporting that feature 
 to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's 
 available, we should add an input format. A follow-on patch could work out 
 how to integrate this functionality into the StorageHandler, similar to how 
 HIVE-6473 integrates the HFileOutputFormat into existing table definitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat

2014-07-28 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076810#comment-14076810
 ] 

Sushanth Sowmyan commented on HIVE-6584:


Aha, sounds good. And thanks for creating the new jira.

+1 on .13.patch.

 Add HiveHBaseTableSnapshotInputFormat
 -

 Key: HIVE-6584
 URL: https://issues.apache.org/jira/browse/HIVE-6584
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.14.0

 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, 
 HIVE-6584.10.patch, HIVE-6584.11.patch, HIVE-6584.12.patch, 
 HIVE-6584.13.patch, HIVE-6584.2.patch, HIVE-6584.3.patch, HIVE-6584.4.patch, 
 HIVE-6584.5.patch, HIVE-6584.6.patch, HIVE-6584.7.patch, HIVE-6584.8.patch, 
 HIVE-6584.9.patch


 HBASE-8369 provided mapreduce support for reading from HBase table snapshots. 
 This allows a MR job to consume a stable, read-only view of an HBase table 
 directly off of HDFS. Bypassing the online region server API provides a nice 
 performance boost for the full scan. HBASE-10642 is backporting that feature 
 to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's 
 available, we should add an input format. A follow-on patch could work out 
 how to integrate this functionality into the StorageHandler, similar to how 
 HIVE-6473 integrates the HFileOutputFormat into existing table definitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7535) Make use of number of nulls column statistics in filter rule

2014-07-28 Thread Prasanth J (JIRA)
Prasanth J created HIVE-7535:


 Summary: Make use of number of nulls column statistics in filter 
rule
 Key: HIVE-7535
 URL: https://issues.apache.org/jira/browse/HIVE-7535
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Minor


The filter rule does not make use of number of nulls column statistics for IS 
NULL and IS NOT NULL expression evaluation.
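
A hedged sketch of the estimate the filter rule could apply once it consults 
the number-of-nulls statistic; the class and method names are illustrative, 
not the actual StatsRulesProcFactory code:

{code}
public class NullFilterEstimateSketch {

  /** Estimated rows surviving "col IS NULL": at most the null count. */
  public static long estimateIsNull(long numRows, long numNulls) {
    return Math.min(numNulls, numRows);
  }

  /** Estimated rows surviving "col IS NOT NULL": the non-null remainder. */
  public static long estimateIsNotNull(long numRows, long numNulls) {
    return Math.max(numRows - numNulls, 0L);
  }
}
{code}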



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7536) Make use of decimal column statistics in statistics annotation

2014-07-28 Thread Prasanth J (JIRA)
Prasanth J created HIVE-7536:


 Summary: Make use of decimal column statistics in statistics 
annotation
 Key: HIVE-7536
 URL: https://issues.apache.org/jira/browse/HIVE-7536
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Minor


HIVE-6701 added decimal column statistics. The statistics annotation optimizer 
should make use of decimal column statistics as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23953: HIVE-7519: Refactor QTestUtil to remove its duplication with QFileClient for qtest setup and teardown

2014-07-28 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23953/
---

(Updated July 28, 2014, 8:53 p.m.)


Review request for hive.


Changes
---

Address review comments.


Bugs: HIVE-7519
https://issues.apache.org/jira/browse/HIVE-7519


Repository: hive-git


Description
---

HIVE-7519: Refactor QTestUtil to remove its duplication with QFileClient for 
qtest setup and teardown


Diffs (updated)
-

  data/scripts/q_test_cleanup.sql 31bd7205d85916ea352f715f2fd1462efc788208 
  data/scripts/q_test_init.sql 12afdf391132e3fdd219aaa581e1f2e210d6dee2 
  itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 
2fefa067791bd74412c0b4efb697dc0d8bb03cd7 

Diff: https://reviews.apache.org/r/23953/diff/


Testing
---

qTests.


Thanks,

Ashish Singh



[jira] [Updated] (HIVE-7424) HiveException: Error evaluating concat(concat(' ', str2), ' ') in ql.exec.vector.VectorSelectOperator.processOp

2014-07-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-7424:
---

Status: Patch Available  (was: In Progress)

 HiveException: Error evaluating concat(concat('  ', str2), '  ') in 
 ql.exec.vector.VectorSelectOperator.processOp
 -

 Key: HIVE-7424
 URL: https://issues.apache.org/jira/browse/HIVE-7424
 Project: Hive
  Issue Type: Bug
Reporter: Matt McCline
Assignee: Matt McCline
 Attachments: HIVE-7424.1.patch, HIVE-7424.2.patch, HIVE-7424.3.patch, 
 TestWithORC.zip, fail_401.sql


 One of several found by Raj Bains.
 M/R or Tez.
 {code}
 set hive.vectorized.execution.enabled=true;
 {code}
 Query:
 {code}
 SELECT `testv1_Calcs`.`key` AS `none_key_nk`,   CONCAT(CONCAT('  
 ',`testv1_Calcs`.`str2`),'  ') AS `none_padded_str2_nk`,   
 CONCAT(CONCAT('|',RTRIM(CONCAT(CONCAT('  ',`testv1_Calcs`.`str2`),'  
 '))),'|') AS `none_z_rtrim_str_nk` FROM `default`.`testv1_Calcs` 
 `testv1_Calcs` ;
 {code}
 Stack trace:
 {code}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating 
 concat(concat('  ', str2), '  ')
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:127)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:43)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7537) Output vectorized GROUP BY with only primitive aggregate fields as columns so downstream operators will be vectorized

2014-07-28 Thread Matt McCline (JIRA)
Matt McCline created HIVE-7537:
--

 Summary: Output vectorized GROUP BY with only primitive aggregate 
fields as columns so downstream operators will be vectorized
 Key: HIVE-7537
 URL: https://issues.apache.org/jira/browse/HIVE-7537
 Project: Hive
  Issue Type: Sub-task
Reporter: Matt McCline
Assignee: Matt McCline


When running under the Tez engine, check whether the VectorGroupByOperator 
aggregates are all primitive (e.g. sum) and, if so, batch the output rows into a 
VectorizedRowBatch and vectorize the downstream operators.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7519) Refactor QTestUtil to remove its duplication with QFileClient for qtest setup and teardown

2014-07-28 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076856#comment-14076856
 ] 

Ashish Kumar Singh commented on HIVE-7519:
--

[~szehon] thanks for reviewing. Addressed your concerns on RB.

Test failures above are all due to a change in the default value of column comments: 
it is now printed as default instead of null. This change is introduced by 
the fact that source tables were earlier created using internal functions, 
which were called with null for column comments. Is it OK if I update all the 
output files, 678 in count, to have the correct column comment?

 Refactor QTestUtil to remove its duplication with QFileClient for qtest setup 
 and teardown 
 ---

 Key: HIVE-7519
 URL: https://issues.apache.org/jira/browse/HIVE-7519
 Project: Hive
  Issue Type: Improvement
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: HIVE-7519.patch


 QTestUtil hard-codes the creation and dropping of source tables for qtests. 
 QFileClient does the same thing in a better way, using the q_test_init.sql and 
 q_test_cleanup.sql scripts. As QTestUtil is growing quite large, it makes 
 sense to refactor it to use QFileClient's approach. This will also remove 
 duplicated code serving the same purpose.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7535) Make use of number of nulls column statistics in filter rule

2014-07-28 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7535:
-

Attachment: HIVE-7535.1.patch

 Make use of number of nulls column statistics in filter rule
 

 Key: HIVE-7535
 URL: https://issues.apache.org/jira/browse/HIVE-7535
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor, Statistics
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-7535.1.patch


 The filter rule does not make use of number of nulls column statistics for 
 IS NULL and IS NOT NULL expression evaluation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7535) Make use of number of nulls column statistics in filter rule

2014-07-28 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7535:
-

Status: Patch Available  (was: Open)

 Make use of number of nulls column statistics in filter rule
 

 Key: HIVE-7535
 URL: https://issues.apache.org/jira/browse/HIVE-7535
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor, Statistics
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-7535.1.patch


 The filter rule does not make use of number of nulls column statistics for 
 IS NULL and IS NOT NULL expression evaluation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23967: Enable auto conversion of SMBjoin in presence of constant propagate optimization

2014-07-28 Thread Vikram Dixit Kumaraswamy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23967/#review48907
---

Ship it!


Ship It!

- Vikram Dixit Kumaraswamy


On July 27, 2014, 5:18 p.m., Ashutosh Chauhan wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23967/
 ---
 
 (Updated July 27, 2014, 5:18 p.m.)
 
 
 Review request for hive and Ted Xu.
 
 
 Bugs: HIVE-7524
 https://issues.apache.org/jira/browse/HIVE-7524
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Enable auto conversion of SMBjoin in presence of constant propagate 
 optimization
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/AbstractBucketJoinProc.java 
 6042470 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagate.java 
 3c8940f 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java
  c1cc9f4 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedMergeJoinProc.java 
 5f7682e 
   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeConstantDesc.java 
 2420971 
   ql/src/test/results/clientpositive/join_nullsafe.q.out 37b6978 
   ql/src/test/results/clientpositive/smb_mapjoin_25.q.out bd289c3 
 
 Diff: https://reviews.apache.org/r/23967/diff/
 
 
 Testing
 ---
 
  smb_mapjoin_25.q used to fail to convert joins to SMBJoins; now it does.
 
 
 Thanks,
 
 Ashutosh Chauhan
 




[jira] [Commented] (HIVE-7524) Enable auto conversion of SMBjoin in presence of constant propagate optimization

2014-07-28 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076898#comment-14076898
 ] 

Vikram Dixit K commented on HIVE-7524:
--

+1 LGTM.

 Enable auto conversion of SMBjoin in presence of constant propagate 
 optimization
 

 Key: HIVE-7524
 URL: https://issues.apache.org/jira/browse/HIVE-7524
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-7524.1.patch, HIVE-7524.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7498) NPE on show grant for global privilege

2014-07-28 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7498:


   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks for the patch Navis!


 NPE on show grant for global privilege
 --

 Key: HIVE-7498
 URL: https://issues.apache.org/jira/browse/HIVE-7498
 Project: Hive
  Issue Type: Task
  Components: Authorization
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-7498.1.patch.txt


 {noformat}
 2014-07-24 11:10:05,961 ERROR exec.DDLTask (DDLTask.java:failed(501)) - 
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.ql.security.authorization.plugin.HivePrivilegeObject.compareTo(HivePrivilegeObject.java:66)
   at org.apache.hadoop.hive.ql.exec.DDLTask$2.compare(DDLTask.java:3156)
   at org.apache.hadoop.hive.ql.exec.DDLTask$2.compare(DDLTask.java:3153)
   at java.util.Arrays.mergeSort(Arrays.java:1270)
   at java.util.Arrays.mergeSort(Arrays.java:1281)
   at java.util.Arrays.mergeSort(Arrays.java:1281)
   at java.util.Arrays.mergeSort(Arrays.java:1281)
   at java.util.Arrays.mergeSort(Arrays.java:1281)
   at java.util.Arrays.mergeSort(Arrays.java:1281)
   at java.util.Arrays.sort(Arrays.java:1210)
   at java.util.Collections.sort(Collections.java:157)
   at 
 org.apache.hadoop.hive.ql.exec.DDLTask.writeGrantInfo(DDLTask.java:3153)
   at org.apache.hadoop.hive.ql.exec.DDLTask.showGrants(DDLTask.java:606)
   at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:455)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1513)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1280)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1094)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:918)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:908)
 {noformat}
 Seems to be a regression from HIVE-7026.
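 
 The NPE suggests a field that is null for global privileges is being compared 
 without a guard. A generic null-safe comparison helper, sketched only to 
 illustrate the kind of fix (an assumption, not the committed patch):
 {code}
 public final class NullSafe {
 
   private NullSafe() {
   }
 
   // Compare two possibly-null strings, sorting nulls first instead of throwing.
   public static int compare(String a, String b) {
     if (a == null && b == null) {
       return 0;
     }
     if (a == null) {
       return -1;
     }
     if (b == null) {
       return 1;
     }
     return a.compareTo(b);
   }
 }
 {code}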



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7536) Make use of decimal column statistics in statistics annotation

2014-07-28 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7536:
-

Status: Patch Available  (was: Open)

 Make use of decimal column statistics in statistics annotation
 --

 Key: HIVE-7536
 URL: https://issues.apache.org/jira/browse/HIVE-7536
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor, Statistics
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-7536.1.patch


 HIVE-6701 added decimal column statistics. The statistics annotation 
 optimizer should make use of decimal column statistics as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7536) Make use of decimal column statistics in statistics annotation

2014-07-28 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7536:
-

Attachment: HIVE-7536.1.patch

 Make use of decimal column statistics in statistics annotation
 --

 Key: HIVE-7536
 URL: https://issues.apache.org/jira/browse/HIVE-7536
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor, Statistics
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-7536.1.patch


 HIVE-6701 added decimal column statistics. The statistics annotation 
 optimizer should make use of decimal column statistics as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7029) Vectorize ReduceWork

2014-07-28 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076943#comment-14076943
 ] 

Jitendra Nath Pandey commented on HIVE-7029:


1. I think we should remove the commented code in VectorExtractOperator unless 
it is being kept deliberately as a comment.
2. There is possibly a typo with a double assignment in ReduceWork for the variable 
reduceColumnType. I wonder why the compiler can't catch it.
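
On point 2, a tiny illustration (hypothetical names) of why javac stays silent: 
assigning a field twice in a row is perfectly legal Java, so only static 
analysis tools will flag it.

{code}
public class DoubleAssignmentSketch {

  private String reduceColumnType;

  public void setTypes(String fromPlan, String fromSchema) {
    this.reduceColumnType = fromPlan;
    this.reduceColumnType = fromSchema;  // compiles fine, silently overwrites the first write
  }
}
{code}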

 Vectorize ReduceWork
 

 Key: HIVE-7029
 URL: https://issues.apache.org/jira/browse/HIVE-7029
 Project: Hive
  Issue Type: Sub-task
Reporter: Matt McCline
Assignee: Matt McCline
 Attachments: HIVE-7029.1.patch, HIVE-7029.2.patch, HIVE-7029.3.patch, 
 HIVE-7029.4.patch, HIVE-7029.5.patch, HIVE-7029.6.patch


 This will enable the vectorization team to work independently on vectorization 
 on the reduce side even before vectorized shuffle is ready.
 NOTE: Tez only (i.e. TezTask only)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

