[jira] [Commented] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level.[Spark Branch]

2015-01-20 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283575#comment-14283575
 ] 

Lefty Leverenz commented on HIVE-9395:
--

+1 for configuration parameters

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level.[Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch, HIVE-9395.2-spark.patch


 SparkJobMonitor may hang if the job state returns null every time; we should 
 move the timeout check here to avoid that.
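
The idea, in a rough sketch (illustrative only; JobStateSource and the method below are stand-ins, not the actual SparkJobMonitor code): keep polling the job state, but give up once a configurable submission timeout elapses while the state is still null.

{noformat}
import java.util.concurrent.TimeUnit;

// Illustrative sketch only; JobStateSource and waitForSubmission are stand-ins,
// not the real SparkJobMonitor API.
public class SubmissionTimeoutSketch {
  interface JobStateSource {
    Object getState();   // returns null until the job has been submitted/reported
  }

  // Poll the state, but stop once the configurable timeout has elapsed,
  // so a state that stays null forever cannot hang the monitor.
  static boolean waitForSubmission(JobStateSource job, long timeoutSeconds)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + TimeUnit.SECONDS.toMillis(timeoutSeconds);
    while (job.getState() == null) {
      if (System.currentTimeMillis() > deadline) {
        return false;    // caller can fail the query instead of waiting forever
      }
      Thread.sleep(1000);
    }
    return true;
  }
}
{noformat}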



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9357) Create ADD_MONTHS UDF

2015-01-20 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-9357:
-
Labels: TODOC15  (was: )

 Create ADD_MONTHS UDF
 -

 Key: HIVE-9357
 URL: https://issues.apache.org/jira/browse/HIVE-9357
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-9357.1.patch, HIVE-9357.2.patch, HIVE-9357.3.patch


 ADD_MONTHS adds a number of months to startdate: 
 add_months('2015-01-14', 1) = '2015-02-14'
 add_months('2015-01-31', 1) = '2015-02-28'
 add_months('2015-02-28', 2) = '2015-04-30'
 add_months('2015-02-28', 12) = '2016-02-29'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9264) Merge encryption branch to trunk

2015-01-20 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-9264:
-
Labels: TODOC15  (was: )

 Merge encryption branch to trunk
 

 Key: HIVE-9264
 URL: https://issues.apache.org/jira/browse/HIVE-9264
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-9264.1.patch, HIVE-9264.2.patch, HIVE-9264.2.patch, 
 HIVE-9264.2.patch, HIVE-9264.3.patch, HIVE-9264.3.patch, HIVE-9264.3.patch, 
 HIVE-9264.addendum.patch


 The team working on the encryption branch would like to merge their work to 
 trunk. This jira will track that effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9264) Merge encryption branch to trunk

2015-01-20 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283636#comment-14283636
 ] 

Lefty Leverenz commented on HIVE-9264:
--

Doc note:  This adds configuration parameters *hive.exec.stagingdir* and 
*hive.exec.copyfile.maxsize* to HiveConf.java, so they need to be documented in 
the wiki.

* [Configuration Properties -- Query and DDL Execution | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-QueryandDDLExecution]

General user documentation is also needed.  Should it go into a new wikidoc, or 
can it be added to an existing doc?  So far, the only mention of encryption in 
the wiki is in Setting Up HiveServer2:

* [Setting Up HiveServer2 -- SSL Encryption | 
https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-SSLEncryption]

A release note would also be helpful.

 Merge encryption branch to trunk
 

 Key: HIVE-9264
 URL: https://issues.apache.org/jira/browse/HIVE-9264
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-9264.1.patch, HIVE-9264.2.patch, HIVE-9264.2.patch, 
 HIVE-9264.2.patch, HIVE-9264.3.patch, HIVE-9264.3.patch, HIVE-9264.3.patch, 
 HIVE-9264.addendum.patch


 The team working on the encryption branch would like to merge their work to 
 trunk. This jira will track that effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283594#comment-14283594
 ] 

Hive QA commented on HIVE-9327:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693200/HIVE-9327.01.patch

{color:red}ERROR:{color} -1 due to 293 failed/errored test(s), 7332 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_join_pkfk
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join18
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join18_multi_distinct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cluster
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_column_access_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_translate
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_explain_logical
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_explain_rearrange
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_gby_star
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby1_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby2_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby4_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby6_map
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby6_map_skew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby6_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby7_noskew_multi_single_reducer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_complex_types
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_complex_types_multi_single_reducer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_window
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_position
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_identity_project_remove_skip
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_update
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input38
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join18
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join18_multi_distinct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join19
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join31

[jira] [Commented] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283596#comment-14283596
 ] 

Hive QA commented on HIVE-9402:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693233/HIVE-9402.4.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2437/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2437/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2437/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2437/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcCtx.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/AbstractSMBJoinProc.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/SkewJoinOptimizer.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SortMergeJoinTaskDispatcher.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkMapJoinProcessor.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagate.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ReduceSinkMapJoinProc.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/CorrelationOptimizer.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/QueryPlanTreeTransformation.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/CorrelationUtilities.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionOptimizer.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/NonBlockingOpDeDupProc.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/ExprProcCtx.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/ExprProcFactory.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/OpProcFactory.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteGBUsingIndex.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndexCtx.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPruner.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcCtx.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerInfo.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerProcFactory.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/ppd/OpWalkerInfo.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/ppd/SyntheticJoinPredicate.java'
Reverted 

[jira] [Commented] (HIVE-9357) Create ADD_MONTHS UDF

2015-01-20 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283595#comment-14283595
 ] 

Lefty Leverenz commented on HIVE-9357:
--

Doc note:  This should be documented in the Date Functions section of the UDFs 
wikidoc (with version information and a link to this issue).

* [Hive Operators and UDFs -- Date Functions | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]

A release note would also be helpful.  Does the description show the committed 
functionality, or was it modified after the discussion?

 Create ADD_MONTHS UDF
 -

 Key: HIVE-9357
 URL: https://issues.apache.org/jira/browse/HIVE-9357
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-9357.1.patch, HIVE-9357.2.patch, HIVE-9357.3.patch


 ADD_MONTHS adds a number of months to startdate: 
 add_months('2015-01-14', 1) = '2015-02-14'
 add_months('2015-01-31', 1) = '2015-02-28'
 add_months('2015-02-28', 2) = '2015-04-30'
 add_months('2015-02-28', 12) = '2016-02-29'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-4663) Needlessly adding analytical windowing columns to my select

2015-01-20 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-4663:

Component/s: PTF-Windowing

 Needlessly adding analytical windowing columns to my select
 ---

 Key: HIVE-4663
 URL: https://issues.apache.org/jira/browse/HIVE-4663
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing, SQL
Affects Versions: 0.11.0
Reporter: Frans Drijver

 Forgive the rather cryptic title, but I was unsure what the best summary 
 would be. The situation is as follows:
 If I have a query in which I select both a 'normal' column and an 
 analytical function, like so:
 {quote}
 select distinct 
 kastr.DELOGCE
 , lag(kastr.DEWNKNR) over ( partition by kastr.DEKTRNR order by 
 kastr.DETRADT, kastr.DEVPDNR )
 from RTAVP_DRKASTR kastr
 ;
 {quote}
 I get the following error:
 {quote}
 FAILED: SemanticException Failed to breakup Windowing invocations into 
 Groups. At least 1 group must only depend on input columns. Also check for 
 circular dependencies.
 Underlying error: org.apache.hadoop.hive.ql.parse.SemanticException: Line 
 3:41 Expression not in GROUP BY key 'DEKTRNR'
 {quote}
 The way around it is to also put the analytical windowing columns in my select, 
 like this:
 {quote}
 select distinct 
 kastr.DELOGCE
 , lag(kastr.DEWNKNR) over ( partition by kastr.DEKTRNR order by 
 kastr.DETRADT, kastr.DEVPDNR )
 , kastr.DEKTRNR
 , kastr.DEWNKNR
 , kastr.DETRADT
 , kastr.DEVPDNR
 from RTAVP_DRKASTR kastr
 ;
 {quote}
 Obviously this is generally unwanted behaviour, as it can widen the select 
 significantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9358) Create LAST_DAY UDF

2015-01-20 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-9358:
-
Labels: TODOC15  (was: )

 Create LAST_DAY UDF
 ---

 Key: HIVE-9358
 URL: https://issues.apache.org/jira/browse/HIVE-9358
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-9358.1.patch, HIVE-9358.2.patch


 LAST_DAY returns the date of the last day of the month that contains date:
 last_day('2015-01-14') = '2015-01-31'
 last_day('2016-02-01') = '2016-02-29'
 The last_day function comes from Oracle:
 http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions072.htm



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9358) Create LAST_DAY UDF

2015-01-20 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283599#comment-14283599
 ] 

Lefty Leverenz commented on HIVE-9358:
--

Doc note:  This should be documented in the Date Functions section of the UDFs 
wikidoc (with version information and a link to this issue).

* [Hive Operators and UDFs -- Date Functions | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]

A release note would also be helpful.

 Create LAST_DAY UDF
 ---

 Key: HIVE-9358
 URL: https://issues.apache.org/jira/browse/HIVE-9358
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-9358.1.patch, HIVE-9358.2.patch


 LAST_DAY returns the date of the last day of the month that contains date:
 last_day('2015-01-14') = '2015-01-31'
 last_day('2016-02-01') = '2016-02-29'
 The last_day function comes from Oracle:
 http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions072.htm



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283658#comment-14283658
 ] 

Hive QA commented on HIVE-6617:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693247/HIVE-6617.01.patch

{color:red}ERROR:{color} -1 due to 505 failed/errored test(s), 7323 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_allcolref_in_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_index
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_rename_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_groupby2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_archive_multi
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_array_map_access_nonconstant
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_1_sql_std
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_admin_almighty1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_admin_almighty2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_cli_nonsql
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_create_macro1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_delete
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_delete_own_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_explain
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_grant_option_role
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_insert
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_non_id
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_owner_actions_db
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_role_grant1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_role_grant2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_set_show_current_role
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_show_grant
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_update
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_update_own_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_view_sqlstd
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join19
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join25
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table_udfs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_gby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_semijoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_simple_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_windowing
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_char_cast
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_char_udf1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_combine2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_translate
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas_colname
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas_uses_database_location
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_database
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_date_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_date_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_showlocks
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_10_0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_all_non_partitioned

[jira] [Commented] (HIVE-9371) Execution error for Parquet table and GROUP BY involving CHAR data type

2015-01-20 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283837#comment-14283837
 ] 

Ferdinand Xu commented on HIVE-9371:


It failed when executing the command:
explain select value, sum(cast(key as int)), count(*) numrows
from char_2
group by value
order by value asc
limit 5;

The GroupByOperator got the writableHiveCharObjectInspector to parse a Text 
object, but the inspector should have been a WritableStringObjectInspector. 
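
In other words, the runtime value handed to the operator is a plain Text, so any code path that assumes a HiveCharWritable fails. A small illustration of the mismatch (not the fix itself):

{noformat}
import org.apache.hadoop.hive.serde2.io.HiveCharWritable;
import org.apache.hadoop.io.Text;

// Demonstrates only the type mismatch; the actual fix is to hand GroupByOperator
// an object inspector that matches the writable the Parquet reader really produces.
public class CharInspectorMismatchDemo {
  public static void main(String[] args) {
    Object value = new Text("abc");                          // what the reader returns
    System.out.println(value instanceof HiveCharWritable);   // false, so the cast in
                                                             // copyObject() would throw
  }
}
{noformat}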

 Execution error for Parquet table and GROUP BY involving CHAR data type
 ---

 Key: HIVE-9371
 URL: https://issues.apache.org/jira/browse/HIVE-9371
 Project: Hive
  Issue Type: Bug
  Components: File Formats, Query Processor
Reporter: Matt McCline
Priority: Critical

 Query fails involving PARQUET table format, CHAR data type, and GROUP BY.
 It probably fails for VARCHAR as well.
 {noformat}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to 
 org.apache.hadoop.hive.serde2.io.HiveCharWritable
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:814)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493)
   ... 10 more
 Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be 
 cast to org.apache.hadoop.hive.serde2.io.HiveCharWritable
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveCharObjectInspector.copyObject(WritableHiveCharObjectInspector.java:104)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:305)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:827)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:739)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:809)
   ... 16 more
 {noformat}
 Here is a q file:
 {noformat}
 SET hive.vectorized.execution.enabled=false;
 drop table char_2;
 create table char_2 (
   key char(10),
   value char(20)
 ) stored as parquet;
 insert overwrite table char_2 select * from src;
 select value, sum(cast(key as int)), count(*) numrows
 from src
 group by value
 order by value asc
 limit 5;
 explain select value, sum(cast(key as int)), count(*) numrows
 from char_2
 group by value
 order by value asc
 limit 5;
 -- should match the query from src
 select value, sum(cast(key as int)), count(*) numrows
 from char_2
 group by value
 order by value asc
 limit 5;
 select value, sum(cast(key as int)), count(*) numrows
 from src
 group by value
 order by value desc
 limit 5;
 explain select value, sum(cast(key as int)), count(*) numrows
 from char_2
 group by value
 order by value desc
 limit 5;
 -- should match the query from src
 select value, sum(cast(key as int)), count(*) numrows
 from char_2
 group by value
 order by value desc
 limit 5;
 drop table char_2;
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283963#comment-14283963
 ] 

Hive QA commented on HIVE-9396:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693314/HIVE-9396.3.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2439/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2439/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2439/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2439/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target itests/target itests/thirdparty 
itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target 
itests/hive-unit-hadoop2/target itests/hive-minikdc/target 
itests/hive-unit/target itests/custom-serde/target itests/util/target 
itests/qtest-spark/target hcatalog/target hcatalog/core/target 
hcatalog/streaming/target hcatalog/server-extensions/target 
hcatalog/hcatalog-pig-adapter/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target accumulo-handler/target hwi/target 
common/target common/src/gen spark-client/target service/target contrib/target 
serde/target beeline/target odbc/target cli/target 
ql/dependency-reduced-pom.xml ql/target
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1653280.

At revision 1653280.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693314 - PreCommit-HIVE-TRUNK-Build

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9415) pushJoinFilters is not required any more at time of plan generation

2015-01-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9415:
---
Status: Open  (was: Patch Available)

 pushJoinFilters is not required any more at time of plan generation 
 

 Key: HIVE-9415
 URL: https://issues.apache.org/jira/browse/HIVE-9415
 Project: Hive
  Issue Type: Task
  Components: Logical Optimizer, Query Processor
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9415.patch


 Now that PPD & PTP have been pretty stable for some time, this is just 
 repeated logic between plan generation and plan optimization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9417) Fix failing test groupby_grouping_window.q on trunk

2015-01-20 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-9417:
--

 Summary: Fix failing test groupby_grouping_window.q on trunk
 Key: HIVE-9417
 URL: https://issues.apache.org/jira/browse/HIVE-9417
 Project: Hive
  Issue Type: Test
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan


Because of the successive commits of HIVE-4809 & HIVE-9347, the Hive QA run 
didn't catch it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29961: HIVE-9302 Beeline add jar local to client

2015-01-20 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29961/#review68733
---


Hi Ferdinand!

What license are the drivers under? We'll have to make sure they both fit 
under: http://www.apache.org/legal/resolved.html

As opposed to that, I wonder if we can create some dummy class which is used to 
generate a Driver? Then you can pass a URL like jdbc:mockdb:// and we don't 
have to ship a real jar?


beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java
https://reviews.apache.org/r/29961/#comment113115

Add + e to end of string



beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java
https://reviews.apache.org/r/29961/#comment113116

Add + e to end of string



beeline/src/java/org/apache/hive/beeline/Commands.java
https://reviews.apache.org/r/29961/#comment113117

after this line we should add:

beeLine.error(e);



beeline/src/java/org/apache/hive/beeline/Commands.java
https://reviews.apache.org/r/29961/#comment113118

after this line we should add:

beeLine.error(e);



beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java
https://reviews.apache.org/r/29961/#comment113119

after this line we should add:

beeLine.error(e);



beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java
https://reviews.apache.org/r/29961/#comment113120

after this line we should add:

beeLine.error(e);



beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java
https://reviews.apache.org/r/29961/#comment113121

after this line we should add:

beeLine.error(e);



beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java
https://reviews.apache.org/r/29961/#comment113122

after this line we should add:

beeLine.error(e);


- Brock Noland


On Jan. 16, 2015, 6:17 a.m., cheng xu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29961/
 ---
 
 (Updated Jan. 16, 2015, 6:17 a.m.)
 
 
 Review request for hive, Brock Noland, Dong Chen, and Sergio Pena.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Support adding local driver jar file in the beeline side and add unit test 
 for it
 
 
 Diffs
 -
 
   beeline/src/java/org/apache/hive/beeline/BeeLine.java 630ead4 
   beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java 065eab4 
   beeline/src/java/org/apache/hive/beeline/Commands.java 291adba 
   beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java 8ba0232 
   beeline/src/main/resources/BeeLine.properties d038d46 
   beeline/src/test/org/apache/hive/beeline/TestBeelineArgParsing.java a6ee93a 
   beeline/src/test/resources/mysql-connector-java-bin.jar PRE-CREATION 
   beeline/src/test/resources/postgresql-9.3.jdbc3.jar PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/29961/diff/
 
 
 Testing
 ---
 
 Manual test done.
 Newly added test passed.
 
 
 Thanks,
 
 cheng xu
 




[jira] [Commented] (HIVE-9411) Improve error messages in TestMultiOutputFormat

2015-01-20 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284065#comment-14284065
 ] 

Xuefu Zhang commented on HIVE-9411:
---

+1

 Improve error messages in TestMultiOutputFormat
 ---

 Key: HIVE-9411
 URL: https://issues.apache.org/jira/browse/HIVE-9411
 Project: Hive
  Issue Type: Task
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor
 Fix For: 0.15.0

 Attachments: HIVE-9411.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9417) Fix failing test groupby_grouping_window.q on trunk

2015-01-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9417:
---
Status: Patch Available  (was: Open)

 Fix failing test groupby_grouping_window.q on trunk
 ---

 Key: HIVE-9417
 URL: https://issues.apache.org/jira/browse/HIVE-9417
 Project: Hive
  Issue Type: Test
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9417.patch


 Because of the successive commits of HIVE-4809 & HIVE-9347, the Hive QA run 
 didn't catch it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9417) Fix failing test groupby_grouping_window.q on trunk

2015-01-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9417:
---
Attachment: HIVE-9417.patch

Simple golden file update.

 Fix failing test groupby_grouping_window.q on trunk
 ---

 Key: HIVE-9417
 URL: https://issues.apache.org/jira/browse/HIVE-9417
 Project: Hive
  Issue Type: Test
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9417.patch


 Because of successive commits of HIVE-4809  HIVE-9347 didnt get catch it in 
 Hive QA run. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9417) Fix failing test groupby_grouping_window.q on trunk

2015-01-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9417:
---
Description: Because of successive commits of HIVE-4809 & HIVE-9347 didnt 
get caught it in Hive QA run.   (was: Because of successive commits of 
HIVE-4809 & HIVE-9347 didnt get catch it in Hive QA run. )

 Fix failing test groupby_grouping_window.q on trunk
 ---

 Key: HIVE-9417
 URL: https://issues.apache.org/jira/browse/HIVE-9417
 Project: Hive
  Issue Type: Test
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9417.patch


 Because of the successive commits of HIVE-4809 & HIVE-9347, the Hive QA run 
 didn't catch it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-1869) TestMTQueries failing on jenkins

2015-01-20 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284071#comment-14284071
 ] 

Xuefu Zhang commented on HIVE-1869:
---

+1

 TestMTQueries failing on jenkins
 

 Key: HIVE-1869
 URL: https://issues.apache.org/jira/browse/HIVE-1869
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Testing Infrastructure
Affects Versions: 0.15.0
Reporter: Carl Steinbach
Assignee: Brock Noland
 Attachments: HIVE-1869.1.patch, HIVE-1869.1.patch, TestMTQueries.log


 TestMTQueries has been failing intermittently on Hudson. The first failure I 
 can find
 a record of on Hudson is from svn rev 1052414 on December 24th, but it's 
 likely that the failures actually started earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9396:
--
Status: Patch Available  (was: Open)

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9396:
--
Attachment: (was: HIVE-9396.2.patch)

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8121) Create micro-benchmarks for ParquetSerde and evaluate performance

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8121:
--
Attachment: (was: HIVE-8121.5.patch)

 Create micro-benchmarks for ParquetSerde and evaluate performance
 -

 Key: HIVE-8121
 URL: https://issues.apache.org/jira/browse/HIVE-8121
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Attachments: HIVE-8121.6.patch


 These benchmarks should not execute queries but test only the ParquetSerde 
 code to ensure we are as efficient as possible. 
 The output of this JIRA is:
 1) Benchmark tool exists
 2) We create new tasks under HIVE-8120 to track the improvements required
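
 One hypothetical shape for such a benchmark, assuming JMH as the harness (whether the patch actually uses JMH is not confirmed here); the essential point is that only SerDe-level work is measured, never query execution:

{noformat}
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.apache.hadoop.io.Text;

// Hypothetical micro-benchmark skeleton; class and method names are illustrative.
@State(Scope.Thread)
public class ParquetSerDeBenchSketch {
  private Text sample;

  @Setup
  public void setup() {
    sample = new Text("sample row payload");
  }

  @Benchmark
  public int measureSerDeLevelWork() {
    // In the real benchmark this would call the ParquetSerde serialize()/deserialize()
    // on a prepared row; a trivial stand-in keeps the sketch self-contained.
    return sample.getLength();
  }
}
{noformat}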



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8121) Create micro-benchmarks for ParquetSerde and evaluate performance

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8121:
--
Attachment: HIVE-8121.6.patch

 Create micro-benchmarks for ParquetSerde and evaluate performance
 -

 Key: HIVE-8121
 URL: https://issues.apache.org/jira/browse/HIVE-8121
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Attachments: HIVE-8121.6.patch


 These benchmarks should not execute queries but test only the ParquetSerde 
 code to ensure we are as efficient as possible. 
 The output of this JIRA is:
 1) Benchmark tool exists
 2) We create new tasks under HIVE-8120 to track the improvements required



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9396:
--
Status: Open  (was: Patch Available)

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9396:
--
Attachment: HIVE-9396.4.patch

Attaching a new patch with tests updated so that it can be applied on trunk

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch, HIVE-9396.4.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9396:
--
Status: Patch Available  (was: Open)

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch, HIVE-9396.4.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9396:
--
Attachment: HIVE-9396.3.patch

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9396:
--
Status: Open  (was: Patch Available)

Canceling patch to rename attachment, and run patch again.

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8121) Create micro-benchmarks for ParquetSerde and evaluate performance

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8121:
--
Status: Patch Available  (was: Open)

 Create micro-benchmarks for ParquetSerde and evaluate performance
 -

 Key: HIVE-8121
 URL: https://issues.apache.org/jira/browse/HIVE-8121
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Attachments: HIVE-8121.6.patch


 These benchmarks should not execute queries but test only the ParquetSerde 
 code to ensure we are as efficient as possible. 
 The output of this JIRA is:
 1) Benchmark tool exists
 2) We create new tasks under HIVE-8120 to track the improvements required



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8121) Create micro-benchmarks for ParquetSerde and evaluate performance

2015-01-20 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8121:
--
Status: Open  (was: Patch Available)

Cancel patch to rename attachment, then submit patch again.

 Create micro-benchmarks for ParquetSerde and evaluate performance
 -

 Key: HIVE-8121
 URL: https://issues.apache.org/jira/browse/HIVE-8121
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Attachments: HIVE-8121.6.patch


 These benchmarks should not execute queries but test only the ParquetSerde 
 code to ensure we are as efficient as possible. 
 The output of this JIRA is:
 1) Benchmark tool exists
 2) We create new tasks under HIVE-8120 to track the improvements required



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9409) Spark branch, ClassNotFoundException: org.apache.commons.logging.impl.SLF4JLocationAwareLog occurs during some hive query case execution [Spark Branch]

2015-01-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283982#comment-14283982
 ] 

Brock Noland commented on HIVE-9409:


Nice investigation! Hive logging is a bit of a mess, as we use all possible 
loggers in various locations. Thus I think the simplest solution is to make the 
logger static; then it will be loaded appropriately on the server side. Sound 
good?
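
Concretely, the suggestion amounts to something like the following sketch (illustrative only, not the actual operator diff):

{noformat}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Sketch of the suggested change; SomeOperator is a placeholder class name.
public class SomeOperator {
  // Instance field: Kryo serializes the concrete logger implementation
  // (org.apache.commons.logging.impl.SLF4JLocationAwareLog) into the plan,
  // so the task side must have that class on its classpath to deserialize it.
  //
  //   private final Log LOG = LogFactory.getLog(SomeOperator.class);

  // Static field: skipped by Kryo and re-created by ordinary class
  // initialization wherever the class is loaded, client or server side.
  private static final Log LOG = LogFactory.getLog(SomeOperator.class);
}
{noformat}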

 Spark branch, ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog occurs during some hive 
 query case execution [Spark Branch]
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao
Assignee: Rui Li

 When we use the current [Spark Branch] to build the Hive package, deploy it on 
 our cluster, and execute Hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27) 
 in default mode (i.e. just Hive on MR, not Hive on Spark), the error 
 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 Other released Apache or CDH Hive versions (e.g. Apache Hive 0.14) do not have 
 this issue.
 By the way, if we use 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 hive query execution, the issue can be worked around. 
 The detailed diagnostic messages are below:
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find cl
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 

[jira] [Commented] (HIVE-9408) Add hook interface so queries can be redacted before being placed in job.xml

2015-01-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284199#comment-14284199
 ] 

Brock Noland commented on HIVE-9408:


Hi,

Assume a query has both a credit card number and a phone number. Then you could 
create {{CreditCardRedactor}} and {{PhoneNumberRedactor}}. If the configuration 
was {{CreditCardRedactor,PhoneNumberRedactor}}, then you'd first redact the 
credit card number and then pass the redacted query into the phone number 
redactor. After the loop both would be redacted.

Instead of having a list we could have a single value which would have to do 
both, but the other hooks are a list of CSV class names.

Brock
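
A toy sketch of that chaining, assuming hypothetical redactors (the class name 
and regex patterns below are illustrative only, not the ones from the patch):

{code}
// Hypothetical illustration of chained redaction; not the actual Hive hook code.
import java.util.Arrays;
import java.util.List;
import java.util.function.UnaryOperator;

public class RedactorChainDemo {
  public static void main(String[] args) {
    // Each "redactor" rewrites the query string; order follows the configured list.
    UnaryOperator<String> creditCardRedactor =
        q -> q.replaceAll("\\b\\d{16}\\b", "****************");
    UnaryOperator<String> phoneNumberRedactor =
        q -> q.replaceAll("\\b\\d{3}-\\d{4}\\b", "***-****");
    List<UnaryOperator<String>> redactors =
        Arrays.asList(creditCardRedactor, phoneNumberRedactor);

    String queryStr =
        "SELECT * FROM t WHERE card = 1234567812345678 AND phone = '555-1234'";
    // queryStr is reassigned on every pass, so the output of one redactor becomes
    // the input of the next; after the loop both values are redacted.
    for (UnaryOperator<String> redactor : redactors) {
      queryStr = redactor.apply(queryStr);
    }
    System.out.println(queryStr);
  }
}
{code}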

 Add hook interface so queries can be redacted before being placed in job.xml
 

 Key: HIVE-9408
 URL: https://issues.apache.org/jira/browse/HIVE-9408
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-9408.1.patch, HIVE-9408.2.patch, HIVE-9408.3.patch


 Today we take a query and place it in the job.xml file which is pushed to all 
 nodes the query runs on. However it's possible the query contains sensitive 
 information and should not directly be shown to users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9408) Add hook interface so queries can be redacted before being placed in job.xml

2015-01-20 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284204#comment-14284204
 ] 

Xuefu Zhang commented on HIVE-9408:
---

Okay. thanks for the explanation.
+1

 Add hook interface so queries can be redacted before being placed in job.xml
 

 Key: HIVE-9408
 URL: https://issues.apache.org/jira/browse/HIVE-9408
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-9408.1.patch, HIVE-9408.2.patch, HIVE-9408.3.patch


 Today we take a query and place it in the job.xml file which is pushed to all 
 nodes the query runs on. However it's possible the query contains sensitive 
 information and should not directly be shown to users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9272) Tests for utf-8 support

2015-01-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-9272:
-
Attachment: HIVE-9272.4.patch

patch 4 is the same as 3.  For some reason the build bot is picking this up

 Tests for utf-8 support
 ---

 Key: HIVE-9272
 URL: https://issues.apache.org/jira/browse/HIVE-9272
 Project: Hive
  Issue Type: Test
  Components: Tests, WebHCat
Affects Versions: 0.14.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Aswathy Chellammal Sreekumar
Priority: Minor
 Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, 
 HIVE-9272.4.patch, HIVE-9272.patch


 Including some test cases for utf8 support in webhcat. The first four tests 
 invoke hive, pig, mapred and streaming apis for testing the utf8 support for 
 data processed, file names and job name. The last test case tests the 
 filtering of job name with utf8 character



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9272) Tests for utf-8 support

2015-01-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-9272:
-
Status: Patch Available  (was: Open)

 Tests for utf-8 support
 ---

 Key: HIVE-9272
 URL: https://issues.apache.org/jira/browse/HIVE-9272
 Project: Hive
  Issue Type: Test
  Components: Tests, WebHCat
Affects Versions: 0.14.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Aswathy Chellammal Sreekumar
Priority: Minor
 Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, 
 HIVE-9272.4.patch, HIVE-9272.patch


 Including some test cases for utf8 support in webhcat. The first four tests 
 invoke hive, pig, mapred and streaming apis for testing the utf8 support for 
 data processed, file names and job name. The last test case tests the 
 filtering of job name with utf8 character



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9403) File tests determinism with multiple reducers

2015-01-20 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9403:
--
Attachment: HIVE-9403.2-spark.patch

Attached the patch for spark branch. Thanks.

 File tests determinism with multiple reducers
 -

 Key: HIVE-9403
 URL: https://issues.apache.org/jira/browse/HIVE-9403
 Project: Hive
  Issue Type: Test
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.15.0

 Attachments: HIVE-9403.1.patch, HIVE-9403.2-spark.patch, 
 HIVE-9403.2.patch


 If multiple reducers are used, the order of some test results needs to be 
 deterministic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9272) Tests for utf-8 support

2015-01-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-9272:
-
Status: Open  (was: Patch Available)

 Tests for utf-8 support
 ---

 Key: HIVE-9272
 URL: https://issues.apache.org/jira/browse/HIVE-9272
 Project: Hive
  Issue Type: Test
  Components: Tests, WebHCat
Affects Versions: 0.14.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Aswathy Chellammal Sreekumar
Priority: Minor
 Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, 
 HIVE-9272.patch


 Including some test cases for utf8 support in webhcat. The first four tests 
 invoke hive, pig, mapred and streaming apis for testing the utf8 support for 
 data processed, file names and job name. The last test case tests the 
 filtering of job name with utf8 character



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9408) Add hook interface so queries can be redacted before being placed in job.xml

2015-01-20 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284082#comment-14284082
 ] 

Xuefu Zhang commented on HIVE-9408:
---

Patch looks good. However, I don't quite understand the following code snippet:
{code}
+  List<Redactor> queryRedactors = getHooks(ConfVars.QUERYREDACTORHOOKS, Redactor.class);
+  for (Redactor redactor : queryRedactors) {
+    redactor.setConf(conf);
+    queryStr = redactor.redactQuery(queryStr);
+  }
{code}
it seems that queryStr is just overwritten over and over again in the loop.

 Add hook interface so queries can be redacted before being placed in job.xml
 

 Key: HIVE-9408
 URL: https://issues.apache.org/jira/browse/HIVE-9408
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-9408.1.patch, HIVE-9408.2.patch, HIVE-9408.3.patch


 Today we take a query and place it in the job.xml file which is pushed to all 
 nodes the query runs on. However it's possible the query contains sensitive 
 information and should not directly be shown to users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8121) Create micro-benchmarks for ParquetSerde and evaluate performance

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284101#comment-14284101
 ] 

Hive QA commented on HIVE-8121:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693315/HIVE-8121.6.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7332 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_window
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2440/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2440/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2440/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693315 - PreCommit-HIVE-TRUNK-Build

 Create micro-benchmarks for ParquetSerde and evaluate performance
 -

 Key: HIVE-8121
 URL: https://issues.apache.org/jira/browse/HIVE-8121
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Attachments: HIVE-8121.6.patch


 These benchmarks should not execute queries but test only the ParquetSerde 
 code to ensure we are as efficient as possible. 
 The output of this JIRA is:
 1) Benchmark tool exists
 2) We create new tasks under HIVE-8120 to track the improvements required



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29954: HIVE-9179. Add listener API to JobHandle.

2015-01-20 Thread Marcelo Vanzin


 On Jan. 17, 2015, 12:19 a.m., Xuefu Zhang wrote:
  spark-client/src/main/java/org/apache/hive/spark/client/JobHandleImpl.java, 
  line 179
  https://reviews.apache.org/r/29954/diff/1-2/?file=823286#file823286line179
 
  Sorry I didn't get it, but why?
  Clarity but not perf is my concern. Here we are notifying listeners 
  with a new Spark job ID, which is done in the for loop, which is 
  synchronized. This means no listener may be added or removed from the 
  listeners. On the other hand, sparkJobIds.add(sparkJobId) seems irrelevant 
  to any changes to listeners, unless I missed anything. I don't understand 
  why either of the two cases might happen as you suggested.
 
 Marcelo Vanzin wrote:
 Threads: T1 updating the job handle, T2 adding a listener
 
 Case 1:
Statement 1 (S1): sparkJobIds.add(sparkJobId);
Statement 2 (S2): synchronized (listeners) { /* call 
 onSparkJobStarted(newSparkJobId) on every listener */ }
 
 Timeline:
 T1: executes S1
 T2: calls addListener(), new listener is notified of the sparkJobId added 
 above
 T1: executes S2. New listener is notified again of new spark job ID.
 
 
 Case 2:
   Invert S1 and S2.
   
 T2: calls addListener()
 T1: executes S1. Listener is called with the current state of the handle 
 and new Spark job ID. Listener checks 
 `handle.getSparkJobIDs().contains(newSparkJobId)`, check fails.
 
 
 Those seem pretty easy to understand to me. The current code avoids both 
 of them.
 
 Xuefu Zhang wrote:
 I see. So the shared state of the job handler consists of state, 
 listeners, and sparkJobIds, which needs to be protected. Thus, I'd suggest we 
 change synchronized(listeners) to synchronized(this) or declare the method as 
 synchronized. No essential difference, but for better clarity.

The synchronization is *only* needed because of the listeners. It's there so 
that when you add a listener, you never miss an event - if they didn't exist, 
you wouldn't need any synchronization anywhere in this class. So it makes 
better sense to synchronize on the listeners.
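
A stripped-down sketch of the pattern under discussion (JobHandleSketch below is 
illustrative, not the actual JobHandleImpl code): both addListener() and the 
notification path lock the same listeners list, so a listener added mid-flight is 
caught up exactly once and never misses or double-receives a Spark job ID.

{code}
// Hypothetical illustration of synchronizing on the listener list.
import java.util.ArrayList;
import java.util.List;

class JobHandleSketch {
  interface Listener {
    void onSparkJobStarted(int sparkJobId);
  }

  private final List<Listener> listeners = new ArrayList<>();
  private final List<Integer> sparkJobIds = new ArrayList<>();

  void addListener(Listener l) {
    synchronized (listeners) {
      listeners.add(l);
      // Catch-up: replay job IDs recorded before this listener registered.
      for (int id : sparkJobIds) {
        l.onSparkJobStarted(id);
      }
    }
  }

  void sparkJobStarted(int sparkJobId) {
    synchronized (listeners) {
      // Recording the ID and notifying listeners happen under one lock, which
      // rules out both interleavings described in the cases above.
      sparkJobIds.add(sparkJobId);
      for (Listener l : listeners) {
        l.onSparkJobStarted(sparkJobId);
      }
    }
  }
}
{code}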


- Marcelo


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29954/#review68513
---


On Jan. 16, 2015, 11:24 p.m., Marcelo Vanzin wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29954/
 ---
 
 (Updated Jan. 16, 2015, 11:24 p.m.)
 
 
 Review request for hive, Brock Noland, chengxiang li, and Xuefu Zhang.
 
 
 Bugs: HIVE-9179
 https://issues.apache.org/jira/browse/HIVE-9179
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-9179. Add listener API to JobHandle.
 
 
 Diffs
 -
 
   spark-client/pom.xml 77016df61a0bcbd94058bcbd2825c6c210a70e14 
   spark-client/src/main/java/org/apache/hive/spark/client/BaseProtocol.java 
 f9c10b196ab47b5b4f4c0126ad455869ab68f0ca 
   spark-client/src/main/java/org/apache/hive/spark/client/JobHandle.java 
 e760ce35d92bedf4d301b08ec57d1c2dc37a39f0 
   spark-client/src/main/java/org/apache/hive/spark/client/JobHandleImpl.java 
 1b8feedb0b23aa7897dc6ac37ea5c0209e71d573 
   spark-client/src/main/java/org/apache/hive/spark/client/RemoteDriver.java 
 0d49ed3d9e33ca08d6a7526c1c434a0dd0a06a67 
   
 spark-client/src/main/java/org/apache/hive/spark/client/SparkClientImpl.java 
 a30d8cbbaae9d25b1cffdc286b546f549e439545 
   spark-client/src/test/java/org/apache/hive/spark/client/TestJobHandle.java 
 PRE-CREATION 
   
 spark-client/src/test/java/org/apache/hive/spark/client/TestSparkClient.java 
 795d62c776cec5e9da2a24b7d40bc749a03186ab 
 
 Diff: https://reviews.apache.org/r/29954/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Marcelo Vanzin
 




[jira] [Commented] (HIVE-9417) Fix failing test groupby_grouping_window.q on trunk

2015-01-20 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284088#comment-14284088
 ] 

Xuefu Zhang commented on HIVE-9417:
---

+1

 Fix failing test groupby_grouping_window.q on trunk
 ---

 Key: HIVE-9417
 URL: https://issues.apache.org/jira/browse/HIVE-9417
 Project: Hive
  Issue Type: Test
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9417.patch


 Because of successive commits of HIVE-4809 & HIVE-9347, this didn't get caught in 
 the Hive QA run. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9394) SparkCliDriver tests have sporadic timeout error

2015-01-20 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284322#comment-14284322
 ] 

Xuefu Zhang commented on HIVE-9394:
---

+1

 SparkCliDriver tests have sporadic timeout error
 

 Key: HIVE-9394
 URL: https://issues.apache.org/jira/browse/HIVE-9394
 Project: Hive
  Issue Type: Test
  Components: Tests
Affects Versions: 0.15.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-9394.patch


 There have been some sporadic exceptions in pre-commit tests like:
 {noformat}
 2015-01-15 08:31:40,805 WARN  [main]: client.SparkClientImpl 
 (SparkClientImpl.java:init(90)) - Error while waiting for client to connect.
 java.util.concurrent.ExecutionException: 
 java.util.concurrent.TimeoutException: Timed out waiting for client 
 connection.
   at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:37)
   at 
 org.apache.hive.spark.client.SparkClientImpl.init(SparkClientImpl.java:88)
   at 
 org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:75)
   at 
 org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.init(RemoteHiveSparkClient.java:82)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:53)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:56)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:128)
   at 
 org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:84)
   at 
 org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:96)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1634)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1393)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1179)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1045)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1035)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305)
   at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:882)
   at 
 org.apache.hadoop.hive.cli.TestSparkCliDriver.runTest(TestSparkCliDriver.java:234)
   at 
 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_alter_merge_orc(TestSparkCliDriver.java:162)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: java.util.concurrent.TimeoutException: Timed out waiting for 
 client connection.
   at org.apache.hive.spark.client.rpc.RpcServer$2.run(RpcServer.java:125)
   at 
 io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
   at 
 io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:123)
   at 
 

[jira] [Created] (HIVE-9422) LLAP: row-level SARGs

2015-01-20 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-9422:
--

 Summary: LLAP: row-level SARGs
 Key: HIVE-9422
 URL: https://issues.apache.org/jira/browse/HIVE-9422
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin


 When VRBs are built from encoded data, SARGs can be applied at a low level to 
 reduce the number of rows to process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9327:
--
Attachment: HIVE-9327.02.patch

New patch that solves the problem with PPD optimization.

 CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
 ---

 Key: HIVE-9327
 URL: https://issues.apache.org/jira/browse/HIVE-9327
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0

 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, HIVE-9327.patch


 ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
 would be ideal to remove this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30086: HIVE-9327

2015-01-20 Thread Jesús Camacho Rodríguez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30086/
---

Review request for hive.


Bugs: HIVE-9327
https://issues.apache.org/jira/browse/HIVE-9327


Repository: hive-git


Description
---

ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
would be ideal to remove this.


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnInfo.java 
a34a31d5dde99896c809ea481db4ff6144426da7 
  ql/src/java/org/apache/hadoop/hive/ql/exec/RowSchema.java 
450d7f364d1d83facf58e85a9bce87029b53d0d9 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SelectOperator.java 
1dbcb067c7c60d5b05141d0c59c6a59fa5b7418d 
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 
9ed2c61bcf2be97b7f89f52887bc36a7780898f5 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/AbstractSMBJoinProc.java 
a948b19a18125260df51ba8655226ed4efec510c 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPruner.java 
046a52f85efc41d65ee6e140e124d5e1d21494fc 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcCtx.java 
5d848a1291d2292a0a569e9e88c32bc6a7a0d646 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java 
abf32f179911934e48906d2a59b9d87952e77a13 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagate.java 
14e20dd6966b87d8d3fbfcaffbfe1f1a4d6b8127 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcCtx.java 
91af3aa5c70eb870fe692e9be0ec533b51ff0439 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java
 d692e8e5267c513c70efd978b2f2a6e2c14007c5 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java 
567c42e4c8f69c9276ebd202a7a2fc02f28d6ccc 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java 
b00fa52d03ebd66cc60793bb1e236ee7c18ac69a 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java 
d849dcf33f4d0d48696ff227a7594b330f5c8b63 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/NonBlockingOpDeDupProc.java 
175a53cd084e3e628a17365c826c8ae09c1ab4dc 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ReduceSinkMapJoinProc.java 
ed6f7132077144327642fece0a0737b3ea71cbe7 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/SkewJoinOptimizer.java 
28d8201654f6080987f83775c4326ea16fe5a141 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionOptimizer.java
 e16ba6c004dc5b3ce734c32d977fd0985b5d1207 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkMapJoinProcessor.java 
c69d492ea0fda9a9a2460e88725ea498dce48ddd 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/CorrelationOptimizer.java
 20655c1406c70b060a19eea7b6b57b726b5490ea 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/CorrelationUtilities.java
 dc906e8fa3a19cc5aa169c1fe5a8f619807efb38 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/QueryPlanTreeTransformation.java
 080725b933c6574c81a181cac960a4a739b576f6 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteGBUsingIndex.java 
06a9478283437dd020cd199e139bc3eedb9c15f1 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndexCtx.java
 72f458830a8474e6d4aa9d5788c8de509185d34f 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/ExprProcCtx.java 
d3caaf0b8f1859cbd246f2b9782d49bb2deb42ff 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/ExprProcFactory.java 
fdbb93ed07648cc6720b0d00981a8255a077191f 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/OpProcFactory.java 
d6a6ed6709b8a280e490909ffd499f7ea48dc77a 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java
 19503dc0262988fb5bed86c8a12326ebc529c7c4 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SortMergeJoinTaskDispatcher.java
 a135cf50fdce1e407192f9f634172f7ff12bd0aa 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java
 9ff47c73a52d4b6319eb2dfae64b01bdcdf671fd 
  ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezWork.java 
6a87929b70df81823ed0d4fbef92dc4e6f1acdb9 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java 
b838bff598bdc6c8d4c2728967d2c2bf0ee63e9e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/RowResolver.java 
469dc9f4062387580c98bebc61e2c972b5d32d84 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
4364f2830d35b89266bf79948263dd64998fe5cc 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java 
f2eb4d27fca6472a9d4b777a54fcce8a729b3cd6 
  ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java 
9abb8db04fc2d6fced7f4e0aa4628df8575e2237 
  ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerInfo.java 
9bed527d15cf50c60fecded8074228657bfb7e58 
  ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerProcFactory.java 
8a8b0d5be0fe884693f370557e3b3e8a822b8f97 
  

[jira] [Updated] (HIVE-9423) HiveServer2: handle max handler thread exhaustion gracefully

2015-01-20 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-9423:
---
Affects Version/s: 0.12.0
   0.13.0
   0.14.0

 HiveServer2: handle max handler thread exhaustion gracefully
 

 Key: HIVE-9423
 URL: https://issues.apache.org/jira/browse/HIVE-9423
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0, 0.13.0, 0.14.0, 0.15.0
Reporter: Vaibhav Gumashta

 It has been reported that when the number of active client connections is greater than 
 {{hive.server2.thrift.max.worker.threads}}, HiveServer2 becomes 
 unresponsive. This should be handled more gracefully by the server and the 
 JDBC driver, so that the end user becomes aware of the problem and can take 
 appropriate steps (either close existing connections, bump up the config 
 value, or use multiple server instances with dynamic service discovery 
 enabled).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9272) Tests for utf-8 support

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284499#comment-14284499
 ] 

Hive QA commented on HIVE-9272:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693353/HIVE-9272.4.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7333 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_window
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2443/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2443/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2443/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693353 - PreCommit-HIVE-TRUNK-Build

 Tests for utf-8 support
 ---

 Key: HIVE-9272
 URL: https://issues.apache.org/jira/browse/HIVE-9272
 Project: Hive
  Issue Type: Test
  Components: Tests, WebHCat
Affects Versions: 0.14.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Aswathy Chellammal Sreekumar
Priority: Minor
 Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, 
 HIVE-9272.4.patch, HIVE-9272.patch


 Including some test cases for utf8 support in webhcat. The first four tests 
 invoke hive, pig, mapred and streaming apis for testing the utf8 support for 
 data processed, file names and job name. The last test case tests the 
 filtering of job name with utf8 character



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6090) Audit logs for HiveServer2

2015-01-20 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HIVE-6090:
---
Attachment: HIVE-6090.1.patch

Uploading patch for unit tests to run. TestJdbcDriver2 passed with the changes.

 Audit logs for HiveServer2
 --

 Key: HIVE-6090
 URL: https://issues.apache.org/jira/browse/HIVE-6090
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability, HiveServer2
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
 Attachments: HIVE-6090.1.WIP.patch, HIVE-6090.1.patch, HIVE-6090.patch


 HiveMetastore has audit logs, and we would like to audit all queries or requests 
 to HiveServer2 as well. This will help in understanding how the APIs were used, 
 which queries were submitted, which users ran them, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9272) Tests for utf-8 support

2015-01-20 Thread Aswathy Chellammal Sreekumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284644#comment-14284644
 ] 

Aswathy Chellammal Sreekumar commented on HIVE-9272:


Thanks [~ekoifman] and [~sushanth] for review comments and verification.

 Tests for utf-8 support
 ---

 Key: HIVE-9272
 URL: https://issues.apache.org/jira/browse/HIVE-9272
 Project: Hive
  Issue Type: Test
  Components: Tests, WebHCat
Affects Versions: 0.14.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Aswathy Chellammal Sreekumar
Priority: Minor
 Fix For: 0.15.0

 Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, 
 HIVE-9272.4.patch, HIVE-9272.patch


 Including some test cases for utf8 support in webhcat. The first four tests 
 invoke hive, pig, mapred and streaming apis for testing the utf8 support for 
 data processed, file names and job name. The last test case tests the 
 filtering of job name with utf8 character



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-20 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9402:
--
Status: Patch Available  (was: In Progress)

 Create GREATEST and LEAST udf
 -

 Key: HIVE-9402
 URL: https://issues.apache.org/jira/browse/HIVE-9402
 Project: Hive
  Issue Type: Task
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9402.1.patch, HIVE-9402.2.patch, HIVE-9402.3.patch, 
 HIVE-9402.4.patch, HIVE-9402.4.patch, HIVE-9402.5.patch, HIVE-9402.5.patch


 GREATEST function returns the greatest value in a list of values
 Signature: T greatest(T v1, T v2, ...)
 all values should be the same type (like in COALESCE)
 LEAST returns the least value in a list of values
 Signature: T least(T v1, T v2, ...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-20 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9402:
--
Attachment: HIVE-9402.5.patch

build 2444 says - no tests executed
attaching HIVE-9402.5.patch again

 Create GREATEST and LEAST udf
 -

 Key: HIVE-9402
 URL: https://issues.apache.org/jira/browse/HIVE-9402
 Project: Hive
  Issue Type: Task
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9402.1.patch, HIVE-9402.2.patch, HIVE-9402.3.patch, 
 HIVE-9402.4.patch, HIVE-9402.4.patch, HIVE-9402.5.patch, HIVE-9402.5.patch


 GREATEST function returns the greatest value in a list of values
 Signature: T greatest(T v1, T v2, ...)
 all values should be the same type (like in COALESCE)
 LEAST returns the least value in a list of values
 Signature: T least(T v1, T v2, ...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9416) Get rid of Extract Operator

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284701#comment-14284701
 ] 

Hive QA commented on HIVE-9416:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693432/HIVE-9416.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2452/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2452/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2452/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2452/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693432 - PreCommit-HIVE-TRUNK-Build

 Get rid of Extract Operator
 ---

 Key: HIVE-9416
 URL: https://issues.apache.org/jira/browse/HIVE-9416
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9416.patch


 {{Extract Operator}} has been there for legacy reasons, but there is no 
 functionality it provides which can't be provided by {{Select Operator}}. 
 Instead of having two operators, one being a subset of the other, we should just 
 get rid of {{Extract}} and simplify our codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-9371) Execution error for Parquet table and GROUP BY involving CHAR data type

2015-01-20 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu reassigned HIVE-9371:
--

Assignee: Ferdinand Xu

 Execution error for Parquet table and GROUP BY involving CHAR data type
 ---

 Key: HIVE-9371
 URL: https://issues.apache.org/jira/browse/HIVE-9371
 Project: Hive
  Issue Type: Bug
  Components: File Formats, Query Processor
Reporter: Matt McCline
Assignee: Ferdinand Xu
Priority: Critical

 Query fails involving the PARQUET table format, the CHAR data type, and GROUP BY.
 It probably fails for VARCHAR, too.
 {noformat}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to 
 org.apache.hadoop.hive.serde2.io.HiveCharWritable
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:814)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493)
   ... 10 more
 Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be 
 cast to org.apache.hadoop.hive.serde2.io.HiveCharWritable
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveCharObjectInspector.copyObject(WritableHiveCharObjectInspector.java:104)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:305)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:827)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:739)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:809)
   ... 16 more
 {noformat}
 Here is a q file:
 {noformat}
 SET hive.vectorized.execution.enabled=false;
 drop table char_2;
 create table char_2 (
   key char(10),
   value char(20)
 ) stored as parquet;
 insert overwrite table char_2 select * from src;
 select value, sum(cast(key as int)), count(*) numrows
 from src
 group by value
 order by value asc
 limit 5;
 explain select value, sum(cast(key as int)), count(*) numrows
 from char_2
 group by value
 order by value asc
 limit 5;
 -- should match the query from src
 select value, sum(cast(key as int)), count(*) numrows
 from char_2
 group by value
 order by value asc
 limit 5;
 select value, sum(cast(key as int)), count(*) numrows
 from src
 group by value
 order by value desc
 limit 5;
 explain select value, sum(cast(key as int)), count(*) numrows
 from char_2
 group by value
 order by value desc
 limit 5;
 -- should match the query from src
 select value, sum(cast(key as int)), count(*) numrows
 from char_2
 group by value
 order by value desc
 limit 5;
 drop table char_2;
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Attachment: HIVE-6617.02.patch

Addressed 
(1) remove from unreserved words (KW_ALL, KW_ALTER, 
KW_ARRAY,KW_AS,KW_AUTHORIZATION,KW_BETWEEN,KW_BIGINT,KW_BINARY,KW_BOOLEAN,KW_BOTH,KW_BY,KW_CREATE,KW_CUBE,KW_CURSOR,KW_DATE,KW_DECIMAL,KW_DEFAULT,KW_DELETE,KW_DESCRIBE,KW_DOUBLE,KW_DROP,KW_EXISTS,KW_EXTERNAL,KW_FALSE,KW_FETCH,KW_FLOAT,KW_FOR,KW_FULL,KW_GRANT,KW_GROUP,KW_GROUPING,KW_IMPORT,KW_IN,KW_INSERT,KW_INT,KW_INTERSECT,KW_INTO,KW_IS,KW_LATERAL,KW_LEFT,KW_LIKE,KW_LIMIT,KW_LOCAL,KW_NONE,KW_NULL,KW_OF,KW_ORDER,KW_OUT,KW_OUTER,KW_PARTITION,KW_PERCENT,KW_PROCEDURE,KW_RANGE,KW_READS,KW_REVOKE,KW_RIGHT,KW_ROLLUP,KW_ROW,KW_ROWS,KW_SET,KW_SMALLINT,KW_TABLE,KW_TIMESTAMP,KW_TO,KW_TRIGGER,KW_TRUNCATE,KW_UNION,KW_UPDATE,KW_USER,KW_USING,KW_VALUES,KW_WITH,KW_TRUE)
 following SQL 11
(2) function(star) problem
(3) partitionTableFunctionSource refers to partitionedTableFunction (need to 
confirm)

TODO
(1) LPAREN in expression grouping set
(2) tableSource??? (need to confirm)

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284507#comment-14284507
 ] 

Hive QA commented on HIVE-9327:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693377/HIVE-9327.02.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2445/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2445/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2445/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2445/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693377 - PreCommit-HIVE-TRUNK-Build

 CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
 ---

 Key: HIVE-9327
 URL: https://issues.apache.org/jira/browse/HIVE-9327
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0

 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, HIVE-9327.patch


 ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
 would be ideal to remove this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284504#comment-14284504
 ] 

Hive QA commented on HIVE-9402:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693363/HIVE-9402.5.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2444/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2444/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2444/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2444/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'hcatalog/src/test/e2e/templeton/deployers/deploy_e2e_artifacts.sh'
Reverted 'hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm'
Reverted 'hcatalog/src/test/e2e/templeton/build.xml'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target itests/target itests/thirdparty 
itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target 
itests/hive-unit-hadoop2/target itests/hive-minikdc/target 
itests/hive-unit/target itests/custom-serde/target itests/util/target 
itests/qtest-spark/target hcatalog/target 
hcatalog/src/test/e2e/templeton/tests/utf8.conf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693363 - PreCommit-HIVE-TRUNK-Build

 Create GREATEST and LEAST udf
 -

 Key: HIVE-9402
 URL: https://issues.apache.org/jira/browse/HIVE-9402
 Project: Hive
  Issue Type: Task
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9402.1.patch, HIVE-9402.2.patch, HIVE-9402.3.patch, 
 HIVE-9402.4.patch, HIVE-9402.4.patch, HIVE-9402.5.patch


 GREATEST function returns the greatest value in a list of values
 Signature: T greatest(T v1, T v2, ...)
 all values should be the same type (like in COALESCE)
 LEAST returns the least value in a list of values
 Signature: T least(T v1, T v2, ...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9184) Modify HCatClient to support new notification methods in HiveMetaStoreClient

2015-01-20 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284503#comment-14284503
 ] 

Sushanth Sowmyan commented on HIVE-9184:


+1.

The test failures reported here seem unrelated to this patch.

 Modify HCatClient to support new notification methods in HiveMetaStoreClient
 

 Key: HIVE-9184
 URL: https://issues.apache.org/jira/browse/HIVE-9184
 Project: Hive
  Issue Type: New Feature
  Components: HCatalog
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.15.0

 Attachments: HIVE-9184.patch


 HIVE-9174 adds a new DbNotificationListener, and methods to 
 HiveMetaStoreClient to fetch the events.  The fetching of events should be 
 added to HCatClient as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284591#comment-14284591
 ] 

Hive QA commented on HIVE-6617:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693420/HIVE-6617.04.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2450/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2450/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2450/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2450/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693420 - PreCommit-HIVE-TRUNK-Build

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9422) LLAP: row-level vectorized SARGs

2015-01-20 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-9422:
--
Summary: LLAP: row-level vectorized SARGs  (was: LLAP: row-level SARGs)

 LLAP: row-level vectorized SARGs
 

 Key: HIVE-9422
 URL: https://issues.apache.org/jira/browse/HIVE-9422
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin

 When VRBs are built from encoded data, SARGs can be applied at a low level to 
 reduce the number of rows to process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30055: HIVE-9337 : Move more hive.spark.* configurations to HiveConf

2015-01-20 Thread Szehon Ho

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30055/
---

(Updated Jan. 20, 2015, 11:34 p.m.)


Review request for hive and chengxiang li.


Changes
---

Addressed review comments from Lefty and Brock.  Also, in the descriptions, put 
'Hive' as the client in order to clarify it.

Looked a little more at Chengxiang's suggestion to use --conf to pass the 
values down to the remote Spark driver; I guess I must have had a bug in my 
original attempt. After fixing it, I ran a few basic tests and it seemed to 
work.


Bugs: HIVE-9337
https://issues.apache.org/jira/browse/HIVE-9337


Repository: hive-git


Description
---

This change allows the Remote Spark Driver's properties to be set dynamically 
via Hive configuration (i.e., set commands).

Went through the Remote Spark Driver's properties and added them to HiveConf, 
fixing the descriptions so that they're clearer in a global context alongside 
other Hive properties.  Also fixed a bug in one description that stated the 
default value of the max message size is 10MB; it should read 50MB.  One open 
question: I did not move 'hive.spark.log.dir', as I could not find where it is 
read and do not know whether it is still being used somewhere.

The passing of these properties between the client (Hive) and RemoteSparkDriver 
is done via the properties file.  One note is that these properties have to 
carry the 'spark' prefix, as SparkConf only accepts those.  I tried for a long 
time to pass them via 'conf' but found that it won't work (see 
SparkSubmitArguments.scala).  It may be possible to pass them each as another 
argument (like --hive.spark.XXX=YYY), but I think it's more scalable to do it 
via the properties file.

On the Remote Spark Driver side, I kept the defensive logic that provides a 
default value in case the conf object doesn't contain the property.  This may 
occur if a prop is unset.  For this, I had to instantiate a HiveConf on that 
process to get the default value, as some of the timeout props need a HiveConf 
instance to do their calculation.
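
A minimal sketch of the set-command flow this enables, from the Hive client 
side (the two property names below are only examples of hive.spark.* keys 
covered here, not an authoritative list):

{noformat}
-- set dynamically from the Hive client; per the description above, the values
-- reach the Remote Spark Driver through the generated properties file
set hive.spark.client.connect.timeout=30000ms;
set hive.spark.client.server.connect.timeout=90000ms;
{noformat}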


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 9a830d2 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveSparkClientFactory.java 
334c191 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/RemoteHiveSparkClient.java 
044f189 
  spark-client/src/main/java/org/apache/hive/spark/client/RemoteDriver.java 
dab92f6 
  
spark-client/src/main/java/org/apache/hive/spark/client/SparkClientFactory.java 
5e3777a 
  spark-client/src/main/java/org/apache/hive/spark/client/SparkClientImpl.java 
851e937 
  spark-client/src/main/java/org/apache/hive/spark/client/rpc/Rpc.java ac71ae9 
  
spark-client/src/main/java/org/apache/hive/spark/client/rpc/RpcConfiguration.java
 5a826ba 
  spark-client/src/test/java/org/apache/hive/spark/client/TestSparkClient.java 
def4907 
  spark-client/src/test/java/org/apache/hive/spark/client/rpc/TestRpc.java 
a2dd3e6 

Diff: https://reviews.apache.org/r/30055/diff/


Testing
---


Thanks,

Szehon Ho



[jira] [Updated] (HIVE-9414) Fixup post HIVE-9264 - Merge encryption branch to trunk

2015-01-20 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9414:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Thank you Vikram and I apologize for my mistake. I have committed this to trunk!

 Fixup post HIVE-9264 - Merge encryption branch to trunk
 ---

 Key: HIVE-9414
 URL: https://issues.apache.org/jira/browse/HIVE-9414
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Vikram Dixit K
 Fix For: 0.15.0

 Attachments: HIVE-9414.1.patch.txt


 See 
 https://issues.apache.org/jira/browse/HIVE-9264?focusedCommentId=14283223page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14283223



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Status: Patch Available  (was: Open)

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284439#comment-14284439
 ] 

Pengcheng Xiong commented on HIVE-6617:
---

Now bringing it down from 307 to 166.

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Status: Open  (was: Patch Available)

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284563#comment-14284563
 ] 

Hive QA commented on HIVE-6617:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693415/HIVE-6617.03.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2449/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2449/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2449/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2449/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693415 - PreCommit-HIVE-TRUNK-Build

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8966) Delta files created by hive hcatalog streaming cannot be compacted

2015-01-20 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284927#comment-14284927
 ] 

Owen O'Malley commented on HIVE-8966:
-

This looks good, Alan. +1

One minor nit is that the class javadoc for ValidReadTxnList has And instead 
of the intended An.


 Delta files created by hive hcatalog streaming cannot be compacted
 --

 Key: HIVE-8966
 URL: https://issues.apache.org/jira/browse/HIVE-8966
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.14.0
 Environment: hive
Reporter: Jihong Liu
Assignee: Alan Gates
Priority: Critical
 Fix For: 0.14.1

 Attachments: HIVE-8966.2.patch, HIVE-8966.3.patch, HIVE-8966.4.patch, 
 HIVE-8966.5.patch, HIVE-8966.patch


 Hive hcatalog streaming also creates a file like bucket_n_flush_length in 
 each delta directory, where n is the bucket number. But compactor.CompactorMR 
 thinks this file also needs to be compacted. This file of course cannot be 
 compacted, so compactor.CompactorMR will not continue with the compaction. 
 In a test, after removing the bucket_n_flush_length file, the alter table 
 partition compact finished successfully. If that file is not deleted, nothing 
 gets compacted. 
 This is probably a very severe bug. Both 0.13 and 0.14 have this issue.
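 A rough illustration of the compaction command referred to above (the table 
 and partition names here are invented for the example):
 {noformat}
 ALTER TABLE web_logs PARTITION (ds='2015-01-20') COMPACT 'major';
 {noformat}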



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Hive 0.14.1 release

2015-01-20 Thread Vaibhav Gumashta
Hi Vikram,

I'd like to get this in: HIVE-8890
https://issues.apache.org/jira/browse/HIVE-8890 [HiveServer2 dynamic
service discovery: use persistent ephemeral nodes curator recipe].

Thanks,
--Vaibhav

On Mon, Jan 19, 2015 at 9:29 PM, Alan Gates ga...@hortonworks.com wrote:

 I'd really like to get HIVE-8966 in there, since it breaks streaming
 ingest.  The patch is ready to go, it's just waiting on a review, which
 Owen has promised to do soon.

 Alan.

   Vikram Dixit K vikram.di...@gmail.com
  January 19, 2015 at 18:53
 Hi All,

 I am going to be creating the branch 1.0 as mentioned earlier, tomorrow. I
 have the following list of jiras that I want to get committed to the branch
 before creating an RC.

 HIVE-9112
 HIVE-6997 : Delete hive server 1
 HIVE-8485
 HIVE-9053

 Please let me know if you would like me to include any other jiras.

 Thanks
 Vikram.


 On Fri, Jan 16, 2015 at 1:35 PM, Vikram Dixit K vikram.di...@gmail.com



   Thejas Nair the...@hortonworks.com
  January 1, 2015 at 10:23
 Yes, 1.0 is a good opportunity to remove some of the deprecated
 components. The change to remove HiveServer1 is already there in trunk;
 we should include that.
 We can also use 1.0 release to clarify the public vs private status of
 some of the APIs.

 Thanks for the reminder about the documentation status of 1.0. I will
 look at some of them.


 On Wed, Dec 31, 2014 at 12:12 AM, Lefty Leverenz

   Lefty Leverenz leftylever...@gmail.com
  December 31, 2014 at 0:12
 Oh, now I get it. The 1.0.0 *branch* of Hive. Okay.

 -- Lefty

 On Tue, Dec 30, 2014 at 11:43 PM, Lefty Leverenz leftylever...@gmail.com

   Lefty Leverenz leftylever...@gmail.com
  December 30, 2014 at 23:43
 I thought x.x.# releases were just for fixups, x.#.x could include new
 features, and #.x.x were major releases that might have some
 backward-incompatible changes. But I guess we haven't agreed on that.

 As for documentation, we still have 84 jiras with TODOC14 labels
 (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14).
 Not to mention 25 TODOC13 labels
 (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13),
 eleven TODOC12
 (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12),
 seven TODOC11
 (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11),
 and seven TODOC10
 (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10).

 That's 134 doc tasks to finish for a Hive 1.0.0 release -- preferably by
 the release date, not after. Because expectations are higher for 1.0.0
 releases.


 -- Lefty

 On Tue, Dec 30, 2014 at 5:23 PM, Vikram Dixit K vikram.di...@gmail.com

   Vikram Dixit K vikram.di...@gmail.com
  December 30, 2014 at 17:23
 Hi Folks,

 Given that there have been a number of fixes that have gone into branch
 0.14 in the past 8 weeks, I would like to make a release of 0.14.1 soon. I
 would like to fix some of the release issues as well this time around. I am
 thinking of some time around 15th January for getting a RC out. Please let
 me know if you have any concerns. Also, from a previous thread, I would
 like to make this release the 1.0 branch of hive. The process for getting
 jiras into this release is going to be the same as the previous one viz.:

 1. Mark the jira with fix version 0.14.1 and update the status to
 blocker/critical.
 2. If a committer +1s the patch for 0.14.1, it is good to go in. Please
 mention me in the jira in case you are not sure if the jira should make it
 for 0.14.1.

 Thanks
 Vikram.


 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or entity
 to which it is addressed and may contain information that is confidential,
 privileged and exempt from disclosure under applicable law. If the reader
 of this message is not the intended recipient, you are hereby notified that
 any printing, copying, dissemination, distribution, disclosure or
 forwarding of this communication is strictly prohibited. If you have
 received this communication in error, please contact the sender immediately
 and delete it from your system. Thank You.



[jira] [Created] (HIVE-9423) HiveServer2: handle max handler thread exhaustion gracefully

2015-01-20 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-9423:
--

 Summary: HiveServer2: handle max handler thread exhaustion 
gracefully
 Key: HIVE-9423
 URL: https://issues.apache.org/jira/browse/HIVE-9423
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.15.0
Reporter: Vaibhav Gumashta


It has been reported that when the # of active client connections is greater 
than {{hive.server2.thrift.max.worker.threads}}, HiveServer2 becomes 
unresponsive. This should be handled more gracefully by the server and the JDBC 
driver, so that the end user becomes aware of the problem and can take 
appropriate steps (either close existing connections, bump up the config value, 
or use multiple server instances with dynamic service discovery enabled).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9423) HiveServer2: handle max handler thread exhaustion gracefully

2015-01-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-9423:

Description: It has been reported that when the # of client connections is 
greater than {{hive.server2.thrift.max.worker.threads}}, HiveServer2 stops 
accepting new connections and ends up having to be restarted. This should be 
handled more gracefully by the server and the JDBC driver, so that the end user 
becomes aware of the problem and can take appropriate steps (either close 
existing connections, bump up the config value, or use multiple server 
instances with dynamic service discovery enabled).  (was: It has been reported 
that when the # of active client connections is greater than 
{{hive.server2.thrift.max.worker.threads}}, HiveServer2 becomes unresponsive. 
This should be handled more gracefully by the server and the JDBC driver, so 
that the end user becomes aware of the problem and can take appropriate steps 
(either close existing connections, bump up the config value, or use multiple 
server instances with dynamic service discovery enabled).)

 HiveServer2: handle max handler thread exhaustion gracefully
 

 Key: HIVE-9423
 URL: https://issues.apache.org/jira/browse/HIVE-9423
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0, 0.13.0, 0.14.0, 0.15.0
Reporter: Vaibhav Gumashta

 It has been reported that when the # of client connections is greater than 
 {{hive.server2.thrift.max.worker.threads}}, HiveServer2 stops accepting new 
 connections and ends up having to be restarted. This should be handled more 
 gracefully by the server and the JDBC driver, so that the end user becomes 
 aware of the problem and can take appropriate steps (either close existing 
 connections, bump up the config value, or use multiple server instances with 
 dynamic service discovery enabled).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9208) MetaStore DB schema inconsistent for MS SQL Server in use of varchar/nvarchar

2015-01-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9208:

Attachment: HIVE-9208.2.patch

Made a 2nd patch that keeps the partition name and SDS location types consistent.
[~ekoifman], can you review it? Thanks!

 MetaStore DB schema inconsistent for MS SQL Server in use of varchar/nvarchar
 -

 Key: HIVE-9208
 URL: https://issues.apache.org/jira/browse/HIVE-9208
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.14.0
Reporter: Eugene Koifman
Assignee: Xiaobing Zhou
 Attachments: HIVE-9208.1.patch, HIVE-9208.2.patch


 hive-schema-0.15.0.mssql.sql has PARTITIONS.PART_NAME as NVARCHAR, but 
 COMPLETED_TXN_COMPONENTS.CTC_PARTITION, COMPACTION_QUEUE.CQ_PARTITION, 
 HIVE_LOCKS.HL_PARTITION, and TXN_COMPONENTS.TC_PARTITION all use VARCHAR.  This 
 cannot be right since they all store the same value.
 The same is true of hive-schema-0.14.0.mssql.sql and the two corresponding 
 hive-txn-schema-... files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284522#comment-14284522
 ] 

Brock Noland commented on HIVE-6617:


Could we put {{nonReserved}} on separate lines, say 80-100 chars per line? I 
think that'd help when we make a change there, from a diff perspective.

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6090) Audit logs for HiveServer2

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284558#comment-14284558
 ] 

Hive QA commented on HIVE-6090:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693414/HIVE-6090.1.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2447/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2447/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2447/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2447/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693414 - PreCommit-HIVE-TRUNK-Build

 Audit logs for HiveServer2
 --

 Key: HIVE-6090
 URL: https://issues.apache.org/jira/browse/HIVE-6090
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability, HiveServer2
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
  Labels: audit, hiveserver
 Fix For: 0.15.0

 Attachments: HIVE-6090.1.WIP.patch, HIVE-6090.1.patch, HIVE-6090.patch


 HiveMetastore has audit logs, and we would like to audit all queries or 
 requests to HiveServer2 also. This will help in understanding how the APIs 
 were used, queries submitted, users, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284560#comment-14284560
 ] 

Hive QA commented on HIVE-9327:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693416/HIVE-9327.03.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2448/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2448/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2448/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2448/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693416 - PreCommit-HIVE-TRUNK-Build

 CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
 ---

 Key: HIVE-9327
 URL: https://issues.apache.org/jira/browse/HIVE-9327
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0

 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, 
 HIVE-9327.03.patch, HIVE-9327.patch


 ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
 would be ideal to remove this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9396) date_add()/date_sub() should allow tinyint/smallint/bigint arguments in addition to int

2015-01-20 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284601#comment-14284601
 ] 

Jason Dere commented on HIVE-9396:
--

Test failures are not related. +1

 date_add()/date_sub() should allow tinyint/smallint/bigint arguments in 
 addition to int
 ---

 Key: HIVE-9396
 URL: https://issues.apache.org/jira/browse/HIVE-9396
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Sergio Peña
 Attachments: HIVE-9396.3.patch, HIVE-9396.4.patch


 {noformat}
 hive> select c1, date_add('1985-01-01', c1) from short1;
 FAILED: SemanticException [Error 10014]: Line 1:11 Wrong arguments 'c1':  
 DATE_ADD() only takes INT types as second  argument, got SHORT
 {noformat}
 We should allow date_add()/date_sub() to take any integral type for the 2nd 
 argument, rather than just int.
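 A sketch of the calls that would be expected to work once integral widening is 
 allowed (short1 and c1 are the same hypothetical table and smallint column as 
 in the error above):
 {noformat}
 select c1, date_add('1985-01-01', c1) from short1;
 select date_sub('1985-01-01', cast(30 as bigint));
 {noformat}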



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Attachment: HIVE-6617.03.patch

remove update status ambiguity

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Status: Patch Available  (was: Open)

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6090) Audit logs for HiveServer2

2015-01-20 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HIVE-6090:
---
Fix Version/s: 0.15.0
   Labels: audit hiveserver  (was: )
   Status: Patch Available  (was: Open)

 Audit logs for HiveServer2
 --

 Key: HIVE-6090
 URL: https://issues.apache.org/jira/browse/HIVE-6090
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability, HiveServer2
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
  Labels: hiveserver, audit
 Fix For: 0.15.0

 Attachments: HIVE-6090.1.WIP.patch, HIVE-6090.1.patch, HIVE-6090.patch


 HiveMetastore has audit logs, and we would like to audit all queries or 
 requests to HiveServer2 also. This will help in understanding how the APIs 
 were used, queries submitted, users, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Status: Open  (was: Patch Available)

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9272) Tests for utf-8 support

2015-01-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-9272:
-
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

patch 4 committed to trunk.
Thanks [~asreekumar] for the contribution and [~sushanth] for the review

 Tests for utf-8 support
 ---

 Key: HIVE-9272
 URL: https://issues.apache.org/jira/browse/HIVE-9272
 Project: Hive
  Issue Type: Test
  Components: Tests, WebHCat
Affects Versions: 0.14.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Aswathy Chellammal Sreekumar
Priority: Minor
 Fix For: 0.15.0

 Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, 
 HIVE-9272.4.patch, HIVE-9272.patch


 Including some test cases for utf8 support in webhcat. The first four tests 
 invoke the hive, pig, mapred and streaming apis to test utf8 support for the 
 data processed, file names, and job names. The last test case tests filtering 
 of the job name with a utf8 character.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-20 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9402:
--
Status: In Progress  (was: Patch Available)

 Create GREATEST and LEAST udf
 -

 Key: HIVE-9402
 URL: https://issues.apache.org/jira/browse/HIVE-9402
 Project: Hive
  Issue Type: Task
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9402.1.patch, HIVE-9402.2.patch, HIVE-9402.3.patch, 
 HIVE-9402.4.patch, HIVE-9402.4.patch, HIVE-9402.5.patch


 GREATEST function returns the greatest value in a list of values
 Signature: T greatest(T v1, T v2, ...)
 all values should be the same type (like in COALESCE)
 LEAST returns the least value in a list of values
 Signature: T least(T v1, T v2, ...)
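 A minimal usage sketch, assuming the signatures above:
 {noformat}
 select greatest(3, 7, 5);     -- returns 7
 select least('b', 'a', 'c');  -- returns 'a'
 {noformat}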



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9271) Add ability for client to request metastore to fire an event

2015-01-20 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284496#comment-14284496
 ] 

Sushanth Sowmyan commented on HIVE-9271:


btw, per the hive manual, ALTER TABLE TOUCH PARTITION seems to be a way to do 
this:

==
Alter Table/Partition Touch
ALTER TABLE table_name TOUCH [PARTITION partition_spec];
TOUCH reads the metadata, and writes it back. This has the effect of causing 
the pre/post execute hooks to fire. An example use case is if you have a hook 
that logs all the tables/partitions that were modified, along with an external 
script that alters the files on HDFS directly. Since the script modifies files 
outside of hive, the modification wouldn't be logged by the hook. The external 
script could call TOUCH to fire the hook and mark the said table or partition 
as modified.
Also, it may be useful later if we incorporate reliable last modified times. 
Then touch would update that time as well.
==
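
For instance, an external script that rewrites files under a partition could 
fire the hooks with something like this (table and partition values invented 
purely for illustration):

{noformat}
ALTER TABLE page_view TOUCH PARTITION (ds='2015-01-20');
{noformat}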

I'm not certain if what this translates to is markPartitionForEvent, because I 
haven't looked at the impl yet.

 Add ability for client to request metastore to fire an event
 

 Key: HIVE-9271
 URL: https://issues.apache.org/jira/browse/HIVE-9271
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Reporter: Alan Gates
Assignee: Alan Gates

 Currently all events in Hive are fired by the metastore.  However, there are 
 events that only the client fully understands, such as DML operations.  There 
 should be a way for the client to request the metastore to fire a particular 
 event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284544#comment-14284544
 ] 

Pengcheng Xiong commented on HIVE-6617:
---

[~brocknoland], thanks for your suggestion. Yes, I will do so in the final 
patch. Right now, I am just pushing as far as I can. Every patch before the 
last one is a WIP patch.

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9327:
--
Attachment: HIVE-9327.03.patch

Rebase patch.

 CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
 ---

 Key: HIVE-9327
 URL: https://issues.apache.org/jira/browse/HIVE-9327
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0

 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, 
 HIVE-9327.03.patch, HIVE-9327.patch


 ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
 would be ideal to remove this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Status: Patch Available  (was: Open)

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Status: Open  (was: Patch Available)

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Attachment: HIVE-6617.04.patch

rebase and resubmit patch

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9414) Fixup post HIVE-9264 - Merge encryption branch to trunk

2015-01-20 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284635#comment-14284635
 ] 

Vikram Dixit K commented on HIVE-9414:
--

Yeah. Those failures are unrelated.

 Fixup post HIVE-9264 - Merge encryption branch to trunk
 ---

 Key: HIVE-9414
 URL: https://issues.apache.org/jira/browse/HIVE-9414
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Vikram Dixit K
 Attachments: HIVE-9414.1.patch.txt


 See 
 https://issues.apache.org/jira/browse/HIVE-9264?focusedCommentId=14283223page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14283223



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9184) Modify HCatClient to support new notification methods in HiveMetaStoreClient

2015-01-20 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-9184:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed. Thanks, Alan!

 Modify HCatClient to support new notification methods in HiveMetaStoreClient
 

 Key: HIVE-9184
 URL: https://issues.apache.org/jira/browse/HIVE-9184
 Project: Hive
  Issue Type: New Feature
  Components: HCatalog
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.15.0

 Attachments: HIVE-9184.patch


 HIVE-9174 adds a new DbNotificationListener, and methods to 
 HiveMetaStoreClient to fetch the events.  The fetching of events should be 
 added to HCatClient as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9417) Fix failing test groupby_grouping_window.q on trunk

2015-01-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9417:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 Fix failing test groupby_grouping_window.q on trunk
 ---

 Key: HIVE-9417
 URL: https://issues.apache.org/jira/browse/HIVE-9417
 Project: Hive
  Issue Type: Test
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.15.0

 Attachments: HIVE-9417.patch


 Because of the successive commits of HIVE-4809 and HIVE-9347, this didn't get 
 caught in the Hive QA run. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9298) Support reading alternate timestamp formats

2015-01-20 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284585#comment-14284585
 ] 

Jason Dere commented on HIVE-9298:
--

I don't think these failures are related to the patch. 
groupby_grouping_window.q and TestMTQueries have already been failing in other 
precommit tests, and I cannot reproduce the failure in udaf_histogram_numeric.q 
on either Mac or Linux.

 Support reading alternate timestamp formats
 ---

 Key: HIVE-9298
 URL: https://issues.apache.org/jira/browse/HIVE-9298
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-9298.1.patch, HIVE-9298.2.patch


 There are some users who want to be able to parse ISO-8601 timestamps, as 
 well as to set their own custom timestamp formats. We may be able to support 
 this in LazySimpleSerDe through the use of a SerDe parameter that specifies 
 one or more alternative timestamp patterns to use when parsing timestamp 
 values from strings.
 If we are doing this, it might also be nice to work in support for HIVE-3844, 
 to parse numeric strings as timestamps by treating the numeric value as millis 
 since the Unix epoch. This can be enabled through the SerDe params as well.
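 A sketch of how such a table might be declared. Note that timestamp.formats is 
 a purely hypothetical parameter name used only to illustrate the idea; this 
 issue has not settled on an actual name:
 {noformat}
 CREATE TABLE ts_text (id INT, created TIMESTAMP)
 ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
 WITH SERDEPROPERTIES ("timestamp.formats" = "yyyy-MM-dd'T'HH:mm:ss");
 {noformat}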



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9422) LLAP: row-level vectorized SARGs

2015-01-20 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284671#comment-14284671
 ] 

Gopal V commented on HIVE-9422:
---

The ORC readers today apply SARGs for all readers - vectorized and row-readers.

This is for vectorized readers.

 LLAP: row-level vectorized SARGs
 

 Key: HIVE-9422
 URL: https://issues.apache.org/jira/browse/HIVE-9422
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin

 When VRBs are built from encoded data, sargs can be applied at a low level to 
 reduce the number of rows to process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9337) Move more hive.spark.* configurations to HiveConf

2015-01-20 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284670#comment-14284670
 ] 

Lefty Leverenz commented on HIVE-9337:
--

+1 for the configuration parameters.  Good improvements (clarifying Hive 
client), thanks.

 Move more hive.spark.* configurations to HiveConf
 -

 Key: HIVE-9337
 URL: https://issues.apache.org/jira/browse/HIVE-9337
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-9337-spark.patch, HIVE-9337.2-spark.patch


 Some hive.spark configurations have been added to HiveConf, but there are 
 some like hive.spark.log.dir that are not there.
 Also some configurations in RpcConfiguration.java might be eligible to be 
 moved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9416) Get rid of Extract Operator

2015-01-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9416:
---
Assignee: Ashutosh Chauhan
  Status: Patch Available  (was: Open)

 Get rid of Extract Operator
 ---

 Key: HIVE-9416
 URL: https://issues.apache.org/jira/browse/HIVE-9416
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9416.patch


 {{Extract Operator}} has been there for legacy reasons, but there is no 
 functionality it provides which can't be provided by {{Select Operator}}. 
 Instead of having two operators, one being a subset of the other, we should 
 just get rid of {{Extract}} and simplify our codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284699#comment-14284699
 ] 

Hive QA commented on HIVE-9402:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693435/HIVE-9402.5.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2451/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2451/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2451/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2451/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693435 - PreCommit-HIVE-TRUNK-Build

 Create GREATEST and LEAST udf
 -

 Key: HIVE-9402
 URL: https://issues.apache.org/jira/browse/HIVE-9402
 Project: Hive
  Issue Type: Task
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9402.1.patch, HIVE-9402.2.patch, HIVE-9402.3.patch, 
 HIVE-9402.4.patch, HIVE-9402.4.patch, HIVE-9402.5.patch, HIVE-9402.5.patch


 GREATEST function returns the greatest value in a list of values
 Signature: T greatest(T v1, T v2, ...)
 all values should be the same type (like in COALESCE)
 LEAST returns the least value in a list of values
 Signature: T least(T v1, T v2, ...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8966) Delta files created by hive hcatalog streaming cannot be compacted

2015-01-20 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14284935#comment-14284935
 ] 

Owen O'Malley commented on HIVE-8966:
-

After a little more thought, I'm worried that someone will accidentally create 
a ValidCompactorTxnList and get confused by the different behavior. I think it 
would make sense to move it into the compactor package to minimize the chance 
that someone uses it by mistake. 

 Delta files created by hive hcatalog streaming cannot be compacted
 --

 Key: HIVE-8966
 URL: https://issues.apache.org/jira/browse/HIVE-8966
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.14.0
 Environment: hive
Reporter: Jihong Liu
Assignee: Alan Gates
Priority: Critical
 Fix For: 0.14.1

 Attachments: HIVE-8966.2.patch, HIVE-8966.3.patch, HIVE-8966.4.patch, 
 HIVE-8966.5.patch, HIVE-8966.patch


 Hive hcatalog streaming also creates a file like bucket_n_flush_length in 
 each delta directory, where n is the bucket number. But compactor.CompactorMR 
 thinks this file also needs to be compacted. This file of course cannot be 
 compacted, so compactor.CompactorMR will not continue with the compaction. 
 In a test, after removing the bucket_n_flush_length file, the alter table 
 partition compact finished successfully. If that file is not deleted, nothing 
 gets compacted. 
 This is probably a very severe bug. Both 0.13 and 0.14 have this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-8838) Support Parquet through HCatalog

2015-01-20 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu reassigned HIVE-8838:
--

Assignee: Ferdinand Xu

 Support Parquet through HCatalog
 

 Key: HIVE-8838
 URL: https://issues.apache.org/jira/browse/HIVE-8838
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Ferdinand Xu

 Similar to HIVE-8687 for Avro we need to fix Parquet with HCatalog.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9337) Move more hive.spark.* configurations to HiveConf [Spark Branch]

2015-01-20 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-9337:

Summary: Move more hive.spark.* configurations to HiveConf [Spark Branch]  
(was: Move more hive.spark.* configurations to HiveConf)

 Move more hive.spark.* configurations to HiveConf [Spark Branch]
 

 Key: HIVE-9337
 URL: https://issues.apache.org/jira/browse/HIVE-9337
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-9337-spark.patch, HIVE-9337.2-spark.patch


 Some hive.spark configurations have been added to HiveConf, but there are 
 some like hive.spark.log.dir that are not there.
 Also some configurations in RpcConfiguration.java might be eligible to be 
 moved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9426) Merge trunk to spark 1/21/2015

2015-01-20 Thread Szehon Ho (JIRA)
Szehon Ho created HIVE-9426:
---

 Summary: Merge trunk to spark 1/21/2015
 Key: HIVE-9426
 URL: https://issues.apache.org/jira/browse/HIVE-9426
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Szehon Ho
Assignee: Szehon Ho






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

