[jira] [Commented] (HIVE-2055) Hive should add HBase classpath dependencies when available

2013-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824586#comment-13824586
 ] 

Hudson commented on HIVE-2055:
--

SUCCESS: Integrated in HBase-0.94-security #337 (See 
[https://builds.apache.org/job/HBase-0.94-security/337/])
HBASE-9165 [mapreduce] Modularize building dependency jars

Separate adding HBase and dependencies from adding other job dependencies, and
expose it as a separate method that other projects can use (for PIG-3285,
HIVE-2055). (ndimiduk: rev 1542414)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java


> Hive should add HBase classpath dependencies when available
> ---
>
> Key: HIVE-2055
> URL: https://issues.apache.org/jira/browse/HIVE-2055
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.10.0
>Reporter: sajith v
> Attachments: 
> 0001-HIVE-2055-include-hbase-dependencies-in-launch-scrip.patch, 
> HIVE-2055.patch
>
>
> Created an external table in Hive which points to an HBase table. When 
> trying to query a column by name in the select clause, the following 
> exception was thrown: (java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat, errorCode:12, 
> SQLState:42000)
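
The HBASE-9165 commit referenced above exposes the jar-shipping logic as a 
separate TableMapReduceUtil method so that projects like Hive can put the HBase 
classes (including the dependencies behind the ClassNotFoundException in this 
description) on the job classpath. A minimal sketch of how a downstream job 
setup could call it; the method name addHBaseDependencyJars and its signature 
are assumptions inferred from the commit message, not verified against the 
0.94 source.
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;

public class HBaseDependencySketch {
  // Ship HBase and its transitive dependencies with the job, without also
  // adding the caller's own job jars (the split HBASE-9165 introduces).
  public static void shipHBaseJars(Configuration jobConf) throws IOException {
    TableMapReduceUtil.addHBaseDependencyJars(jobConf);
  }
}
{code}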



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-2055) Hive should add HBase classpath dependencies when available

2013-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824558#comment-13824558
 ] 

Hudson commented on HIVE-2055:
--

FAILURE: Integrated in HBase-0.94 #1203 (See 
[https://builds.apache.org/job/HBase-0.94/1203/])
HBASE-9165 [mapreduce] Modularize building dependency jars

Separate adding HBase and dependencies from adding other job dependencies, and
expose it as a separate method that other projects can use (for PIG-3285,
HIVE-2055). (ndimiduk: rev 1542414)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java


> Hive should add HBase classpath dependencies when available
> ---
>
> Key: HIVE-2055
> URL: https://issues.apache.org/jira/browse/HIVE-2055
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.10.0
>Reporter: sajith v
> Attachments: 
> 0001-HIVE-2055-include-hbase-dependencies-in-launch-scrip.patch, 
> HIVE-2055.patch
>
>
> Created an external table in Hive which points to an HBase table. When 
> trying to query a column by name in the select clause, the following 
> exception was thrown: (java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat, errorCode:12, 
> SQLState:42000)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5601) NPE in ORC's PPD when using select * from table with where predicate

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824409#comment-13824409
 ] 

Hudson commented on HIVE-5601:
--

FAILURE: Integrated in Hive-branch-0.12-hadoop2 #23 (See 
[https://builds.apache.org/job/Hive-branch-0.12-hadoop2/23/])
HIVE-5601: NPE in ORC's PPD when using select * from table with where 
predicate pushdown (Prasanth J via Owen O'Malley and Gunther Hagleitner) 
(omalley: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542024)
* 
/hive/branches/branch-0.12/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ReaderImpl.java
* 
/hive/branches/branch-0.12/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* 
/hive/branches/branch-0.12/ql/src/test/results/clientpositive/orc_predicate_pushdown.q.out


> NPE in ORC's PPD  when using select * from table with where predicate
> -
>
> Key: HIVE-5601
> URL: https://issues.apache.org/jira/browse/HIVE-5601
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Critical
>  Labels: ORC
> Fix For: 0.13.0, 0.12.1
>
> Attachments: HIVE-5601.4-branch-0.12.patch.txt, 
> HIVE-5601.5.patch.txt, HIVE-5601.branch-0.12.2.patch.txt, 
> HIVE-5601.branch-0.12.3.patch.txt, HIVE-5601.branch-0.12.4.patch.txt, 
> HIVE-5601.branch-12.1.patch.txt, HIVE-5601.trunk.1.patch.txt, 
> HIVE-5601.trunk.2.patch.txt, HIVE-5601.trunk.3.patch.txt, 
> HIVE-5601.trunk.4.patch.txt, HIVE-5601.trunk.5.patch.txt
>
>
> ORCInputFormat has a method findIncludedColumns() which returns a boolean 
> array of included columns. For the following query 
> {code}select * from qlog_orc where id<1000 limit 10;{code}
> where all columns are selected, findIncludedColumns() returns null. This 
> results in an NPE when PPD is enabled. Following is the stack trace:
> {code}Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.planReadPartialDataStreams(RecordReaderImpl.java:2387)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:2543)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:2200)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:2573)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:2615)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:132)
>   at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rows(ReaderImpl.java:348)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:99)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:241)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:237)
>   ... 8 more{code}
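
A minimal sketch of the guard the description implies: when 
findIncludedColumns() returns null for select *, normalize it to an all-true 
array before the PPD planning code indexes into it. The names below are 
illustrative and are not the actual RecordReaderImpl code.
{code}
import java.util.Arrays;

public class IncludedColumnsSketch {
  // select * makes findIncludedColumns() return null; treat that as
  // "read every column" so planReadPartialDataStreams never dereferences null.
  static boolean[] normalizeIncluded(boolean[] included, int columnCount) {
    if (included == null) {
      boolean[] all = new boolean[columnCount];
      Arrays.fill(all, true);
      return all;
    }
    return included;
  }
}
{code}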



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5601) NPE in ORC's PPD when using select * from table with where predicate

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823765#comment-13823765
 ] 

Hudson commented on HIVE-5601:
--

FAILURE: Integrated in Hive-branch-0.12-hadoop1 #32 (See 
[https://builds.apache.org/job/Hive-branch-0.12-hadoop1/32/])
HIVE-5601: NPE in ORC's PPD when using select * from table with where 
predicate pushdown (Prasanth J via Owen O'Malley and Gunther Hagleitner) 
(omalley: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542024)
* 
/hive/branches/branch-0.12/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ReaderImpl.java
* 
/hive/branches/branch-0.12/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* 
/hive/branches/branch-0.12/ql/src/test/results/clientpositive/orc_predicate_pushdown.q.out


> NPE in ORC's PPD  when using select * from table with where predicate
> -
>
> Key: HIVE-5601
> URL: https://issues.apache.org/jira/browse/HIVE-5601
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Critical
>  Labels: ORC
> Fix For: 0.13.0, 0.12.1
>
> Attachments: HIVE-5601.4-branch-0.12.patch.txt, 
> HIVE-5601.5.patch.txt, HIVE-5601.branch-0.12.2.patch.txt, 
> HIVE-5601.branch-0.12.3.patch.txt, HIVE-5601.branch-0.12.4.patch.txt, 
> HIVE-5601.branch-12.1.patch.txt, HIVE-5601.trunk.1.patch.txt, 
> HIVE-5601.trunk.2.patch.txt, HIVE-5601.trunk.3.patch.txt, 
> HIVE-5601.trunk.4.patch.txt, HIVE-5601.trunk.5.patch.txt
>
>
> ORCInputFormat has a method findIncludedColumns() which returns a boolean 
> array of included columns. For the following query 
> {code}select * from qlog_orc where id<1000 limit 10;{code}
> where all columns are selected, findIncludedColumns() returns null. This 
> results in an NPE when PPD is enabled. Following is the stack trace:
> {code}Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.planReadPartialDataStreams(RecordReaderImpl.java:2387)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:2543)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:2200)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:2573)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:2615)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:132)
>   at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rows(ReaderImpl.java:348)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:99)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:241)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:237)
>   ... 8 more{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5648) error when casting partition column to varchar in where clause

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807434#comment-13807434
 ] 

Hudson commented on HIVE-5648:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2427 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2427/])
HIVE-5648 : error when casting partition column to varchar in where clause 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536471)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeConstantDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_varchar2.q
* /hive/trunk/ql/src/test/results/clientpositive/partition_varchar2.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorFactory.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoFactory.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/primitive/TestPrimitiveObjectInspectorFactory.java


> error when casting partition column to varchar in where clause 
> ---
>
> Key: HIVE-5648
> URL: https://issues.apache.org/jira/browse/HIVE-5648
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5648.1.patch, HIVE-5648.2.patch
>
>
> hive> select * from partition_varchar_2 where cast(dt as varchar(10)) = 
> '2000-01-01';
> FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: Internal error: Cannot find ObjectInspector  for 
> VARCHAR



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807429#comment-13807429
 ] 

Hudson commented on HIVE-5295:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2427 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2427/])
HIVE-5295 : HiveConnection#configureConnection tries to execute statement even 
after it is closed (Vaibhav Gumashta via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536533)
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


> HiveConnection#configureConnection tries to execute statement even after it 
> is closed
> -
>
> Key: HIVE-5295
> URL: https://issues.apache.org/jira/browse/HIVE-5295
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: D12957.1.patch, D12957.2.patch, D12957.3.patch, 
> HIVE-5295.5.patch, HIVE-5295.D12957.3.patch, HIVE-5295.D12957.3.patch, 
> HIVE-5295.D12957.4.patch
>
>
> HiveConnection#configureConnection tries to execute a statement even after it 
> is closed. For a remote JDBC client, it sets each conf var using 'set 
> foo=bar' by calling HiveStatement.execute for each conf var pair, but closes 
> the statement after the first iteration through the conf var pairs.
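
A minimal sketch of the corrected control flow the description implies, written 
against plain JDBC rather than the HiveConnection internals: one statement 
executes a 'set key=value' for every conf var pair, and close() runs only once, 
after the loop.
{code}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;

public class ConfigureConnectionSketch {
  static void applyConfVars(Connection conn, Map<String, String> confVars) throws SQLException {
    Statement stmt = conn.createStatement();
    try {
      for (Map.Entry<String, String> e : confVars.entrySet()) {
        // every pair goes through the same still-open statement
        stmt.execute("set " + e.getKey() + "=" + e.getValue());
      }
    } finally {
      stmt.close();  // close once, after all pairs, not after the first iteration
    }
  }
}
{code}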



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5554) add more comments to CombineHiveInputFormat.java, BucketizedHiveInputFormat.java

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807435#comment-13807435
 ] 

Hudson commented on HIVE-5554:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2427 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2427/])
HIVE-5554 : add more comments to CombineHiveInputFormat.java, 
BucketizedHiveInputFormat.java (Thejas Nair via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536517)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/BucketizedHiveInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java


> add more comments to CombineHiveInputFormat.java, 
> BucketizedHiveInputFormat.java
> 
>
> Key: HIVE-5554
> URL: https://issues.apache.org/jira/browse/HIVE-5554
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5554.1.patch
>
>
> Add more comments to CombineHiveInputFormat.java and 
> BucketizedHiveInputFormat.java to make them easier to understand.
> NO PRECOMMIT TESTS (tested build locally)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5576) Blank lines missing from .q.out files created on Windows for testcase=TestCliDriver

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807433#comment-13807433
 ] 

Hudson commented on HIVE-5576:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2427 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2427/])
HIVE-5576 : Blank lines missing from .q.out files created on Windows for 
testcase=TestCliDriver (Remus Rusanu via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536426)
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java


> Blank lines missing from .q.out files created on Windows for 
> testcase=TestCliDriver
> ---
>
> Key: HIVE-5576
> URL: https://issues.apache.org/jira/browse/HIVE-5576
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Affects Versions: 0.13.0
> Environment: Windows 8 using Hive Monarch build environment
>Reporter: Eric Hanson
>Assignee: Remus Rusanu
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5576.1.patch, vectorized_math_funcs.q, 
> vectorized_math_funcs.q.out.unix, vectorized_math_funcs.q.out.windows
>
>
> If you create a .q.out file on Windows using a command like this:
> ant test "-Dhadoop.security.version=1.1.0-SNAPSHOT" 
> "-Dhadoop.root=c:\hw\project\hadoop-monarch" "-Dresolvers=internal" 
> "-Dhadoop-0.20S.version=1.1.0-SNAPSHOT" "-Dhadoop.mr.rev=20S" 
> "-Dhive.support.concurrency=false" "-Dshims.include=0.20S" 
> "-Dtest.continue.on.failure=true" "-Dtest.halt.on.failure=no" 
> "-Dtest.print.classpath=true"  "-Dtestcase=TestCliDriver" 
> "-Dqfile=vectorized_math_funcs.q,vectorized_string_funcs.q,vectorized_casts.q"
>  "-Doverwrite=true" "-Dtest.silent=false"
> Then the .q.out files generated in the hive directory under
> ql\src\test\results\clientpositive
> are missing blank lines.
> So, the .q tests will pass on your Windows machine. But when you upload them 
> in a patch, they fail on the automated build server. See HIVE-5517 for an 
> example. HIVE-5517.3.patch has .q.out files with missing blank lines. 
> Hive-5517.4.patch has .q.out files created on a Linux or Mac system. Those 
> have blank lines.
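
The symptom points at platform-specific line handling when QTestUtil writes the 
golden files. As a hedged illustration only (this is not the committed QTestUtil 
change, and the real root cause may differ), normalizing Windows line separators 
before writing a .q.out would keep the output byte-identical to results 
generated on Linux or Mac:
{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class QOutNormalizeSketch {
  // Rewrite a .q.out file with Unix line endings so output produced on
  // Windows diffs cleanly against results committed from Linux/Mac machines.
  static void normalize(String qOutFile) throws IOException {
    Path p = Paths.get(qOutFile);
    String text = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
    Files.write(p, text.replace("\r\n", "\n").getBytes(StandardCharsets.UTF_8));
  }
}
{code}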



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807428#comment-13807428
 ] 

Hudson commented on HIVE-5666:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2427 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2427/])
HIVE-5666 : use Path instead of String for IOContext.inputPath (Thejas Nair via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536478)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveContextAwareRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/IOContext.java


> use Path instead of String for IOContext.inputPath
> --
>
> Key: HIVE-5666
> URL: https://issues.apache.org/jira/browse/HIVE-5666
> Project: Hive
>  Issue Type: Improvement
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5666.1.patch
>
>
> The Path is converted to a String in HiveContextAwareRecordReader to be stored 
> in IOContext.inputPath; then, in MapOperator, normalizePath is called on it, 
> which converts it back to a Path. 
> Path creation is expensive, so it is better to use Path instead of String 
> throughout the call stack.
> This is also a step towards HIVE-3616.
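
A minimal before/after sketch of the round trip the description complains about; 
the method names are illustrative, not the actual IOContext or MapOperator code.
{code}
import org.apache.hadoop.fs.Path;

public class InputPathSketch {
  // Old shape: IOContext holds a String, so every consumer rebuilds a Path.
  static Path normalizeFromString(String inputPath) {
    return new Path(inputPath);   // Path construction on every call
  }

  // New shape: IOContext holds the Path itself and hands the same object on.
  static Path normalizeFromPath(Path inputPath) {
    return inputPath;             // no per-call construction
  }
}
{code}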



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5450) pTest2 TestReportParser is failing due to .svn directory

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807430#comment-13807430
 ] 

Hudson commented on HIVE-5450:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2427 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2427/])
HIVE-5450 - pTest2 TestReportParser is failing due to .svn directory (Brock 
Noland reviewed by Ashutosh Chauhan) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536508)
* 
/hive/trunk/testutils/ptest2/src/test/java/org/apache/hive/ptest/execution/TestReportParser.java


> pTest2 TestReportParser is failing due to .svn directory
> 
>
> Key: HIVE-5450
> URL: https://issues.apache.org/jira/browse/HIVE-5450
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Affects Versions: 0.13.0
>Reporter: Ashutosh Chauhan
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5450.patch
>
>
> The following test failed when I ran mvn package:
> {code}
> Running org.apache.hive.ptest.execution.TestPhase
> 2013-10-04 22:57:20,150 ERROR HostExecutor$5.call:379 Aborting drone during 
> exec echo org.apache.hive.ptest.execution.AbortDroneException: Drone Drone 
> [user=someuser, host=somehost, instance=0] exited with 255: SSHResult 
> [command=echo, getExitCode()=255, getException()=null, getUser()=someuser, 
> getHost()=somehost, getInstance()=0]
>   at 
> org.apache.hive.ptest.execution.HostExecutor$5.call(HostExecutor.java:379)
>   at 
> org.apache.hive.ptest.execution.HostExecutor$5.call(HostExecutor.java:368)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> Tests in error: 
>   test(org.apache.hive.ptest.execution.TestReportParser): 
> src/test/resources/test-outputs/.svn (Is a directory)
> Tests run: 44, Failures: 0, Errors: 1, Skipped: 0
> {code}
> NO PRECOMMIT TESTS
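
A minimal sketch of the kind of filtering the failure suggests: when scanning 
src/test/resources/test-outputs, only regular files should be handed to the 
report parser, so a stray .svn directory is skipped instead of being opened as 
a report. Plain java.io here; this is not necessarily how TestReportParser was 
actually changed.
{code}
import java.io.File;
import java.io.FileFilter;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReportFileListingSketch {
  // List only regular files so metadata directories like .svn are ignored.
  static List<File> listReportFiles(File outputDir) {
    File[] files = outputDir.listFiles(new FileFilter() {
      public boolean accept(File f) {
        return f.isFile();   // skips .svn and any other sub-directory
      }
    });
    return files == null ? new ArrayList<File>() : Arrays.asList(files);
  }
}
{code}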



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807432#comment-13807432
 ] 

Hudson commented on HIVE-5653:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2427 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2427/])
HIVE-5653 : Vectorized Shuffle Join produces incorrect results (Remus Rusanu 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536474)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java
* /hive/trunk/ql/src/test/queries/clientpositive/vectorized_shufflejoin.q
* /hive/trunk/ql/src/test/results/clientpositive/vectorized_shufflejoin.q.out


> Vectorized Shuffle Join produces incorrect results
> --
>
> Key: HIVE-5653
> URL: https://issues.apache.org/jira/browse/HIVE-5653
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Fix For: 0.13.0
>
> Attachments: HIVE-5653.1.patch
>
>
> Vectorized shuffle join should work out-of-the-box, but it produces an empty 
> result set. Investigating.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5656) Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807431#comment-13807431
 ] 

Hudson commented on HIVE-5656:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2427 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2427/])
HIVE-5656 : Hive produces unclear, confusing SemanticException when dealing 
with mod or pmod by zero (Xuefu Zhang via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536476)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMod.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFPosMod.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFOPMod.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFPosMod.java
* /hive/trunk/ql/src/test/results/clientpositive/vectorization_14.q.out


> Hive produces unclear, confusing SemanticException when dealing with mod or 
> pmod by zero
> 
>
> Key: HIVE-5656
> URL: https://issues.apache.org/jira/browse/HIVE-5656
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5656.patch
>
>
> {code}
> hive> select 5%0 from tmp2 limit 1;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
> public org.apache.hadoop.io.IntWritable 
> org.apache.hadoop.hive.ql.udf.UDFOPMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
>   on object org.apache.hadoop.hive.ql.udf.UDFOPMod@21b594a9 of class 
> org.apache.hadoop.hive.ql.udf.UDFOPMod with arguments 
> {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
> size 2
> hive> select pmod(5,0) from tmp2 limit 1;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
> public org.apache.hadoop.io.IntWritable 
> org.apache.hadoop.hive.ql.udf.UDFPosMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
>   on object org.apache.hadoop.hive.ql.udf.UDFPosMod@174ed99a of class 
> org.apache.hadoop.hive.ql.udf.UDFPosMod with arguments 
> {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
> size 2
> {code}
> Exception stack:
> {code}
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1112)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:181)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:8870)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:8826)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2734)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2531)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7606)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7562)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8365)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8591)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:351)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1004)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
> at 
> org.apache.hadoop.hive.cli.Cl
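
A hedged sketch of the shape of fix such an error invites (an assumption about 
the intended semantics, not the committed UDFOPMod/UDFPosMod change): return 
NULL when the divisor is zero instead of letting the ArithmeticException bubble 
up and be rewrapped as the SemanticException shown above.
{code}
import org.apache.hadoop.io.IntWritable;

public class ModByZeroSketch {
  private final IntWritable result = new IntWritable();

  // NULL on mod-by-zero rather than throwing; the thrown ArithmeticException
  // is what surfaces as SemanticException [Error 10014] above.
  public IntWritable evaluate(IntWritable a, IntWritable b) {
    if (a == null || b == null || b.get() == 0) {
      return null;
    }
    result.set(a.get() % b.get());
    return result;
  }
}
{code}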

[jira] [Commented] (HIVE-5576) Blank lines missing from .q.out files created on Windows for testcase=TestCliDriver

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807406#comment-13807406
 ] 

Hudson commented on HIVE-5576:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #217 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/217/])
HIVE-5576 : Blank lines missing from .q.out files created on Windows for 
testcase=TestCliDriver (Remus Rusanu via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536426)
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java


> Blank lines missing from .q.out files created on Windows for 
> testcase=TestCliDriver
> ---
>
> Key: HIVE-5576
> URL: https://issues.apache.org/jira/browse/HIVE-5576
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Affects Versions: 0.13.0
> Environment: Windows 8 using Hive Monarch build environment
>Reporter: Eric Hanson
>Assignee: Remus Rusanu
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5576.1.patch, vectorized_math_funcs.q, 
> vectorized_math_funcs.q.out.unix, vectorized_math_funcs.q.out.windows
>
>
> If you create a .q.out file on Windows using a command like this:
> ant test "-Dhadoop.security.version=1.1.0-SNAPSHOT" 
> "-Dhadoop.root=c:\hw\project\hadoop-monarch" "-Dresolvers=internal" 
> "-Dhadoop-0.20S.version=1.1.0-SNAPSHOT" "-Dhadoop.mr.rev=20S" 
> "-Dhive.support.concurrency=false" "-Dshims.include=0.20S" 
> "-Dtest.continue.on.failure=true" "-Dtest.halt.on.failure=no" 
> "-Dtest.print.classpath=true"  "-Dtestcase=TestCliDriver" 
> "-Dqfile=vectorized_math_funcs.q,vectorized_string_funcs.q,vectorized_casts.q"
>  "-Doverwrite=true" "-Dtest.silent=false"
> Then the .q.out files generated in the hive directory under
> ql\src\test\results\clientpositive
> are missing blank lines.
> So, the .q tests will pass on your Windows machine. But when you upload them 
> in a patch, they fail on the automated build server. See HIVE-5517 for an 
> example. HIVE-5517.3.patch has .q.out files with missing blank lines. 
> Hive-5517.4.patch has .q.out files created on a Linux or Mac system. Those 
> have blank lines.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5656) Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807404#comment-13807404
 ] 

Hudson commented on HIVE-5656:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #217 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/217/])
HIVE-5656 : Hive produces unclear, confusing SemanticException when dealing 
with mod or pmod by zero (Xuefu Zhang via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536476)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMod.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFPosMod.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFOPMod.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFPosMod.java
* /hive/trunk/ql/src/test/results/clientpositive/vectorization_14.q.out


> Hive produces unclear, confusing SemanticException when dealing with mod or 
> pmod by zero
> 
>
> Key: HIVE-5656
> URL: https://issues.apache.org/jira/browse/HIVE-5656
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5656.patch
>
>
> {code}
> hive> select 5%0 from tmp2 limit 1;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
> public org.apache.hadoop.io.IntWritable 
> org.apache.hadoop.hive.ql.udf.UDFOPMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
>   on object org.apache.hadoop.hive.ql.udf.UDFOPMod@21b594a9 of class 
> org.apache.hadoop.hive.ql.udf.UDFOPMod with arguments 
> {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
> size 2
> hive> select pmod(5,0) from tmp2 limit 1;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
> public org.apache.hadoop.io.IntWritable 
> org.apache.hadoop.hive.ql.udf.UDFPosMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
>   on object org.apache.hadoop.hive.ql.udf.UDFPosMod@174ed99a of class 
> org.apache.hadoop.hive.ql.udf.UDFPosMod with arguments 
> {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
> size 2
> {code}
> Exception stack:
> {code}
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1112)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:181)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:8870)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:8826)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2734)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2531)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7606)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7562)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8365)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8591)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:351)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1004)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
> at 
> org.apache.hado

[jira] [Commented] (HIVE-5648) error when casting partition column to varchar in where clause

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807407#comment-13807407
 ] 

Hudson commented on HIVE-5648:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #217 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/217/])
HIVE-5648 : error when casting partition column to varchar in where clause 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536471)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeConstantDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_varchar2.q
* /hive/trunk/ql/src/test/results/clientpositive/partition_varchar2.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorFactory.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoFactory.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/primitive/TestPrimitiveObjectInspectorFactory.java


> error when casting partition column to varchar in where clause 
> ---
>
> Key: HIVE-5648
> URL: https://issues.apache.org/jira/browse/HIVE-5648
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5648.1.patch, HIVE-5648.2.patch
>
>
> hive> select * from partition_varchar_2 where cast(dt as varchar(10)) = 
> '2000-01-01';
> FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: Internal error: Cannot find ObjectInspector  for 
> VARCHAR



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807402#comment-13807402
 ] 

Hudson commented on HIVE-5666:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #217 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/217/])
HIVE-5666 : use Path instead of String for IOContext.inputPath (Thejas Nair via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536478)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveContextAwareRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/IOContext.java


> use Path instead of String for IOContext.inputPath
> --
>
> Key: HIVE-5666
> URL: https://issues.apache.org/jira/browse/HIVE-5666
> Project: Hive
>  Issue Type: Improvement
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5666.1.patch
>
>
> The Path is converted to a String in HiveContextAwareRecordReader to be stored 
> in IOContext.inputPath; then, in MapOperator, normalizePath is called on it, 
> which converts it back to a Path. 
> Path creation is expensive, so it is better to use Path instead of String 
> throughout the call stack.
> This is also a step towards HIVE-3616.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807405#comment-13807405
 ] 

Hudson commented on HIVE-5653:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #217 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/217/])
HIVE-5653 : Vectorized Shuffle Join produces incorrect results (Remus Rusanu 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536474)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java
* /hive/trunk/ql/src/test/queries/clientpositive/vectorized_shufflejoin.q
* /hive/trunk/ql/src/test/results/clientpositive/vectorized_shufflejoin.q.out


> Vectorized Shuffle Join produces incorrect results
> --
>
> Key: HIVE-5653
> URL: https://issues.apache.org/jira/browse/HIVE-5653
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Fix For: 0.13.0
>
> Attachments: HIVE-5653.1.patch
>
>
> Vectorized shuffle join should work out-of-the-box, but it produces an empty 
> result set. Investigating.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807403#comment-13807403
 ] 

Hudson commented on HIVE-5667:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #217 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/217/])
HIVE-5667 - ThriftCLIService log messages jumbled up (Vaibhav Gumashta via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536361)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java


> ThriftCLIService log messages jumbled up
> 
>
> Key: HIVE-5667
> URL: https://issues.apache.org/jira/browse/HIVE-5667
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5667.1.patch
>
>
> ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5656) Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807270#comment-13807270
 ] 

Hudson commented on HIVE-5656:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #526 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/526/])
HIVE-5656 : Hive produces unclear, confusing SemanticException when dealing 
with mod or pmod by zero (Xuefu Zhang via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536476)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMod.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFPosMod.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFOPMod.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFPosMod.java
* /hive/trunk/ql/src/test/results/clientpositive/vectorization_14.q.out


> Hive produces unclear, confusing SemanticException when dealing with mod or 
> pmod by zero
> 
>
> Key: HIVE-5656
> URL: https://issues.apache.org/jira/browse/HIVE-5656
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5656.patch
>
>
> {code}
> hive> select 5%0 from tmp2 limit 1;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
> public org.apache.hadoop.io.IntWritable 
> org.apache.hadoop.hive.ql.udf.UDFOPMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
>   on object org.apache.hadoop.hive.ql.udf.UDFOPMod@21b594a9 of class 
> org.apache.hadoop.hive.ql.udf.UDFOPMod with arguments 
> {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
> size 2
> hive> select pmod(5,0) from tmp2 limit 1;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
> public org.apache.hadoop.io.IntWritable 
> org.apache.hadoop.hive.ql.udf.UDFPosMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
>   on object org.apache.hadoop.hive.ql.udf.UDFPosMod@174ed99a of class 
> org.apache.hadoop.hive.ql.udf.UDFPosMod with arguments 
> {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
> size 2
> {code}
> Exception stack:
> {code}
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1112)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:181)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:8870)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:8826)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2734)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2531)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7606)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7562)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8365)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8591)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:351)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1004)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
> at 
> org.apache.hadoop.hive.cli.

[jira] [Commented] (HIVE-5554) add more comments to CombineHiveInputFormat.java, BucketizedHiveInputFormat.java

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807274#comment-13807274
 ] 

Hudson commented on HIVE-5554:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #526 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/526/])
HIVE-5554 : add more comments to CombineHiveInputFormat.java, 
BucketizedHiveInputFormat.java (Thejas Nair via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536517)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/BucketizedHiveInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java


> add more comments to CombineHiveInputFormat.java, 
> BucketizedHiveInputFormat.java
> 
>
> Key: HIVE-5554
> URL: https://issues.apache.org/jira/browse/HIVE-5554
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5554.1.patch
>
>
> Add more comments to CombineHiveInputFormat.java and 
> BucketizedHiveInputFormat.java to make them easier to understand.
> NO PRECOMMIT TESTS (tested build locally)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807271#comment-13807271
 ] 

Hudson commented on HIVE-5653:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #526 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/526/])
HIVE-5653 : Vectorized Shuffle Join produces incorrect results (Remus Rusanu 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536474)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java
* /hive/trunk/ql/src/test/queries/clientpositive/vectorized_shufflejoin.q
* /hive/trunk/ql/src/test/results/clientpositive/vectorized_shufflejoin.q.out


> Vectorized Shuffle Join produces incorrect results
> --
>
> Key: HIVE-5653
> URL: https://issues.apache.org/jira/browse/HIVE-5653
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Fix For: 0.13.0
>
> Attachments: HIVE-5653.1.patch
>
>
> Vectorized shuffle join should work out-of-the-box, but it produces an empty 
> result set. Investigating.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5648) error when casting partition column to varchar in where clause

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807273#comment-13807273
 ] 

Hudson commented on HIVE-5648:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #526 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/526/])
HIVE-5648 : error when casting partition column to varchar in where clause 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536471)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeConstantDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_varchar2.q
* /hive/trunk/ql/src/test/results/clientpositive/partition_varchar2.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorFactory.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoFactory.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/primitive/TestPrimitiveObjectInspectorFactory.java


> error when casting partition column to varchar in where clause 
> ---
>
> Key: HIVE-5648
> URL: https://issues.apache.org/jira/browse/HIVE-5648
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5648.1.patch, HIVE-5648.2.patch
>
>
> hive> select * from partition_varchar_2 where cast(dt as varchar(10)) = 
> '2000-01-01';
> FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: Internal error: Cannot find ObjectInspector  for 
> VARCHAR



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5576) Blank lines missing from .q.out files created on Windows for testcase=TestCliDriver

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807272#comment-13807272
 ] 

Hudson commented on HIVE-5576:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #526 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/526/])
HIVE-5576 : Blank lines missing from .q.out files created on Windows for 
testcase=TestCliDriver (Remus Rusanu via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536426)
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java


> Blank lines missing from .q.out files created on Windows for 
> testcase=TestCliDriver
> ---
>
> Key: HIVE-5576
> URL: https://issues.apache.org/jira/browse/HIVE-5576
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Affects Versions: 0.13.0
> Environment: Windows 8 using Hive Monarch build environment
>Reporter: Eric Hanson
>Assignee: Remus Rusanu
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5576.1.patch, vectorized_math_funcs.q, 
> vectorized_math_funcs.q.out.unix, vectorized_math_funcs.q.out.windows
>
>
> If you create a .q.out file on Windows using a command like this:
> ant test "-Dhadoop.security.version=1.1.0-SNAPSHOT" 
> "-Dhadoop.root=c:\hw\project\hadoop-monarch" "-Dresolvers=internal" 
> "-Dhadoop-0.20S.version=1.1.0-SNAPSHOT" "-Dhadoop.mr.rev=20S" 
> "-Dhive.support.concurrency=false" "-Dshims.include=0.20S" 
> "-Dtest.continue.on.failure=true" "-Dtest.halt.on.failure=no" 
> "-Dtest.print.classpath=true"  "-Dtestcase=TestCliDriver" 
> "-Dqfile=vectorized_math_funcs.q,vectorized_string_funcs.q,vectorized_casts.q"
>  "-Doverwrite=true" "-Dtest.silent=false"
> Then the .q.out files generated in the hive directory under
> ql\src\test\results\clientpositive
> are missing blank lines.
> So, the .q tests will pass on your Windows machine. But when you upload them 
> in a patch, they fail on the automated build server. See HIVE-5517 for an 
> example. HIVE-5517.3.patch has .q.out files with missing blank lines. 
> Hive-5517.4.patch has .q.out files created on a Linux or Mac system. Those 
> have blank lines.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807275#comment-13807275
 ] 

Hudson commented on HIVE-5667:
--

ABORTED: Integrated in Hive-trunk-h0.21 #2426 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2426/])
HIVE-5667 - ThriftCLIService log messages jumbled up (Vaibhav Gumashta via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536361)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java


> ThriftCLIService log messages jumbled up
> 
>
> Key: HIVE-5667
> URL: https://issues.apache.org/jira/browse/HIVE-5667
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5667.1.patch
>
>
> ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5450) pTest2 TestReportParser is failing due to .svn directory

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807269#comment-13807269
 ] 

Hudson commented on HIVE-5450:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #526 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/526/])
HIVE-5450 - pTest2 TestReportParser is failing due to .svn directory (Brock 
Noland reviewed by Ashutosh Chauhan) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536508)
* 
/hive/trunk/testutils/ptest2/src/test/java/org/apache/hive/ptest/execution/TestReportParser.java


> pTest2 TestReportParser is failing due to .svn directory
> 
>
> Key: HIVE-5450
> URL: https://issues.apache.org/jira/browse/HIVE-5450
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Affects Versions: 0.13.0
>Reporter: Ashutosh Chauhan
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5450.patch
>
>
> The following test failed when I ran mvn package:
> {code}
> Running org.apache.hive.ptest.execution.TestPhase
> 2013-10-04 22:57:20,150 ERROR HostExecutor$5.call:379 Aborting drone during 
> exec echo org.apache.hive.ptest.execution.AbortDroneException: Drone Drone 
> [user=someuser, host=somehost, instance=0] exited with 255: SSHResult 
> [command=echo, getExitCode()=255, getException()=null, getUser()=someuser, 
> getHost()=somehost, getInstance()=0]
>   at 
> org.apache.hive.ptest.execution.HostExecutor$5.call(HostExecutor.java:379)
>   at 
> org.apache.hive.ptest.execution.HostExecutor$5.call(HostExecutor.java:368)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> Tests in error: 
>   test(org.apache.hive.ptest.execution.TestReportParser): 
> src/test/resources/test-outputs/.svn (Is a directory)
> Tests run: 44, Failures: 0, Errors: 1, Skipped: 0
> {code}
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807267#comment-13807267
 ] 

Hudson commented on HIVE-5666:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #526 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/526/])
HIVE-5666 : use Path instead of String for IOContext.inputPath (Thejas Nair via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536478)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveContextAwareRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/IOContext.java


> use Path instead of String for IOContext.inputPath
> --
>
> Key: HIVE-5666
> URL: https://issues.apache.org/jira/browse/HIVE-5666
> Project: Hive
>  Issue Type: Improvement
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5666.1.patch
>
>
> Path is converted to a String in HiveContextAwareRecordReader to be stored in 
> IOContext.inputPath; then in MapOperator, normalizePath gets called on it, 
> which converts it back to a Path. 
> Path creation is expensive, so it is better to use Path instead of String 
> throughout the call stack.
> This is also a step towards HIVE-3616.
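As a rough sketch of the idea above (simplified, hypothetical class and field names; not the actual Hive code), the change amounts to keeping the Path object end-to-end instead of round-tripping through a String:

{code}
// Illustrative sketch only: store the Path itself so downstream operators
// never have to rebuild it from a String.
import org.apache.hadoop.fs.Path;

class IOContextSketch {
  // before: private String inputPath;   // forced "new Path(inputPath)" downstream
  private Path inputPath;                // after: keep the Path object as-is

  void setInputPath(Path p) { this.inputPath = p; }
  Path getInputPath() { return inputPath; }
}
{code}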



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807268#comment-13807268
 ] 

Hudson commented on HIVE-5667:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #526 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/526/])
HIVE-5667 - ThriftCLIService log messages jumbled up (Vaibhav Gumashta via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536361)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java


> ThriftCLIService log messages jumbled up
> 
>
> Key: HIVE-5667
> URL: https://issues.apache.org/jira/browse/HIVE-5667
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5667.1.patch
>
>
> ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807053#comment-13807053
 ] 

Hudson commented on HIVE-5667:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #154 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/154/])
HIVE-5667 - ThriftCLIService log messages jumbled up (Vaibhav Gumashta via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536361)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java


> ThriftCLIService log messages jumbled up
> 
>
> Key: HIVE-5667
> URL: https://issues.apache.org/jira/browse/HIVE-5667
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5667.1.patch
>
>
> ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806578#comment-13806578
 ] 

Hudson commented on HIVE-3976:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #524 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/524/])
HIVE-3976 - Support specifying scale and precision with Hive decimal type 
(Xuefu Zhang via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536151)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java
* /hive/trunk/data/files/kv9.txt
* 
/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericOpMethodResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestHiveDecimalParse.java
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_join.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* /hive/trunk/ql/src/test/queries/clientpositive/ptf_decimal.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_pmod.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_double.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_float.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_string.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_expressions.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_multipartitioning.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_navfn.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_ntile.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_rank.q
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_1.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_2.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_4.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_5.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/wrong_column_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.out
* /hive/t

[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806579#comment-13806579
 ] 

Hudson commented on HIVE-4974:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #524 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/524/])
HIVE-4974 - JDBC2 statements and result sets are not able to return their 
parents (Chris Dome via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536153)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974.2-trunk.patch.txt, 
> HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
> HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "not supported" SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.
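A hedged sketch of the proposed fix (simplified names, not the actual Hive JDBC classes): the statement keeps a reference to the connection that created it, so getConnection() can return it instead of throwing.

{code}
// Illustrative sketch only: the creating connection is passed into the
// constructor and handed back from getConnection().
import java.sql.Connection;

class StatementSketch {
  private final Connection parent;

  StatementSketch(Connection parent) {  // supplied by the connection that creates it
    this.parent = parent;
  }

  Connection getConnection() {          // previously: threw a "not supported" SQLException
    return parent;
  }
}
{code}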



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806536#comment-13806536
 ] 

Hudson commented on HIVE-4974:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2424 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2424/])
HIVE-4974 - JDBC2 statements and result sets are not able to return their 
parents (Chris Dome via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536153)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974.2-trunk.patch.txt, 
> HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
> HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "not supported" SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806535#comment-13806535
 ] 

Hudson commented on HIVE-3976:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2424 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2424/])
HIVE-3976 - Support specifying scale and precision with Hive decimal type 
(Xuefu Zhang via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536151)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java
* /hive/trunk/data/files/kv9.txt
* 
/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericOpMethodResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestHiveDecimalParse.java
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_join.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* /hive/trunk/ql/src/test/queries/clientpositive/ptf_decimal.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_pmod.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_double.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_float.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_string.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_expressions.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_multipartitioning.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_navfn.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_ntile.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_rank.q
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_1.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_2.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_4.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_5.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/wrong_column_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.out
* /hive/tru

[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806432#comment-13806432
 ] 

Hudson commented on HIVE-4974:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #216 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/216/])
HIVE-4974 - JDBC2 statements and result sets are not able to return their 
parents (Chris Dome via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536153)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974.2-trunk.patch.txt, 
> HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
> HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "not supported" SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806431#comment-13806431
 ] 

Hudson commented on HIVE-3976:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #216 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/216/])
HIVE-3976 - Support specifying scale and precision with Hive decimal type 
(Xuefu Zhang via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536151)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java
* /hive/trunk/data/files/kv9.txt
* 
/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericOpMethodResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestHiveDecimalParse.java
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_join.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* /hive/trunk/ql/src/test/queries/clientpositive/ptf_decimal.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_pmod.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_double.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_float.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_string.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_expressions.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_multipartitioning.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_navfn.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_ntile.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_rank.q
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_1.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_2.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_4.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_5.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/wrong_column_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.o

[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806375#comment-13806375
 ] 

Hudson commented on HIVE-3976:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #153 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/153/])
HIVE-3976 - Support specifying scale and precision with Hive decimal type 
(Xuefu Zhang via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536151)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java
* /hive/trunk/data/files/kv9.txt
* 
/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericOpMethodResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestHiveDecimalParse.java
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_join.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* /hive/trunk/ql/src/test/queries/clientpositive/ptf_decimal.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_pmod.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_double.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_float.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_string.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_expressions.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_multipartitioning.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_navfn.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_ntile.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_rank.q
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_1.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_2.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_4.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_5.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/wrong_column_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.o

[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806376#comment-13806376
 ] 

Hudson commented on HIVE-4974:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #153 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/153/])
HIVE-4974 - JDBC2 statements and result sets are not able to return their 
parents (Chris Dome via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536153)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974.2-trunk.patch.txt, 
> HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
> HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "not supported" SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5619) Allow concat() to accept mixed string/binary args

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806278#comment-13806278
 ] 

Hudson commented on HIVE-5619:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5619 : Allow concat() to accept mixed string/binary args (Jason Dere via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535621)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_concat.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_concat.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java


> Allow concat() to accept mixed string/binary args
> -
>
> Key: HIVE-5619
> URL: https://issues.apache.org/jira/browse/HIVE-5619
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5619.1.patch
>
>
> concat() is currently strict about allowing either all binary or all 
> non-binary arguments. Loosen this to permit mixed params.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5511) percentComplete returned by job status from WebHCat is null

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806276#comment-13806276
 ] 

Hudson commented on HIVE-5511:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5511 : percentComplete returned by job status from WebHCat is null (Eugene 
Koifman via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535796)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission_streaming.conf
* /hive/trunk/hcatalog/webhcat/svr/src/main/bin/webhcat_config.sh
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/config/override-container-log4j.properties
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobSubmissionConstants.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LaunchMapper.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTrivialExecService.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/mapred/WebHCatJTShim20S.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/mapred/WebHCatJTShim23.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> percentComplete returned by job status from WebHCat is null
> ---
>
> Key: HIVE-5511
> URL: https://issues.apache.org/jira/browse/HIVE-5511
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5511.3.patch, HIVE-5511.5.patch
>
>
> In hadoop1 the logging from MR is sent to stderr. In hadoop2, by default, it 
> goes to syslog. templeton.tool.LaunchMapper expects to see the output on stderr 
> to produce 'percentComplete' in the job status.
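As a rough illustration (the regex and class name below are hypothetical, not the actual WebHCat code), percentComplete comes from scraping MR progress lines out of the child job's output, so when those lines land in syslog instead of stderr there is nothing to parse:

{code}
// Illustrative sketch only: scan a child-job log line for "map X% reduce Y%".
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class PercentCompleteSketch {
  private static final Pattern PROGRESS =
      Pattern.compile("map\\s+(\\d+)%\\s+reduce\\s+(\\d+)%");

  // Returns e.g. "map 50% reduce 0%", or null when no progress line is seen
  // (which is what happens when MR logs go to syslog instead of stderr).
  static String extract(String logLine) {
    Matcher m = PROGRESS.matcher(logLine);
    return m.find() ? "map " + m.group(1) + "% reduce " + m.group(2) + "%" : null;
  }
}
{code}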



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5430) Refactor VectorizationContext and handle NOT expression with nulls.

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806289#comment-13806289
 ] 

Hudson commented on HIVE-5430:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5430 : Refactor VectorizationContext and handle NOT expression with nulls. 
(Jitendra Nath Pandey via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535055)
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareScalar.txt
* /hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryFunc.txt
* /hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryMinus.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ScalarArithmeticColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/TestTemplates/TestColumnScalarFilterVectorExpressionEvaluation.txt
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorFilterOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSelectOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedExpressions.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/AbstractFilterStringColLikeStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColAndCol.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColOrCol.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ConstantVectorExpression.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColAndScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColOrScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprAndExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprOrExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterNotExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarAndColumn.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarOrColumn.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColLikeStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColRegExpStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseDoubleToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseLongToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLongToString.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerDoubleToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerLongToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ex

[jira] [Commented] (HIVE-5552) Merging of QBJoinTrees doesn't handle filter pushdowns correctly

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806294#comment-13806294
 ] 

Hudson commented on HIVE-5552:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5552 : Merging of QBJoinTrees doesnt handle filter pushdowns correctly 
(Harish Butani via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535858)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_merging.q
* /hive/trunk/ql/src/test/results/clientpositive/join_merging.q.out


> Merging of QBJoinTrees doesn't handle filter pushdowns correctly
> 
>
> Key: HIVE-5552
> URL: https://issues.apache.org/jira/browse/HIVE-5552
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-5552.1.patch, HIVE-5552.2.patch
>
>
> The following query fails:
> (this is based on the schema from auto_join_filters.q)
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b on a.value = b.value  RIGHT OUTER 
> JOIN myinput1 c 
>  ON 
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Whereas this query succeeds
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b RIGHT OUTER JOIN myinput1 c 
>  ON  a.value = b.value and
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Pushing the first condition to the first join triggers a merge of the 2 
> QBJoinTrees. During the merge, all the right-side filters identified for pushing 
> are assumed to refer to the merging table (b in this case). But the pushable 
> filters can refer to any left table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5216) Need to annotate public API in HCatalog

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806286#comment-13806286
 ] 

Hudson commented on HIVE-5216:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5216 : Need to annotate public API in HCatalog (Eugene Koifman via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535535)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/HCatRecord.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/schema/HCatSchema.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatWriter.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatStorer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java


> Need to annotate public API in HCatalog
> ---
>
> Key: HIVE-5216
> URL: https://issues.apache.org/jira/browse/HIVE-5216
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5216.2.patch, HIVE-5216.patch
>
>
> need to annotate which API is considered public using something like
> @InterfaceAudience.Public
> @InterfaceStability.Evolving
> Currently this is what is considered (at a minimum) public API
> HCatLoader
> HCatStorer
> HCatInputFormat
> HCatOutputFormat
> HCatReader
> HCatWriter
> HCatRecord
> HCatSchema
> This is needed so that clients/dependent projects know which API they can 
> rely on and which can change w/o notice.
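A small sketch of how such an annotation looks on a class, assuming the Hadoop classification annotations are the ones used (the class name here is purely illustrative):

{code}
// Illustrative sketch only; assumes org.apache.hadoop.classification annotations.
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Public      // external clients may depend on this class
@InterfaceStability.Evolving   // the API may still change between releases
public class ExamplePublicApi {
}
{code}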



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5599) Change default logging level to INFO

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806287#comment-13806287
 ] 

Hudson commented on HIVE-5599:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5599 - Change default logging level to INFO (Brock Noland, Reviewed by 
Thejas M Nair) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535047)
* /hive/trunk/common/src/java/conf/hive-log4j.properties


> Change default logging level to INFO
> 
>
> Key: HIVE-5599
> URL: https://issues.apache.org/jira/browse/HIVE-5599
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5599.patch
>
>
> The default logging level is warn:
> https://github.com/apache/hive/blob/trunk/common/src/java/conf/hive-log4j.properties#L19
> but hive logs lots of good information at INFO level. Additionally most 
> hadoop projects log at INFO by default. Let's change the logging level to 
> INFO by default.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5625) Fix issue with metastore version restriction test.

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806281#comment-13806281
 ] 

Hudson commented on HIVE-5625:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5625 - Fix issue with metastore version restriction test. (Vikram Dixit K 
via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535402)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java


> Fix issue with metastore version restriction test.
> --
>
> Key: HIVE-5625
> URL: https://issues.apache.org/jira/browse/HIVE-5625
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.13.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0
>
> Attachments: HIVE-5625.1.patch
>
>
> Based on Brock's comments, the change made in HIVE-5403 changed the nature of 
> the test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5577) Remove TestNegativeCliDriver script_broken_pipe1

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806288#comment-13806288
 ] 

Hudson commented on HIVE-5577:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5577 : Remove TestNegativeCliDriver script_broken_pipe1 (Brock Noland via 
Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535643)
* /hive/trunk/ql/src/test/queries/clientnegative/script_broken_pipe1.q
* /hive/trunk/ql/src/test/results/clientnegative/script_broken_pipe1.q.out


> Remove TestNegativeCliDriver script_broken_pipe1
> 
>
> Key: HIVE-5577
> URL: https://issues.apache.org/jira/browse/HIVE-5577
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5577.1.patch.txt
>
>
> TestNegativeCliDriver script_broken_pipe1 is extremely flaky and not a 
> terribly important test. Let's remove it.
> Failures
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/junit/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/204/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5482) JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be consistent with other modules

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806283#comment-13806283
 ] 

Hudson commented on HIVE-5482:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5482 : JDBC should depend on httpclient.version and httpcore.version 4.1.3 
to be consistent with other modules (Vaibhav Gumashta via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535620)
* /hive/trunk/ivy/libraries.properties


> JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be 
> consistent with other modules
> ---
>
> Key: HIVE-5482
> URL: https://issues.apache.org/jira/browse/HIVE-5482
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5482.1.patch
>
>
> Currently it depends on 4.2.4 and 4.2.5, which conflicts with thrift-0.9, which 
> depends on 4.1.3.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806290#comment-13806290
 ] 

Hudson commented on HIVE-5403:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5403: Perflogger broken due to HIVE-5403 (Vikram Dixit K via Gunther 
Hagleitner) (gunther: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535598)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
HIVE-5403 : Move loading of filesystem, ugi, metastore client to hive session 
(Vikram Dixit via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535039)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java


> Move loading of filesystem, ugi, metastore client to hive session
> -
>
> Key: HIVE-5403
> URL: https://issues.apache.org/jira/browse/HIVE-5403
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0
>
> Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch, HIVE-5403.3.patch, 
> HIVE-5403.4.patch
>
>
> As part of HIVE-5184, the metastore connection and filesystem loading were done 
> as part of the tez session so as to speed up query times while paying a cost 
> at startup. We can do this more generally in hive so that it applies to both the 
> mapreduce and tez sides of things.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5454) HCatalog runs a partition listing with an empty filter

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806291#comment-13806291
 ] 

Hudson commented on HIVE-5454:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5454 - HCatalog runs a partition listing with an empty filter (Harsh J via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535051)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/impl/HCatInputFormatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatInputFormat.java
* 
/hive/trunk/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/HCatMapReduceTest.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* /hive/trunk/hcatalog/src/docs/src/documentation/content/xdocs/inputoutput.xml
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hcatalog/utils/HBaseReadWrite.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/GroupByAge.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadJson.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadRC.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadText.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadWrite.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/SimpleRead.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreComplex.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreDemo.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreNumbers.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/SumNumbers.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/TypeDataCheck.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteJson.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteRC.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteText.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteTextPartitioned.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHBaseInputFormat.java


> HCatalog runs a partition listing with an empty filter
> --
>
> Key: HIVE-5454
> URL: https://issues.apache.org/jira/browse/HIVE-5454
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Harsh J
>Assignee: Harsh J
> Fix For: 0.13.0
>
> Attachments: D13317.1.patch, D13317.2.patch, D13317.3.patch
>
>
> This is a regression caused by HCATALOG-527, wherein the HCatLoader's way of 
> calling HCatInputFormat causes it to do 2x partition lookups - once without 
> the filter, and then again with the filter.
> For tables with a large number of partitions (10, say), the non-filter lookup 
> proves fatal both to the client ("Read timed out" errors from 
> ThriftMetaStoreClient because the server doesn't respond) and to the server 
> (too much data loaded into the cache, OOME, or slowdown).
> The fix would be to use a single call that also passes the partition filter 
> information, as was the case in the HCatalog 0.4 sources before HCATALOG-527.
> (HCatalog-release-wise, this affects all 0.5.x users)
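
A rough illustration of the single-call approach described above, from the reader's
(MapReduce client) side: the filter is handed to HCatInputFormat together with the
database and table, so only one, already-filtered partition listing is requested from
the metastore. The four-argument setInput overload and the table/filter values shown
here are assumptions for illustration, not necessarily the exact API that shipped.
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;

public class SingleCallFilterExample {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "hcat-filtered-read");

    // Pass the partition filter in the same call that names the table, so the
    // metastore is asked for the matching partitions only once.
    // NOTE: the 4-argument overload is assumed here for illustration.
    HCatInputFormat.setInput(job, "default", "web_logs", "ds='2013-10-27'");

    job.setInputFormatClass(HCatInputFormat.class);
    // ... configure mapper, output format, etc., then submit the job.
  }
}
{noformat}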



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5350) Cleanup exception handling around parallel orderby

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806282#comment-13806282
 ] 

Hudson commented on HIVE-5350:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5350 - Cleanup exception handling around parallel orderby (Navis via Brock 
Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535045)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/PartitionKeySampler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java


> Cleanup exception handling around parallel orderby
> --
>
> Key: HIVE-5350
> URL: https://issues.apache.org/jira/browse/HIVE-5350
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Navis
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: D13617.1.patch
>
>
> I think we should log the message to the console and the full exception to 
> the log:
> ExecDriver:
> {noformat}
> try {
>   handleSampling(driverContext, mWork, job, conf);
>   job.setPartitionerClass(HiveTotalOrderPartitioner.class);
> } catch (Exception e) {
>   console.printInfo("Not enough sampling data.. Rolling back to 
> single reducer task");
>   rWork.setNumReduceTasks(1);
>   job.setNumReduceTasks(1);
> }
> {noformat}
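
The pattern being requested - a short notice on the console, the full stack trace in
the log - can be sketched standalone like this; it is not the actual ExecDriver change,
and the class and method names here are invented for illustration (LOG uses
commons-logging, which Hive already depends on).
{noformat}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ConsoleVsLogExample {
  private static final Log LOG = LogFactory.getLog(ConsoleVsLogExample.class);

  public static void main(String[] args) {
    try {
      sample(); // stands in for handleSampling(driverContext, mWork, job, conf)
    } catch (Exception e) {
      // Short, user-facing notice goes to the console...
      System.out.println("Not enough sampling data. Rolling back to single reducer task");
      // ...while the full exception (message + stack trace) is preserved in the log.
      LOG.warn("Sampling failed, falling back to a single reducer", e);
    }
  }

  private static void sample() throws Exception {
    throw new Exception("simulated sampling failure");
  }
}
{noformat}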



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-784) Support uncorrelated subqueries in the WHERE clause

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806284#comment-13806284
 ] 

Hudson commented on HIVE-784:
-

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-784 : Support uncorrelated subqueries in the WHERE clause (Harish Butani 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535040)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBSubQuery.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_exists_implicit_gby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_in_groupby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_in_select.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/subquery_multiple_cols_in_select.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/subquery_notexists_implicit_gby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_subquery_chain.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_windowing_corr.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_with_or_cond.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_exists.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_in.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_multiinsert.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_notexists.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_notin.q
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_exists_implicit_gby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_in_groupby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_in_select.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_multiple_cols_in_select.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_notexists_implicit_gby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_subquery_chain.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_windowing_corr.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_with_or_cond.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_exists.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_in.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_multiinsert.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_notexists.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_notin.q.out


> Support uncorrelated subqueries in the WHERE clause
> ---
>
> Key: HIVE-784
> URL: https://issues.apache.org/jira/browse/HIVE-784
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Ning Zhang
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: D13443.1.patch, D13443.2.patch, HIVE-784.1.patch.txt, 
> HIVE-784.2.patch, SubQuerySpec.pdf, tpchQueriesUsingSubQueryClauses.sql
>
>
> Hive currently only supports views in the FROM-clause; some Facebook use cases 
> suggest that Hive should support subqueries, such as those connected by 
> IN/EXISTS, in the WHERE-clause. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5560) Hive produces incorrect results on multi-distinct query

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806295#comment-13806295
 ] 

Hudson commented on HIVE-5560:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5560 : Hive produces incorrect results on multi-distinct query (Navis via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535059)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q
* 
/hive/trunk/ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/count.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby2_map_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby2_noskew_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby3_map_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby3_noskew_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_distinct_samekey.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_map_ppr_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_multi_single_reducer.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_ppr_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/limit_pushdown.q.out


> Hive produces incorrect results on multi-distinct query
> ---
>
> Key: HIVE-5560
> URL: https://issues.apache.org/jira/browse/HIVE-5560
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Vikram Dixit K
>Assignee: Navis
> Fix For: 0.13.0
>
> Attachments: D13599.1.patch, D13599.2.patch
>
>
> {noformat}
> select key, count(distinct key) + count(distinct value) from src tablesample 
> (10 ROWS) group by key
> POSTHOOK: type: QUERY
> POSTHOOK: Input: default@src
>  A masked pattern was here 
> 165 1
> val_165 1
> 238 1
> val_238 1
> 255 1
> val_255 1
> 27  1
> val_27  1
> 278 1
> val_278 1
> 311 1
> val_311 1
> 409 1
> val_409 1
> 484 1
> val_484 1
> 86  1
> val_86  1
> 98  1
> val_98  1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5440) HiveServer2 doesn't apply SQL operation's config property

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806292#comment-13806292
 ] 

Hudson commented on HIVE-5440:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5440: HiveServer2 doesn't apply SQL operation's config property (Prasad 
Mujumdar via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535889)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java


> HiveServer2 doesn't apply SQL operation's config property 
> --
>
> Key: HIVE-5440
> URL: https://issues.apache.org/jira/browse/HIVE-5440
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Fix For: 0.13.0
>
> Attachments: HIVE-5440.1.patch, HIVE-5440.2.patch
>
>
> The HiveServer2 thrift IDL includes an optional config overlay map which is 
> currently not used.
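
For illustration, a client could populate that map when building the Thrift request.
This is a sketch only, assuming the generated TExecuteStatementReq class with its
setConfOverlay setter; the package path and property key shown here are illustrative,
not verified against a particular Hive release.
{noformat}
import java.util.HashMap;
import java.util.Map;

import org.apache.hive.service.cli.thrift.TExecuteStatementReq;
import org.apache.hive.service.cli.thrift.TSessionHandle;

public class ConfOverlayExample {
  // Build an ExecuteStatement request that carries per-statement settings in
  // the optional confOverlay map from the HiveServer2 Thrift IDL.
  public static TExecuteStatementReq buildRequest(TSessionHandle session) {
    TExecuteStatementReq req =
        new TExecuteStatementReq(session, "SELECT COUNT(*) FROM src");
    Map<String, String> confOverlay = new HashMap<String, String>();
    confOverlay.put("mapred.reduce.tasks", "1"); // intended to apply to this statement only
    req.setConfOverlay(confOverlay);
    return req;
  }
}
{noformat}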



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5220) Add option for removing intermediate directory for partition, which is empty

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806285#comment-13806285
 ] 

Hudson commented on HIVE-5220:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5220 : Add option for removing intermediate directory for partition, which 
is empty (Navis via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535072)
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java


> Add option for removing intermediate directory for partition, which is empty
> 
>
> Key: HIVE-5220
> URL: https://issues.apache.org/jira/browse/HIVE-5220
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: D12729.2.patch, HIVE-5220.D12729.1.patch
>
>
> For a deeply nested partitioned table, intermediate directories are not removed 
> even when dropping the partitions under them leaves them empty.
> {noformat}
> /deep_part/c=09/d=01
> /deep_part/c=09/d=01/e=01
> /deep_part/c=09/d=01/e=02
> /deep_part/c=09/d=02
> /deep_part/c=09/d=02/e=01
> /deep_part/c=09/d=02/e=02
> {noformat}
> After removing partition (c='09'), the directory tree remains like this:
> {noformat}
> /deep_part/c=09/d=01
> /deep_part/c=09/d=02
> {noformat}
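
The cleanup idea can be sketched independently of the metastore code: after a partition
directory is deleted, walk up its ancestors and remove any that have become empty,
stopping at the table root. This is a minimal sketch using the plain Hadoop FileSystem
API, not the actual HiveMetaStore/Warehouse change.
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EmptyParentCleanup {

  // Remove now-empty ancestors such as /deep_part/c=09/d=01 and /deep_part/c=09,
  // stopping as soon as a non-empty directory or the table root is reached.
  public static void deleteEmptyParents(FileSystem fs, Path deletedPartition, Path tableRoot)
      throws Exception {
    Path current = deletedPartition.getParent();
    while (current != null && !current.equals(tableRoot)) {
      if (fs.exists(current) && fs.listStatus(current).length > 0) {
        break; // this ancestor still holds other partitions
      }
      fs.delete(current, false); // non-recursive delete of an empty directory
      current = current.getParent();
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    deleteEmptyParents(fs, new Path("/deep_part/c=09/d=01/e=01"), new Path("/deep_part"));
  }
}
{noformat}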



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5637) Sporadic minimr test failure

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806293#comment-13806293
 ] 

Hudson commented on HIVE-5637:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5637 : Sporadic minimr test failure (Navis via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535510)
* /hive/trunk/ql/src/test/queries/clientpositive/import_exported_table.q


> Sporadic minimr test failure
> 
>
> Key: HIVE-5637
> URL: https://issues.apache.org/jira/browse/HIVE-5637
> Project: Hive
>  Issue Type: Test
>  Components: Tests
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5637.1.patch.txt
>
>
> {noformat}
> ant test -Dtestcase=TestMinimrCliDriver 
> -Dqfile=import_exported_table.q,load_hdfs_file_with_space_in_the_name.q 
> -Dmodule=ql
> {noformat}
> Fails with message like this.
> {noformat}
> Begin query: load_hdfs_file_with_space_in_the_name.q
> mkdir: cannot create directory hdfs:///tmp/test/: File exists
> Exception: Client Execution failed with error code = -1
> See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
> more logs.
> junit.framework.AssertionFailedError: Client Execution failed with error code 
> = -1
> See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
> more logs.
> at junit.framework.Assert.fail(Assert.java:47)
> at 
> org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:349)
> at 
> org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_load_hdfs_file_with_space_in_the_name(TestMinimrCliDriver.java:291)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:154)
> at junit.framework.TestCase.runBare(TestCase.java:127)
> at junit.framework.TestResult$1.protect(TestResult.java:106)
> at junit.framework.TestResult.runProtected(TestResult.java:124)
> at junit.framework.TestResult.run(TestResult.java:109)
> at junit.framework.TestCase.run(TestCase.java:118)
> at junit.framework.TestSuite.runTest(TestSuite.java:208)
> at junit.framework.TestSuite.run(TestSuite.java:203)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5629) Fix two javadoc failures in HCatalog

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806279#comment-13806279
 ] 

Hudson commented on HIVE-5629:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5629 : Fix two javadoc failures in HCatalog (Brock Noland via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535513)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/InputJobInfo.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java


> Fix two javadoc failures in HCatalog
> 
>
> Key: HIVE-5629
> URL: https://issues.apache.org/jira/browse/HIVE-5629
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5629.patch
>
>
> I am seeing two javadoc failures on HCatalog. These are not being seen by 
> PTest and indeed I cannot reproduce on my Mac but can on Linux. Regardless 
> they should be fixed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5605) AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation should be removed from org.apache.hive.service.cli.operation

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806280#comment-13806280
 ] 

Hudson commented on HIVE-5605:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5605 - AddResourceOperation, DeleteResourceOperation, DfsOperation, 
SetOperation should be removed from org.apache.hive.service.cli.operation 
(Vaibhav Gumashta via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535043)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/AddResourceOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/DeleteResourceOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/DfsOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SetOperation.java


> AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation 
> should be removed from org.apache.hive.service.cli.operation 
> ---
>
> Key: HIVE-5605
> URL: https://issues.apache.org/jira/browse/HIVE-5605
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5605.1.patch
>
>
> These classes are not used, because the processing for Add, Delete, DFS and Set 
> commands is done by HiveCommandOperation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5628) ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with Test not end with it

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806277#comment-13806277
 ] 

Hudson commented on HIVE-5628:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5628 : ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should 
start with Test not end with it (Brock Noland via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535761)
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/DynamicMultiDimeCollectionTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPrunnerTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestDynamicMultiDimeCollection.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestListBucketingPrunner.java


> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with 
> Test not end with it
> --
>
> Key: HIVE-5628
> URL: https://issues.apache.org/jira/browse/HIVE-5628
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5628.patch
>
>
> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest will not be run 
> by PTest because they end with Test, and PTest requires that test class names start with Test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5560) Hive produces incorrect results on multi-distinct query

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806268#comment-13806268
 ] 

Hudson commented on HIVE-5560:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5560 : Hive produces incorrect results on multi-distinct query (Navis via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535059)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q
* 
/hive/trunk/ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/count.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby2_map_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby2_noskew_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby3_map_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby3_noskew_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_distinct_samekey.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_map_ppr_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_multi_single_reducer.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_ppr_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/limit_pushdown.q.out


> Hive produces incorrect results on multi-distinct query
> ---
>
> Key: HIVE-5560
> URL: https://issues.apache.org/jira/browse/HIVE-5560
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Vikram Dixit K
>Assignee: Navis
> Fix For: 0.13.0
>
> Attachments: D13599.1.patch, D13599.2.patch
>
>
> {noformat}
> select key, count(distinct key) + count(distinct value) from src tablesample 
> (10 ROWS) group by key
> POSTHOOK: type: QUERY
> POSTHOOK: Input: default@src
>  A masked pattern was here 
> 165 1
> val_165 1
> 238 1
> val_238 1
> 255 1
> val_255 1
> 27  1
> val_27  1
> 278 1
> val_278 1
> 311 1
> val_311 1
> 409 1
> val_409 1
> 484 1
> val_484 1
> 86  1
> val_86  1
> 98  1
> val_98  1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5440) HiveServer2 doesn't apply SQL operation's config property

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806265#comment-13806265
 ] 

Hudson commented on HIVE-5440:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5440: HiveServer2 doesn't apply SQL operation's config property (Prasad 
Mujumdar via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535889)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java


> HiveServer2 doesn't apply SQL operation's config property 
> --
>
> Key: HIVE-5440
> URL: https://issues.apache.org/jira/browse/HIVE-5440
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Fix For: 0.13.0
>
> Attachments: HIVE-5440.1.patch, HIVE-5440.2.patch
>
>
> The HiveServer2 thrift IDL includes an optional config overlay map which is 
> currently not used.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5552) Merging of QBJoinTrees doesn't handle filter pushdowns correctly

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806267#comment-13806267
 ] 

Hudson commented on HIVE-5552:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5552 : Merging of QBJoinTrees doesnt handle filter pushdowns correctly 
(Harish Butani via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535858)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_merging.q
* /hive/trunk/ql/src/test/results/clientpositive/join_merging.q.out


> Merging of QBJoinTrees doesn't handle filter pushdowns correctly
> 
>
> Key: HIVE-5552
> URL: https://issues.apache.org/jira/browse/HIVE-5552
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-5552.1.patch, HIVE-5552.2.patch
>
>
> The following query fails:
> (this based on the schema from auto_join_filters.q)
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b on a.value = b.value  RIGHT OUTER 
> JOIN myinput1 c 
>  ON 
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Whereas this query succeeds
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b RIGHT OUTER JOIN myinput1 c 
>  ON  a.value = b.value and
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Pushing the first condition to the first join triggers a merge of the 2 
> QBJoinTrees. During the merge, all the right-side filters identified for pushing 
> are assumed to refer to the merging table (b in this case). But the pushable 
> filters can refer to any left table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5350) Cleanup exception handling around parallel orderby

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806255#comment-13806255
 ] 

Hudson commented on HIVE-5350:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5350 - Cleanup exception handling around parallel orderby (Navis via Brock 
Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535045)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/PartitionKeySampler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java


> Cleanup exception handling around parallel orderby
> --
>
> Key: HIVE-5350
> URL: https://issues.apache.org/jira/browse/HIVE-5350
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Navis
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: D13617.1.patch
>
>
> I think we should log the message to the console and the full exception to 
> the log:
> ExecDriver:
> {noformat}
> try {
>   handleSampling(driverContext, mWork, job, conf);
>   job.setPartitionerClass(HiveTotalOrderPartitioner.class);
> } catch (Exception e) {
>   console.printInfo("Not enough sampling data.. Rolling back to 
> single reducer task");
>   rWork.setNumReduceTasks(1);
>   job.setNumReduceTasks(1);
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5454) HCatalog runs a partition listing with an empty filter

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806264#comment-13806264
 ] 

Hudson commented on HIVE-5454:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5454 - HCatalog runs a partition listing with an empty filter (Harsh J via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535051)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/impl/HCatInputFormatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatInputFormat.java
* 
/hive/trunk/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/HCatMapReduceTest.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* /hive/trunk/hcatalog/src/docs/src/documentation/content/xdocs/inputoutput.xml
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hcatalog/utils/HBaseReadWrite.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/GroupByAge.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadJson.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadRC.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadText.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadWrite.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/SimpleRead.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreComplex.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreDemo.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreNumbers.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/SumNumbers.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/TypeDataCheck.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteJson.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteRC.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteText.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteTextPartitioned.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHBaseInputFormat.java


> HCatalog runs a partition listing with an empty filter
> --
>
> Key: HIVE-5454
> URL: https://issues.apache.org/jira/browse/HIVE-5454
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Harsh J
>Assignee: Harsh J
> Fix For: 0.13.0
>
> Attachments: D13317.1.patch, D13317.2.patch, D13317.3.patch
>
>
> This is a regression caused by HCATALOG-527, wherein the HCatLoader's way of 
> calling HCatInputFormat causes it to do 2x partition lookups - once without 
> the filter, and then again with the filter.
> For tables with a large number of partitions (10, say), the non-filter lookup 
> proves fatal both to the client ("Read timed out" errors from 
> ThriftMetaStoreClient because the server doesn't respond) and to the server 
> (too much data loaded into the cache, OOME, or slowdown).
> The fix would be to use a single call that also passes the partition filter 
> information, as was the case in the HCatalog 0.4 sources before HCATALOG-527.
> (HCatalog-release-wise, this affects all 0.5.x users)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5430) Refactor VectorizationContext and handle NOT expression with nulls.

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806262#comment-13806262
 ] 

Hudson commented on HIVE-5430:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5430 : Refactor VectorizationContext and handle NOT expression with nulls. 
(Jitendra Nath Pandey via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535055)
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareScalar.txt
* /hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryFunc.txt
* /hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryMinus.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ScalarArithmeticColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/TestTemplates/TestColumnScalarFilterVectorExpressionEvaluation.txt
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorFilterOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSelectOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedExpressions.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/AbstractFilterStringColLikeStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColAndCol.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColOrCol.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ConstantVectorExpression.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColAndScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColOrScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprAndExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprOrExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterNotExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarAndColumn.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarOrColumn.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColLikeStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColRegExpStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseDoubleToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseLongToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLongToString.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerDoubleToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerLongToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ex

[jira] [Commented] (HIVE-5220) Add option for removing intermediate directory for partition, which is empty

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806258#comment-13806258
 ] 

Hudson commented on HIVE-5220:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5220 : Add option for removing intermediate directory for partition, which 
is empty (Navis via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535072)
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java


> Add option for removing intermediate directory for partition, which is empty
> 
>
> Key: HIVE-5220
> URL: https://issues.apache.org/jira/browse/HIVE-5220
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: D12729.2.patch, HIVE-5220.D12729.1.patch
>
>
> For a deeply nested partitioned table, intermediate directories are not removed 
> even when dropping the partitions under them leaves them empty.
> {noformat}
> /deep_part/c=09/d=01
> /deep_part/c=09/d=01/e=01
> /deep_part/c=09/d=01/e=02
> /deep_part/c=09/d=02
> /deep_part/c=09/d=02/e=01
> /deep_part/c=09/d=02/e=02
> {noformat}
> After removing partition (c='09'), the directory tree remains like this:
> {noformat}
> /deep_part/c=09/d=01
> /deep_part/c=09/d=02
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5216) Need to annotate public API in HCatalog

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806259#comment-13806259
 ] 

Hudson commented on HIVE-5216:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5216 : Need to annotate public API in HCatalog (Eugene Koifman via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535535)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/HCatRecord.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/schema/HCatSchema.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatWriter.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatStorer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java


> Need to annotate public API in HCatalog
> ---
>
> Key: HIVE-5216
> URL: https://issues.apache.org/jira/browse/HIVE-5216
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5216.2.patch, HIVE-5216.patch
>
>
> Need to annotate which API is considered public, using something like:
> @InterfaceAudience.Public
> @InterfaceStability.Evolving
> Currently this is what is considered (at a minimum) public API
> HCatLoader
> HCatStorer
> HCatInputFormat
> HCatOutputFormat
> HCatReader
> HCatWriter
> HCatRecord
> HCatSchema
> This is needed so that clients/dependent projects know which API they can 
> rely on and which can change w/o notice.
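
As a sketch, annotating a public class with the Hadoop classification annotations would
look roughly like this; whether HCatalog reuses Hadoop's annotations or ships its own
copies is not settled by the issue text, and the class name below is made up.
{noformat}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * Marked as public API: downstream projects may rely on it, while the
 * interface itself may still evolve between releases.
 */
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class ExampleHCatFacade {
  public void read() {
    // supported, public entry point
  }
}
{noformat}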



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5605) AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation should be removed from org.apache.hive.service.cli.operation

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806253#comment-13806253
 ] 

Hudson commented on HIVE-5605:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5605 - AddResourceOperation, DeleteResourceOperation, DfsOperation, 
SetOperation should be removed from org.apache.hive.service.cli.operation 
(Vaibhav Gumashta via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535043)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/AddResourceOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/DeleteResourceOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/DfsOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SetOperation.java


> AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation 
> should be removed from org.apache.hive.service.cli.operation 
> ---
>
> Key: HIVE-5605
> URL: https://issues.apache.org/jira/browse/HIVE-5605
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5605.1.patch
>
>
> These classes are not used, because the processing for Add, Delete, DFS and Set 
> commands is done by HiveCommandOperation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5577) Remove TestNegativeCliDriver script_broken_pipe1

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806261#comment-13806261
 ] 

Hudson commented on HIVE-5577:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5577 : Remove TestNegativeCliDriver script_broken_pipe1 (Brock Noland via 
Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535643)
* /hive/trunk/ql/src/test/queries/clientnegative/script_broken_pipe1.q
* /hive/trunk/ql/src/test/results/clientnegative/script_broken_pipe1.q.out


> Remove TestNegativeCliDriver script_broken_pipe1
> 
>
> Key: HIVE-5577
> URL: https://issues.apache.org/jira/browse/HIVE-5577
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5577.1.patch.txt
>
>
> TestNegativeCliDriver script_broken_pipe1 is extremely flaky and not a 
> terribly important test. Let's remove it.
> Failures
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/junit/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/204/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5599) Change default logging level to INFO

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806260#comment-13806260
 ] 

Hudson commented on HIVE-5599:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5599 - Change default logging level to INFO (Brock Noland, Reviewed by 
Thejas M Nair) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535047)
* /hive/trunk/common/src/java/conf/hive-log4j.properties


> Change default logging level to INFO
> 
>
> Key: HIVE-5599
> URL: https://issues.apache.org/jira/browse/HIVE-5599
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5599.patch
>
>
> The default logging level is warn:
> https://github.com/apache/hive/blob/trunk/common/src/java/conf/hive-log4j.properties#L19
> but Hive logs lots of good information at INFO level. Additionally, most 
> Hadoop projects log at INFO by default. Let's change the logging level to 
> INFO by default.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5619) Allow concat() to accept mixed string/binary args

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806251#comment-13806251
 ] 

Hudson commented on HIVE-5619:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5619 : Allow concat() to accept mixed string/binary args (Jason Dere via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535621)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_concat.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_concat.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java


> Allow concat() to accept mixed string/binary args
> -
>
> Key: HIVE-5619
> URL: https://issues.apache.org/jira/browse/HIVE-5619
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5619.1.patch
>
>
> concat() is currently strict about allowing either all binary or all 
> non-binary arguments. Loosen this to permit mixed params.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-784) Support uncorrelated subqueries in the WHERE clause

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806257#comment-13806257
 ] 

Hudson commented on HIVE-784:
-

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-784 : Support uncorrelated subqueries in the WHERE clause (Harish Butani 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535040)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBSubQuery.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_exists_implicit_gby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_in_groupby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_in_select.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/subquery_multiple_cols_in_select.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/subquery_notexists_implicit_gby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_subquery_chain.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_windowing_corr.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_with_or_cond.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_exists.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_in.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_multiinsert.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_notexists.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_notin.q
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_exists_implicit_gby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_in_groupby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_in_select.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_multiple_cols_in_select.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_notexists_implicit_gby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_subquery_chain.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_windowing_corr.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_with_or_cond.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_exists.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_in.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_multiinsert.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_notexists.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_notin.q.out


> Support uncorrelated subqueries in the WHERE clause
> ---
>
> Key: HIVE-784
> URL: https://issues.apache.org/jira/browse/HIVE-784
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Ning Zhang
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: D13443.1.patch, D13443.2.patch, HIVE-784.1.patch.txt, 
> HIVE-784.2.patch, SubQuerySpec.pdf, tpchQueriesUsingSubQueryClauses.sql
>
>
> Hive currently only supports views in the FROM-clause; some Facebook use cases 
> suggest that Hive should support subqueries, such as those connected by 
> IN/EXISTS, in the WHERE-clause. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5628) ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with Test not end with it

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806250#comment-13806250
 ] 

Hudson commented on HIVE-5628:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5628 : ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should 
start with Test not end with it (Brock Noland via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535761)
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/DynamicMultiDimeCollectionTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPrunnerTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestDynamicMultiDimeCollection.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestListBucketingPrunner.java


> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with 
> Test not end with it
> --
>
> Key: HIVE-5628
> URL: https://issues.apache.org/jira/browse/HIVE-5628
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5628.patch
>
>
> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest will not be run 
> by PTest because they end with Test, and PTest requires that test class names start with Test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5629) Fix two javadoc failures in HCatalog

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806252#comment-13806252
 ] 

Hudson commented on HIVE-5629:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5629 : Fix two javadoc failures in HCatalog (Brock Noland via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535513)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/InputJobInfo.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java


> Fix two javadoc failures in HCatalog
> 
>
> Key: HIVE-5629
> URL: https://issues.apache.org/jira/browse/HIVE-5629
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5629.patch
>
>
> I am seeing two javadoc failures on HCatalog. These are not being seen by 
> PTest and indeed I cannot reproduce on my Mac but can on Linux. Regardless 
> they should be fixed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806263#comment-13806263
 ] 

Hudson commented on HIVE-5403:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5403: Perflogger broken due to HIVE-5403 (Vikram Dixit K via Gunther 
Hagleitner) (gunther: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535598)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
HIVE-5403 : Move loading of filesystem, ugi, metastore client to hive session 
(Vikram Dixit via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535039)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java


> Move loading of filesystem, ugi, metastore client to hive session
> -
>
> Key: HIVE-5403
> URL: https://issues.apache.org/jira/browse/HIVE-5403
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0
>
> Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch, HIVE-5403.3.patch, 
> HIVE-5403.4.patch
>
>
> As part of HIVE-5184, establishing the metastore connection and loading the 
> filesystem were done as part of the Tez session, so as to speed up query times 
> while paying a cost at startup. We can do this more generally in Hive so that it 
> applies to both the MapReduce and Tez sides of things.
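
The general idea - warm up expensive, connection-like state once at session start rather
than on the first query - can be sketched like this; it is not the actual SessionState
change, and the helper name is invented. The metastore client setup is only mentioned in
a comment so the sketch stays self-contained.
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class EagerSessionInit {

  // Resolve the current user and default filesystem eagerly so that the first
  // query of the session does not pay for this initialization.
  public static void warmUp(Configuration conf) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    System.out.println("Session user: " + ugi.getShortUserName());

    FileSystem fs = FileSystem.get(conf);
    System.out.println("Default FS: " + fs.getUri());

    // In Hive itself the metastore client connection would also be opened here.
  }

  public static void main(String[] args) throws Exception {
    warmUp(new Configuration());
  }
}
{noformat}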



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5482) JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be consistent with other modules

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806256#comment-13806256
 ] 

Hudson commented on HIVE-5482:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5482 : JDBC should depend on httpclient.version and httpcore.version 4.1.3 
to be consistent with other modules (Vaibhav Gumashta via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535620)
* /hive/trunk/ivy/libraries.properties


> JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be 
> consistent with other modules
> ---
>
> Key: HIVE-5482
> URL: https://issues.apache.org/jira/browse/HIVE-5482
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5482.1.patch
>
>
> JDBC currently depends on 4.2.4 and 4.2.5, which conflicts with thrift-0.9, 
> which depends on 4.1.3.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5625) Fix issue with metastore version restriction test.

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806254#comment-13806254
 ] 

Hudson commented on HIVE-5625:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5625 - Fix issue with metastore version restriction test. (Vikram Dixit K 
via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535402)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java


> Fix issue with metastore version restriction test.
> --
>
> Key: HIVE-5625
> URL: https://issues.apache.org/jira/browse/HIVE-5625
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.13.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0
>
> Attachments: HIVE-5625.1.patch
>
>
> Based on Brock's comments, the change made in HIVE-5403 changed the nature of 
> the test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5637) Sporadic minimr test failure

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806266#comment-13806266
 ] 

Hudson commented on HIVE-5637:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5637 : Sporadic minimr test failure (Navis via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535510)
* /hive/trunk/ql/src/test/queries/clientpositive/import_exported_table.q


> Sporadic minimr test failure
> 
>
> Key: HIVE-5637
> URL: https://issues.apache.org/jira/browse/HIVE-5637
> Project: Hive
>  Issue Type: Test
>  Components: Tests
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5637.1.patch.txt
>
>
> {noformat}
> ant test -Dtestcase=TestMinimrCliDriver 
> -Dqfile=import_exported_table.q,load_hdfs_file_with_space_in_the_name.q 
> -Dmodule=ql
> {noformat}
> Fails with a message like this.
> {noformat}
> Begin query: load_hdfs_file_with_space_in_the_name.q
> mkdir: cannot create directory hdfs:///tmp/test/: File exists
> Exception: Client Execution failed with error code = -1
> See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
> more logs.
> junit.framework.AssertionFailedError: Client Execution failed with error code 
> = -1
> See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
> more logs.
> at junit.framework.Assert.fail(Assert.java:47)
> at 
> org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:349)
> at 
> org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_load_hdfs_file_with_space_in_the_name(TestMinimrCliDriver.java:291)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:154)
> at junit.framework.TestCase.runBare(TestCase.java:127)
> at junit.framework.TestResult$1.protect(TestResult.java:106)
> at junit.framework.TestResult.runProtected(TestResult.java:124)
> at junit.framework.TestResult.run(TestResult.java:109)
> at junit.framework.TestCase.run(TestCase.java:118)
> at junit.framework.TestSuite.runTest(TestSuite.java:208)
> at junit.framework.TestSuite.run(TestSuite.java:203)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5511) percentComplete returned by job status from WebHCat is null

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806249#comment-13806249
 ] 

Hudson commented on HIVE-5511:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5511 : percentComplete returned by job status from WebHCat is null (Eugene 
Koifman via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535796)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission_streaming.conf
* /hive/trunk/hcatalog/webhcat/svr/src/main/bin/webhcat_config.sh
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/config/override-container-log4j.properties
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobSubmissionConstants.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LaunchMapper.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTrivialExecService.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/mapred/WebHCatJTShim20S.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/mapred/WebHCatJTShim23.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> percentComplete returned by job status from WebHCat is null
> ---
>
> Key: HIVE-5511
> URL: https://issues.apache.org/jira/browse/HIVE-5511
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5511.3.patch, HIVE-5511.5.patch
>
>
> In Hadoop 1 the logging from MR is sent to stderr. In Hadoop 2 it goes, by 
> default, to syslog. templeton.tool.LaunchMapper expects to see the output on 
> stderr in order to produce 'percentComplete' in the job status.
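
As a hedged illustration of what the controller has to do, the sketch below parses a typical Hadoop MR console progress line (e.g. "map 75% reduce 10%") out of a line of stderr. The class name and exact line format are assumptions for the example; the real LaunchMapper logic is more involved.
{noformat}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PercentCompleteSketch {
  // Progress fragment as it appears in typical MR console output on stderr.
  private static final Pattern PROGRESS =
      Pattern.compile("map\\s+(\\d+)%\\s+reduce\\s+(\\d+)%");

  // Return a percentComplete-style string for one stderr line,
  // or null when the line carries no progress information.
  public static String percentComplete(String stderrLine) {
    Matcher m = PROGRESS.matcher(stderrLine);
    if (!m.find()) {
      return null;
    }
    return "map " + m.group(1) + "% reduce " + m.group(2) + "%";
  }

  public static void main(String[] args) {
    String line = "13/10/27 12:00:01 INFO mapred.JobClient:  map 75% reduce 10%";
    System.out.println(percentComplete(line));   // map 75% reduce 10%
  }
}
{noformat}
If the container logs go to syslog instead of stderr (the Hadoop 2 default), lines like the one above never reach this parsing step, which is why percentComplete comes back null.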



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5628) ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with Test not end with it

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806126#comment-13806126
 ] 

Hudson commented on HIVE-5628:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #522 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/522/])
HIVE-5628 : ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should 
start with Test not end with it (Brock Noland via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535761)
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/DynamicMultiDimeCollectionTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPrunnerTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestDynamicMultiDimeCollection.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestListBucketingPrunner.java


> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with 
> Test not end with it
> --
>
> Key: HIVE-5628
> URL: https://issues.apache.org/jira/browse/HIVE-5628
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5628.patch
>
>
> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest will not be run 
> by PTest because their names end with Test, and PTest requires test class names 
> to start with Test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5552) Merging of QBJoinTrees doesn't handle filter pushdowns correctly

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806129#comment-13806129
 ] 

Hudson commented on HIVE-5552:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #522 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/522/])
HIVE-5552 : Merging of QBJoinTrees doesnt handle filter pushdowns correctly 
(Harish Butani via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535858)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_merging.q
* /hive/trunk/ql/src/test/results/clientpositive/join_merging.q.out


> Merging of QBJoinTrees doesn't handle filter pushdowns correctly
> 
>
> Key: HIVE-5552
> URL: https://issues.apache.org/jira/browse/HIVE-5552
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-5552.1.patch, HIVE-5552.2.patch
>
>
> The following query fails:
> (this is based on the schema from auto_join_filters.q)
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b on a.value = b.value  RIGHT OUTER 
> JOIN myinput1 c 
>  ON 
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Whereas this query succeeds
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b RIGHT OUTER JOIN myinput1 c 
>  ON  a.value = b.value and
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Pushing the first condition to the first join triggers a merge of the two 
> QBJoinTrees. During the merge, all the right-side filters identified for pushing 
> are assumed to refer to the merging table (b in this case), but the pushable 
> filters can refer to any left table.
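
To make the routing idea concrete, here is a small, self-contained Java sketch: when a pushed filter is attached during a merge, look up which left-side alias it actually references instead of assuming it belongs to the merging table. The types and method are hypothetical and intentionally much simpler than Hive's QBJoinTree handling.
{noformat}
import java.util.Arrays;
import java.util.List;

public class FilterRoutingSketch {
  // Hypothetical pushed filter: an expression plus the alias it references.
  static class PushedFilter {
    final String expr;
    final String referencedAlias;
    PushedFilter(String expr, String referencedAlias) {
      this.expr = expr;
      this.referencedAlias = referencedAlias;
    }
  }

  // Attach the filter to the join position of the alias it references,
  // falling back to the merging table only when nothing else matches.
  static int targetPosition(PushedFilter f, List<String> leftAliases, String mergingAlias) {
    int pos = leftAliases.indexOf(f.referencedAlias);
    return pos >= 0 ? pos : leftAliases.indexOf(mergingAlias);
  }

  public static void main(String[] args) {
    List<String> leftAliases = Arrays.asList("a", "b");
    // "a.key > 40" references table a, not the merging table b.
    PushedFilter f = new PushedFilter("a.key > 40", "a");
    System.out.println(targetPosition(f, leftAliases, "b"));   // 0, i.e. table a
  }
}
{noformat}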



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5511) percentComplete returned by job status from WebHCat is null

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806127#comment-13806127
 ] 

Hudson commented on HIVE-5511:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #522 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/522/])
HIVE-5511 : percentComplete returned by job status from WebHCat is null (Eugene 
Koifman via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535796)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission_streaming.conf
* /hive/trunk/hcatalog/webhcat/svr/src/main/bin/webhcat_config.sh
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/config/override-container-log4j.properties
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobSubmissionConstants.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LaunchMapper.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTrivialExecService.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/mapred/WebHCatJTShim20S.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/mapred/WebHCatJTShim23.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> percentComplete returned by job status from WebHCat is null
> ---
>
> Key: HIVE-5511
> URL: https://issues.apache.org/jira/browse/HIVE-5511
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5511.3.patch, HIVE-5511.5.patch
>
>
> In Hadoop 1 the logging from MR is sent to stderr. In Hadoop 2 it goes, by 
> default, to syslog. templeton.tool.LaunchMapper expects to see the output on 
> stderr in order to produce 'percentComplete' in the job status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5440) HiveServer2 doesn't apply SQL operation's config property

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806128#comment-13806128
 ] 

Hudson commented on HIVE-5440:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #522 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/522/])
HIVE-5440: HiveServer2 doesn't apply SQL operation's config property (Prasad 
Mujumdar via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535889)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java


> HiveServer2 doesn't apply SQL operation's config property 
> --
>
> Key: HIVE-5440
> URL: https://issues.apache.org/jira/browse/HIVE-5440
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Fix For: 0.13.0
>
> Attachments: HIVE-5440.1.patch, HIVE-5440.2.patch
>
>
> The HiveServer2 thrift IDL includes an optional config overlay map which is 
> currently not used.
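
For illustration, a Thrift client could carry per-statement settings in that overlay map roughly as sketched below. The package and generated class names (TExecuteStatementReq, TSessionHandle, setConfOverlay) are assumed from the IDL of that era and may differ between releases, so treat this as a hedged example rather than the actual patch.
{noformat}
import java.util.HashMap;
import java.util.Map;

import org.apache.hive.service.cli.thrift.TExecuteStatementReq;
import org.apache.hive.service.cli.thrift.TSessionHandle;

public class ConfOverlaySketch {
  // Build an execute-statement request that carries per-operation settings
  // in the optional confOverlay map defined by the Thrift IDL.
  public static TExecuteStatementReq buildRequest(TSessionHandle session, String sql) {
    TExecuteStatementReq req = new TExecuteStatementReq(session, sql);
    Map<String, String> confOverlay = new HashMap<String, String>();
    // Illustrative setting applied only to this SQL operation.
    confOverlay.put("hive.exec.parallel", "true");
    req.setConfOverlay(confOverlay);
    return req;
  }
}
{noformat}
The fix is then about HiveServer2 actually reading this map and applying it to the operation's configuration instead of ignoring it.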



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5511) percentComplete returned by job status from WebHCat is null

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806058#comment-13806058
 ] 

Hudson commented on HIVE-5511:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2422 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2422/])
HIVE-5511 : percentComplete returned by job status from WebHCat is null (Eugene 
Koifman via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535796)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission_streaming.conf
* /hive/trunk/hcatalog/webhcat/svr/src/main/bin/webhcat_config.sh
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/config/override-container-log4j.properties
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobSubmissionConstants.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LaunchMapper.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTrivialExecService.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/mapred/WebHCatJTShim20S.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/mapred/WebHCatJTShim23.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> percentComplete returned by job status from WebHCat is null
> ---
>
> Key: HIVE-5511
> URL: https://issues.apache.org/jira/browse/HIVE-5511
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5511.3.patch, HIVE-5511.5.patch
>
>
> In Hadoop 1 the logging from MR is sent to stderr. In Hadoop 2 it goes, by 
> default, to syslog. templeton.tool.LaunchMapper expects to see the output on 
> stderr in order to produce 'percentComplete' in the job status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5552) Merging of QBJoinTrees doesn't handle filter pushdowns correctly

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806060#comment-13806060
 ] 

Hudson commented on HIVE-5552:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2422 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2422/])
HIVE-5552 : Merging of QBJoinTrees doesnt handle filter pushdowns correctly 
(Harish Butani via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535858)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_merging.q
* /hive/trunk/ql/src/test/results/clientpositive/join_merging.q.out


> Merging of QBJoinTrees doesn't handle filter pushdowns correctly
> 
>
> Key: HIVE-5552
> URL: https://issues.apache.org/jira/browse/HIVE-5552
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-5552.1.patch, HIVE-5552.2.patch
>
>
> The following query fails:
> (this is based on the schema from auto_join_filters.q)
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b on a.value = b.value  RIGHT OUTER 
> JOIN myinput1 c 
>  ON 
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Whereas this query succeeds
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b RIGHT OUTER JOIN myinput1 c 
>  ON  a.value = b.value and
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Pushing the first condition to the first join triggers a merge of the two 
> QBJoinTrees. During the merge, all the right-side filters identified for pushing 
> are assumed to refer to the merging table (b in this case), but the pushable 
> filters can refer to any left table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5440) HiveServer2 doesn't apply SQL operation's config property

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806059#comment-13806059
 ] 

Hudson commented on HIVE-5440:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2422 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2422/])
HIVE-5440: HiveServer2 doesn't apply SQL operation's config property (Prasad 
Mujumdar via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535889)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java


> HiveServer2 doesn't apply SQL operation's config property 
> --
>
> Key: HIVE-5440
> URL: https://issues.apache.org/jira/browse/HIVE-5440
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Fix For: 0.13.0
>
> Attachments: HIVE-5440.1.patch, HIVE-5440.2.patch
>
>
> The HiveServer2 thrift IDL includes an optional config overlay map which is 
> currently not used.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5629) Fix two javadoc failures in HCatalog

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805819#comment-13805819
 ] 

Hudson commented on HIVE-5629:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #521 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/521/])
HIVE-5629 : Fix two javadoc failures in HCatalog (Brock Noland via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535513)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/InputJobInfo.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java


> Fix two javadoc failures in HCatalog
> 
>
> Key: HIVE-5629
> URL: https://issues.apache.org/jira/browse/HIVE-5629
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5629.patch
>
>
> I am seeing two javadoc failures on HCatalog. These are not being seen by 
> PTest, and indeed I cannot reproduce them on my Mac but can on Linux. Regardless, 
> they should be fixed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5216) Need to annotate public API in HCatalog

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805824#comment-13805824
 ] 

Hudson commented on HIVE-5216:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #521 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/521/])
HIVE-5216 : Need to annotate public API in HCatalog (Eugene Koifman via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535535)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/HCatRecord.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/schema/HCatSchema.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatWriter.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatStorer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java


> Need to annotate public API in HCatalog
> ---
>
> Key: HIVE-5216
> URL: https://issues.apache.org/jira/browse/HIVE-5216
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5216.2.patch, HIVE-5216.patch
>
>
> We need to annotate which API is considered public, using something like
> @InterfaceAudience.Public
> @InterfaceStability.Evolving
> Currently this is what is considered (at a minimum) the public API:
> HCatLoader
> HCatStorer
> HCatInputFormat
> HCatOutputFormat
> HCatReader
> HCatWriter
> HCatRecord
> HCatSchema
> This is needed so that clients/dependent projects know which API they can 
> rely on and which can change without notice.
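
As a hedged sketch of what the annotations look like in practice, assuming the Hadoop classification annotations (org.apache.hadoop.classification) are the ones used; the class below is a stand-in, not the real HCatLoader:
{noformat}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Stand-in for one of the public classes listed above (e.g. HCatLoader).
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class HCatLoaderExample {
  // Public, supported entry point that dependent projects may rely on;
  // "Evolving" signals it can still change between minor releases.
  public String describe() {
    return "public API, evolving stability";
  }
}
{noformat}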



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5619) Allow concat() to accept mixed string/binary args

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805818#comment-13805818
 ] 

Hudson commented on HIVE-5619:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #521 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/521/])
HIVE-5619 : Allow concat() to accept mixed string/binary args (Jason Dere via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535621)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_concat.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_concat.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java


> Allow concat() to accept mixed string/binary args
> -
>
> Key: HIVE-5619
> URL: https://issues.apache.org/jira/browse/HIVE-5619
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5619.1.patch
>
>
> concat() is currently strict about allowing either all binary or all 
> non-binary arguments. Loosen this to permit mixed params.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5482) JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be consistent with other modules

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805822#comment-13805822
 ] 

Hudson commented on HIVE-5482:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #521 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/521/])
HIVE-5482 : JDBC should depend on httpclient.version and httpcore.version 4.1.3 
to be consistent with other modules (Vaibhav Gumashta via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535620)
* /hive/trunk/ivy/libraries.properties


> JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be 
> consistent with other modules
> ---
>
> Key: HIVE-5482
> URL: https://issues.apache.org/jira/browse/HIVE-5482
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5482.1.patch
>
>
> It currently depends on 4.2.4 and 4.2.5, which conflicts with thrift-0.9, which 
> depends on 4.1.3.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805821#comment-13805821
 ] 

Hudson commented on HIVE-5403:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #521 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/521/])
HIVE-5403: Perflogger broken due to HIVE-5403 (Vikram Dixit K via Gunther 
Hagleitner) (gunther: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535598)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java


> Move loading of filesystem, ugi, metastore client to hive session
> -
>
> Key: HIVE-5403
> URL: https://issues.apache.org/jira/browse/HIVE-5403
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0
>
> Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch, HIVE-5403.3.patch, 
> HIVE-5403.4.patch
>
>
> As part of HIVE-5184, the metastore connection and filesystem loading were done 
> as part of the Tez session so as to speed up query times while paying a cost 
> at startup. We can do this more generally in Hive so that it applies to both 
> the MapReduce and Tez sides of things.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5577) Remove TestNegativeCliDriver script_broken_pipe1

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805820#comment-13805820
 ] 

Hudson commented on HIVE-5577:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #521 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/521/])
HIVE-5577 : Remove TestNegativeCliDriver script_broken_pipe1 (Brock Noland via 
Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535643)
* /hive/trunk/ql/src/test/queries/clientnegative/script_broken_pipe1.q
* /hive/trunk/ql/src/test/results/clientnegative/script_broken_pipe1.q.out


> Remove TestNegativeCliDriver script_broken_pipe1
> 
>
> Key: HIVE-5577
> URL: https://issues.apache.org/jira/browse/HIVE-5577
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5577.1.patch.txt
>
>
> TestNegativeCliDriver script_broken_pipe1 is extremely flaky and not a 
> terribly important test. Let's remove it.
> Failures
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/junit/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/204/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5637) Sporadic minimr test failure

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805823#comment-13805823
 ] 

Hudson commented on HIVE-5637:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #521 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/521/])
HIVE-5637 : Sporadic minimr test failure (Navis via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535510)
* /hive/trunk/ql/src/test/queries/clientpositive/import_exported_table.q


> Sporadic minimr test failure
> 
>
> Key: HIVE-5637
> URL: https://issues.apache.org/jira/browse/HIVE-5637
> Project: Hive
>  Issue Type: Test
>  Components: Tests
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5637.1.patch.txt
>
>
> {noformat}
> ant test -Dtestcase=TestMinimrCliDriver 
> -Dqfile=import_exported_table.q,load_hdfs_file_with_space_in_the_name.q 
> -Dmodule=ql
> {noformat}
> Fails with a message like this.
> {noformat}
> Begin query: load_hdfs_file_with_space_in_the_name.q
> mkdir: cannot create directory hdfs:///tmp/test/: File exists
> Exception: Client Execution failed with error code = -1
> See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
> more logs.
> junit.framework.AssertionFailedError: Client Execution failed with error code 
> = -1
> See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
> more logs.
> at junit.framework.Assert.fail(Assert.java:47)
> at 
> org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:349)
> at 
> org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_load_hdfs_file_with_space_in_the_name(TestMinimrCliDriver.java:291)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:154)
> at junit.framework.TestCase.runBare(TestCase.java:127)
> at junit.framework.TestResult$1.protect(TestResult.java:106)
> at junit.framework.TestResult.runProtected(TestResult.java:124)
> at junit.framework.TestResult.run(TestResult.java:109)
> at junit.framework.TestCase.run(TestCase.java:118)
> at junit.framework.TestSuite.runTest(TestSuite.java:208)
> at junit.framework.TestSuite.run(TestSuite.java:203)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5628) ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with Test not end with it

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805722#comment-13805722
 ] 

Hudson commented on HIVE-5628:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2421 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2421/])
HIVE-5628 : ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should 
start with Test not end with it (Brock Noland via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535761)
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/DynamicMultiDimeCollectionTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPrunnerTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestDynamicMultiDimeCollection.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestListBucketingPrunner.java


> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with 
> Test not end with it
> --
>
> Key: HIVE-5628
> URL: https://issues.apache.org/jira/browse/HIVE-5628
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5628.patch
>
>
> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest will not be run 
> by PTest because their names end with Test, and PTest requires test class names 
> to start with Test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5577) Remove TestNegativeCliDriver script_broken_pipe1

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805379#comment-13805379
 ] 

Hudson commented on HIVE-5577:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2420 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2420/])
HIVE-5577 : Remove TestNegativeCliDriver script_broken_pipe1 (Brock Noland via 
Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535643)
* /hive/trunk/ql/src/test/queries/clientnegative/script_broken_pipe1.q
* /hive/trunk/ql/src/test/results/clientnegative/script_broken_pipe1.q.out


> Remove TestNegativeCliDriver script_broken_pipe1
> 
>
> Key: HIVE-5577
> URL: https://issues.apache.org/jira/browse/HIVE-5577
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5577.1.patch.txt
>
>
> TestNegativeCliDriver script_broken_pipe1 is extremely flaky and not a 
> terribly important test. Let's remove it.
> Failures
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/junit/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
> https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/204/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805380#comment-13805380
 ] 

Hudson commented on HIVE-5403:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2420 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2420/])
HIVE-5403: Perflogger broken due to HIVE-5403 (Vikram Dixit K via Gunther 
Hagleitner) (gunther: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535598)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java


> Move loading of filesystem, ugi, metastore client to hive session
> -
>
> Key: HIVE-5403
> URL: https://issues.apache.org/jira/browse/HIVE-5403
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0
>
> Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch, HIVE-5403.3.patch, 
> HIVE-5403.4.patch
>
>
> As part of HIVE-5184, the metastore connection and filesystem loading were done 
> as part of the Tez session so as to speed up query times while paying a cost 
> at startup. We can do this more generally in Hive so that it applies to both 
> the MapReduce and Tez sides of things.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5637) Sporadic minimr test failure

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805382#comment-13805382
 ] 

Hudson commented on HIVE-5637:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2420 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2420/])
HIVE-5637 : Sporadic minimr test failure (Navis via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535510)
* /hive/trunk/ql/src/test/queries/clientpositive/import_exported_table.q


> Sporadic minimr test failure
> 
>
> Key: HIVE-5637
> URL: https://issues.apache.org/jira/browse/HIVE-5637
> Project: Hive
>  Issue Type: Test
>  Components: Tests
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5637.1.patch.txt
>
>
> {noformat}
> ant test -Dtestcase=TestMinimrCliDriver 
> -Dqfile=import_exported_table.q,load_hdfs_file_with_space_in_the_name.q 
> -Dmodule=ql
> {noformat}
> Fails with a message like this.
> {noformat}
> Begin query: load_hdfs_file_with_space_in_the_name.q
> mkdir: cannot create directory hdfs:///tmp/test/: File exists
> Exception: Client Execution failed with error code = -1
> See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
> more logs.
> junit.framework.AssertionFailedError: Client Execution failed with error code 
> = -1
> See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
> more logs.
> at junit.framework.Assert.fail(Assert.java:47)
> at 
> org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:349)
> at 
> org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_load_hdfs_file_with_space_in_the_name(TestMinimrCliDriver.java:291)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:154)
> at junit.framework.TestCase.runBare(TestCase.java:127)
> at junit.framework.TestResult$1.protect(TestResult.java:106)
> at junit.framework.TestResult.runProtected(TestResult.java:124)
> at junit.framework.TestResult.run(TestResult.java:109)
> at junit.framework.TestCase.run(TestCase.java:118)
> at junit.framework.TestSuite.runTest(TestSuite.java:208)
> at junit.framework.TestSuite.run(TestSuite.java:203)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5619) Allow concat() to accept mixed string/binary args

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805377#comment-13805377
 ] 

Hudson commented on HIVE-5619:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2420 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2420/])
HIVE-5619 : Allow concat() to accept mixed string/binary args (Jason Dere via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535621)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_concat.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_concat.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java


> Allow concat() to accept mixed string/binary args
> -
>
> Key: HIVE-5619
> URL: https://issues.apache.org/jira/browse/HIVE-5619
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5619.1.patch
>
>
> concat() is currently strict about allowing either all binary or all 
> non-binary arguments. Loosen this to permit mixed params.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5629) Fix two javadoc failures in HCatalog

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805378#comment-13805378
 ] 

Hudson commented on HIVE-5629:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2420 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2420/])
HIVE-5629 : Fix two javadoc failures in HCatalog (Brock Noland via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535513)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/InputJobInfo.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java


> Fix two javadoc failures in HCatalog
> 
>
> Key: HIVE-5629
> URL: https://issues.apache.org/jira/browse/HIVE-5629
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5629.patch
>
>
> I am seeing two javadoc failures on HCatalog. These are not being seen by 
> PTest, and indeed I cannot reproduce them on my Mac but can on Linux. Regardless, 
> they should be fixed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5482) JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be consistent with other modules

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805381#comment-13805381
 ] 

Hudson commented on HIVE-5482:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2420 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2420/])
HIVE-5482 : JDBC should depend on httpclient.version and httpcore.version 4.1.3 
to be consistent with other modules (Vaibhav Gumashta via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535620)
* /hive/trunk/ivy/libraries.properties


> JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be 
> consistent with other modules
> ---
>
> Key: HIVE-5482
> URL: https://issues.apache.org/jira/browse/HIVE-5482
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5482.1.patch
>
>
> It currently depends on 4.2.4 and 4.2.5, which conflicts with thrift-0.9, which 
> depends on 4.1.3.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5216) Need to annotate public API in HCatalog

2013-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805383#comment-13805383
 ] 

Hudson commented on HIVE-5216:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2420 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2420/])
HIVE-5216 : Need to annotate public API in HCatalog (Eugene Koifman via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535535)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/HCatRecord.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/schema/HCatSchema.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatWriter.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatStorer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java


> Need to annotate public API in HCatalog
> ---
>
> Key: HIVE-5216
> URL: https://issues.apache.org/jira/browse/HIVE-5216
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5216.2.patch, HIVE-5216.patch
>
>
> We need to annotate which API is considered public, using something like
> @InterfaceAudience.Public
> @InterfaceStability.Evolving
> Currently this is what is considered (at a minimum) the public API:
> HCatLoader
> HCatStorer
> HCatInputFormat
> HCatOutputFormat
> HCatReader
> HCatWriter
> HCatRecord
> HCatSchema
> This is needed so that clients/dependent projects know which API they can 
> rely on and which can change without notice.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5625) Fix issue with metastore version restriction test.

2013-10-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805079#comment-13805079
 ] 

Hudson commented on HIVE-5625:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #520 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/520/])
HIVE-5625 - Fix issue with metastore version restriction test. (Vikram Dixit K 
via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535402)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java


> Fix issue with metastore version restriction test.
> --
>
> Key: HIVE-5625
> URL: https://issues.apache.org/jira/browse/HIVE-5625
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.13.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0
>
> Attachments: HIVE-5625.1.patch
>
>
> Based on Brock's comments, the change made in HIVE-5403 changed the nature of 
> the test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5625) Fix issue with metastore version restriction test.

2013-10-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805066#comment-13805066
 ] 

Hudson commented on HIVE-5625:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2419 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2419/])
HIVE-5625 - Fix issue with metastore version restriction test. (Vikram Dixit K 
via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535402)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java


> Fix issue with metastore version restriction test.
> --
>
> Key: HIVE-5625
> URL: https://issues.apache.org/jira/browse/HIVE-5625
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.13.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0
>
> Attachments: HIVE-5625.1.patch
>
>
> Based on Brock's comments, the change made in HIVE-5403 changed the nature of 
> the test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

