Build failed in Hudson: Hive-trunk-h0.20 #463

2011-01-02 Thread Apache Hudson Server
See 

--
[...truncated 5886 lines...]
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}, 
-234, 1.0, -2.5]
[junit] Testing protocol: org.apache.thrift.protocol.TBinaryProtocol
[junit] TypeName = 
struct<_hello:int,2bye:array,another:map,nhello:int,d:double,nd:double>
[junit] bytes 
=x08xffxffx00x00x00xeax0fxffxfex0bx00x00x00x02x00x00x00x0bx66x69x72x73x74x53x74x72x69x6ex67x00x00x00x0cx73x65x63x6fx6ex64x53x74x72x69x6ex67x0dxffxfdx0bx08x00x00x00x02x00x00x00x08x66x69x72x73x74x4bx65x79x00x00x00x01x00x00x00x09x73x65x63x6fx6ex64x4bx65x79x00x00x00x02x08xffxfcxffxffxffx16x04xffxfbx3fxf0x00x00x00x00x00x00x04xffxfaxc0x04x00x00x00x00x00x00x00
[junit] o class = class java.util.ArrayList
[junit] o size = 6
[junit] o[0] class = class java.lang.Integer
[junit] o[1] class = class java.util.ArrayList
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}, 
-234, 1.0, -2.5]
[junit] Testing protocol: org.apache.thrift.protocol.TJSONProtocol
[junit] TypeName = 
struct<_hello:int,2bye:array,another:map,nhello:int,d:double,nd:double>
[junit] bytes 
=x7bx22x2dx31x22x3ax7bx22x69x33x32x22x3ax32x33x34x7dx2cx22x2dx32x22x3ax7bx22x6cx73x74x22x3ax5bx22x73x74x72x22x2cx32x2cx22x66x69x72x73x74x53x74x72x69x6ex67x22x2cx22x73x65x63x6fx6ex64x53x74x72x69x6ex67x22x5dx7dx2cx22x2dx33x22x3ax7bx22x6dx61x70x22x3ax5bx22x73x74x72x22x2cx22x69x33x32x22x2cx32x2cx7bx22x66x69x72x73x74x4bx65x79x22x3ax31x2cx22x73x65x63x6fx6ex64x4bx65x79x22x3ax32x7dx5dx7dx2cx22x2dx34x22x3ax7bx22x69x33x32x22x3ax2dx32x33x34x7dx2cx22x2dx35x22x3ax7bx22x64x62x6cx22x3ax31x2ex30x7dx2cx22x2dx36x22x3ax7bx22x64x62x6cx22x3ax2dx32x2ex35x7dx7d
[junit] bytes in text 
={"-1":{"i32":234},"-2":{"lst":["str",2,"firstString","secondString"]},"-3":{"map":["str","i32",2,{"firstKey":1,"secondKey":2}]},"-4":{"i32":-234},"-5":{"dbl":1.0},"-6":{"dbl":-2.5}}
[junit] o class = class java.util.ArrayList
[junit] o size = 6
[junit] o[0] class = class java.lang.Integer
[junit] o[1] class = class java.util.ArrayList
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}, 
-234, 1.0, -2.5]
[junit] Testing protocol: 
org.apache.hadoop.hive.serde2.thrift.TCTLSeparatedProtocol
[junit] TypeName = 
struct<_hello:int,2bye:array,another:map,nhello:int,d:double,nd:double>
[junit] bytes 
=x32x33x34x01x66x69x72x73x74x53x74x72x69x6ex67x02x73x65x63x6fx6ex64x53x74x72x69x6ex67x01x66x69x72x73x74x4bx65x79x03x31x02x73x65x63x6fx6ex64x4bx65x79x03x32x01x2dx32x33x34x01x31x2ex30x01x2dx32x2ex35
[junit] bytes in text 
=234firstStringsecondStringfirstKey1secondKey2-2341.0-2.5
[junit] o class = class java.util.ArrayList
[junit] o size = 6
[junit] o[0] class = class java.lang.Integer
[junit] o[1] class = class java.util.ArrayList
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}, 
-234, 1.0, -2.5]
[junit] Beginning Test testTBinarySortableProtocol:
[junit] Testing struct test { double hello}
[junit] Testing struct test { i32 hello}
[junit] Testing struct test { i64 hello}
[junit] Testing struct test { string hello}
[junit] Testing struct test { string hello, double another}
[junit] Test testTBinarySortableProtocol passed!
[junit] bytes in text =234  firstStringsecondString
firstKey1secondKey2>
[junit] compare to=234  firstStringsecondString
firstKey1secondKey2>
[junit] o class = class java.util.ArrayList
[junit] o size = 3
[junit] o[0] class = class java.lang.Integer
[junit] o[1] class = class java.util.ArrayList
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}]
[junit] bytes in text =234  firstStringsecondString
firstKey1secondKey2>
[junit] compare to=234  firstStringsecondString
firstKey1secondKey2>
[junit] o class = class java.util.ArrayList
[junit] o size = 3
[junit] o = [234, null, {firstKey=1, secondKey=2}]
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 0.847 sec
[junit] Running org.apache.hadoop.hive.serde2.lazy.TestLazyArrayMapStruct
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.484 sec
[junit] Running org.apache.hadoop.hive.serde2.lazy.TestLazyPrimitive
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.438 sec
[junit] Running org.apache.hadoop.hive.serde2.lazy.TestLazySimpleSerDe
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.606 sec
[junit] Running org.apache.hadoop.hive.serde2.lazybinary.TestLazyBinarySerDe
[junit] Beginning Test TestLazyBinarySerD

[jira] Created: (HIVE-1874) fix HBase filter pushdown broken by HIVE-1638

2011-01-02 Thread John Sichi (JIRA)
fix HBase filter pushdown broken by HIVE-1638
-

 Key: HIVE-1874
 URL: https://issues.apache.org/jira/browse/HIVE-1874
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.7.0
Reporter: John Sichi
 Fix For: 0.7.0


See comments at end of HIVE-1660 for what happened.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1660) Change get_partitions_ps to pass partition filter to database

2011-01-02 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12976605#action_12976605
 ] 

John Sichi commented on HIVE-1660:
--

Amendment is in HIVE-1874.


> Change get_partitions_ps to pass partition filter to database
> -
>
> Key: HIVE-1660
> URL: https://issues.apache.org/jira/browse/HIVE-1660
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 0.7.0
>Reporter: Ajay Kidave
>Assignee: Paul Yang
> Fix For: 0.7.0
>
> Attachments: HIVE-1660.1.patch, HIVE-1660.2.patch, HIVE-1660.3.patch, 
> HIVE-1660.4.patch, HIVE-1660_regex.patch
>
>
> Support for doing partition pruning by passing the partition filter to the 
> database is added by HIVE-1609. Changing get_partitions_ps to use this could 
> result in performance improvement  for tables having large number of 
> partitions. A listPartitionNamesByFilter API might be required for 
> implementing this for use from Hive.




[jira] Updated: (HIVE-1874) fix HBase filter pushdown broken by HIVE-1638

2011-01-02 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-1874:
-

Attachment: HIVE-1874.1.patch

> fix HBase filter pushdown broken by HIVE-1638
> -
>
> Key: HIVE-1874
> URL: https://issues.apache.org/jira/browse/HIVE-1874
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.7.0
>Reporter: John Sichi
> Fix For: 0.7.0
>
> Attachments: HIVE-1874.1.patch
>
>
> See comments at end of HIVE-1660 for what happened.




[jira] Assigned: (HIVE-1874) fix HBase filter pushdown broken by HIVE-1638

2011-01-02 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi reassigned HIVE-1874:


Assignee: John Sichi

> fix HBase filter pushdown broken by HIVE-1638
> -
>
> Key: HIVE-1874
> URL: https://issues.apache.org/jira/browse/HIVE-1874
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.7.0
>Reporter: John Sichi
>Assignee: John Sichi
> Fix For: 0.7.0
>
> Attachments: HIVE-1874.1.patch
>
>
> See comments at end of HIVE-1660 for what happened.




[jira] Updated: (HIVE-1874) fix HBase filter pushdown broken by HIVE-1638

2011-01-02 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-1874:
-

Status: Patch Available  (was: Open)

> fix HBase filter pushdown broken by HIVE-1638
> -
>
> Key: HIVE-1874
> URL: https://issues.apache.org/jira/browse/HIVE-1874
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.7.0
>Reporter: John Sichi
>Assignee: John Sichi
> Fix For: 0.7.0
>
> Attachments: HIVE-1874.1.patch
>
>
> See comments at end of HIVE-1660 for what happened.




[jira] Commented: (HIVE-1872) Hive process is exiting on executing ALTER query

2011-01-02 Thread Bharath R (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12976628#action_12976628
 ] 

Bharath R commented on HIVE-1872:
--

For this ALTER query, two DDL tasks are created. When the first task fails, 
"taskCleanup" internally shuts down the entire VM:


{code}
if (running.size() != 0) {
  taskCleanup();
}

public void taskCleanup() {
  // The currently existing shutdown hooks will be automatically called,
  // killing the map-reduce processes.
  // The non-MR processes will be killed as well.
  System.exit(9);
}
{code}

Here, System.exit(9) is used to kill all the map-reduce and non-MR processes. 
Instead of killing all of them, we can interrupt only the currently running 
tasks.

The updated code would be:

{code}
public void taskCleanup(Map<TaskResult, TaskRunner> running) {
  // Interrupt each of the remaining running tasks
  // instead of exiting the whole VM.
  for (Map.Entry<TaskResult, TaskRunner> entry : running.entrySet()) {
    TaskRunner runner = entry.getValue();
    runner.interrupt();
  }
}
{code}

This interrupts only the tasks that no longer need to be executed, without 
killing the VM.
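The idea above can be demonstrated with a self-contained sketch. Note that TaskResult and TaskRunner here are minimal hypothetical stand-ins for the Hive classes, not the real implementations; the point is only that interrupting the remaining runners stops them promptly while the hosting JVM stays alive.

```java
import java.util.HashMap;
import java.util.Map;

public class TaskCleanupSketch {

    // Stand-in for org.apache.hadoop.hive.ql.exec.TaskResult (hypothetical).
    static class TaskResult {}

    // Stand-in for a task runner thread: sleeps until interrupted.
    static class TaskRunner extends Thread {
        @Override
        public void run() {
            try {
                Thread.sleep(60_000); // simulate a long-running task
            } catch (InterruptedException e) {
                // interrupted by cleanup -> exit the run loop cleanly
            }
        }
    }

    // Proposed cleanup: interrupt each remaining runner rather than
    // calling System.exit(9) and taking down the whole process.
    static void taskCleanup(Map<TaskResult, TaskRunner> running) {
        for (Map.Entry<TaskResult, TaskRunner> entry : running.entrySet()) {
            entry.getValue().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Map<TaskResult, TaskRunner> running = new HashMap<>();
        TaskRunner r = new TaskRunner();
        running.put(new TaskResult(), r);
        r.start();

        taskCleanup(running);
        r.join(5_000); // the interrupted runner should finish promptly
        System.out.println("runner alive after cleanup: " + r.isAlive());
        // prints: runner alive after cleanup: false
        // ...and, unlike System.exit(9), the JVM itself keeps running.
    }
}
```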

> Hive process is exiting on executing ALTER query
> 
>
> Key: HIVE-1872
> URL: https://issues.apache.org/jira/browse/HIVE-1872
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Server Infrastructure
>Affects Versions: 0.6.0
> Environment: SUSE Linux Enterprise Server 10 SP2 (i586) - Kernel 
> 2.6.16.60-0.21-smp (3)
> Hadoop 0.20.1
> Hive 0.6.0
>Reporter: Bharath R 
>
> Hive process is exiting on executing the below queries in the same order as 
> mentioned
> 1) CREATE TABLE SAMPLETABLE(IP STRING , showtime BIGINT ) partitioned by (ds 
> string,ipz int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\040'
> 2) ALTER TABLE SAMPLETABLE add Partition(ds='sf') location 
> '/user/hive/warehouse' Partition(ipz=100) location '/user/hive/warehouse'
> After the second query execution , the hive throws the below exception and 
> exiting the process
> 10:09:03 ERROR exec.DDLTask: FAILED: Error in metadata: table is partitioned 
> but partition spec is not specified or tab: {ipz=100}
> org.apache.hadoop.hive.ql.metadata.HiveException: table is partitioned but 
> partition spec is not specified or tab: {ipz=100}
> at 
> org.apache.hadoop.hive.ql.metadata.Table.isValidSpec(Table.java:341)
> at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:902)
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.addPartition(DDLTask.java:282)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:191)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
> at 
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:114)
> at 
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive.java:378)
> at 
> org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:366)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:252)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> The exception was thrown because the ALTER query is incorrect; it should 
> instead be "ALTER TABLE SAMPLETABLE add Partition(ds='sf',ipz=100) location 
> '/user/hive/warehouse'". 
> However, the Hive process should not exit just because a query is incorrect.
