[ https://issues.apache.org/jira/browse/HIVE-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12976628#action_12976628 ]

Bharath R  commented on HIVE-1872:
----------------------------------

As per the "Alter" query , two DDL tasks will be created. 
When the first task execution is failed, the "taskCleanup" is internally 
stopping the VM. 


{code}
if (running.size() != 0) {
    taskCleanup();
}

public void taskCleanup() {
    // The currently existing shutdown hooks will be automatically called,
    // killing the map-reduce processes.
    // The non-MR processes will be killed as well.
    System.exit(9);
}
{code}
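For illustration, here is a minimal, self-contained sketch (not Hive code; the class and messages are invented) of why this is a problem in a long-running server: System.exit() runs the registered shutdown hooks and then terminates the whole JVM, so any other work in the same HiveServer process dies along with the failed task.

{code}
// Standalone illustration only -- names are invented for this sketch, not taken from Hive.
public class ExitDemo {
    public static void main(String[] args) {
        // A shutdown hook, analogous to the hooks Hive registers to kill MR jobs.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                System.out.println("shutdown hook: killing child processes");
            }
        });

        // Pretend this is another session's query running in the same server VM.
        Thread otherSession = new Thread() {
            public void run() {
                try {
                    Thread.sleep(60000);
                } catch (InterruptedException e) {
                    System.out.println("other session interrupted");
                }
            }
        };
        otherSession.start();

        // Simulates taskCleanup(): the hook fires and the whole JVM exits,
        // taking the unrelated session down with it.
        System.exit(9);
    }
}
{code}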

Here, System.exit(9) kills all the map-reduce and non-MR processes. Instead of killing everything, we can interrupt only the currently running tasks.

The updated code would be:

{code}
public void taskCleanup(Map<TaskResult, TaskRunner> running) {
    // Interrupt only the task runners that are still running.
    for (Map.Entry<TaskResult, TaskRunner> entry : running.entrySet()) {
        entry.getValue().interrupt();
    }
}
{code}
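Correspondingly, the call site in Driver would pass the running-task map instead of calling the no-argument version. A sketch of the assumed change (not a tested patch):

{code}
if (running.size() != 0) {
    taskCleanup(running);
}
{code}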

This change interrupts only the tasks that no longer need to be executed, instead of exiting the whole Hive process.
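Since TaskRunner extends Thread, interrupt() should stop only that runner thread; whether the interrupted tasks also need any explicit resource cleanup would still have to be verified.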

> Hive process is exiting on executing ALTER query
> ------------------------------------------------
>
>                 Key: HIVE-1872
>                 URL: https://issues.apache.org/jira/browse/HIVE-1872
>             Project: Hive
>          Issue Type: Bug
>          Components: CLI, Server Infrastructure
>    Affects Versions: 0.6.0
>         Environment: SUSE Linux Enterprise Server 10 SP2 (i586) - Kernel 
> 2.6.16.60-0.21-smp (3)
> Hadoop 0.20.1
> Hive 0.6.0
>            Reporter: Bharath R 
>
> The Hive process exits when the queries below are executed in the given order:
> 1) CREATE TABLE SAMPLETABLE(IP STRING , showtime BIGINT ) partitioned by (ds 
> string,ipz int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\040'
> 2) ALTER TABLE SAMPLETABLE add Partition(ds='sf') location 
> '/user/hive/warehouse' Partition(ipz=100) location '/user/hive/warehouse'
> After the second query executes, Hive throws the exception below and exits the process:
> 10:09:03 ERROR exec.DDLTask: FAILED: Error in metadata: table is partitioned 
> but partition spec is not specified or tab: {ipz=100}
> org.apache.hadoop.hive.ql.metadata.HiveException: table is partitioned but 
> partition spec is not specified or tab: {ipz=100}
>         at 
> org.apache.hadoop.hive.ql.metadata.Table.isValidSpec(Table.java:341)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:902)
>         at 
> org.apache.hadoop.hive.ql.exec.DDLTask.addPartition(DDLTask.java:282)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:191)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>         at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>         at 
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:114)
>         at 
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive.java:378)
>         at 
> org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:366)
>         at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:252)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
> As the "alter" query is incorrect the exception was thrown, ideally it should 
> be "ALTER TABLE SAMPLETABLE add Partition(ds='sf',ipz=100) location 
> '/user/hive/warehouse'". 
> It is not good to exit the HIVE process when the query is incorrect.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
