[jira] [Updated] (HIVE-17498) Does hive have mr-nativetask support refer to MAPREDUCE-2841

2017-09-11 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-17498:
-
Description: 
I tried to implement a HivePlatform that extends 
org.apache.hadoop.mapred.nativetask.Platform:
{code}
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.mapred.nativetask;

import org.apache.hadoop.hive.ql.io.HiveKey;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.nativetask.serde.INativeSerializer;
import org.apache.log4j.Logger;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class HivePlatform extends Platform {

  private static final Logger LOG = Logger.getLogger(HivePlatform.class);

  public HivePlatform() {
  }

  @Override
  public void init() throws IOException {
    registerKey("org.apache.hadoop.hive.ql.io.HiveKey", HiveKeySerializer.class);
    LOG.info("Hive platform inited");
  }

  @Override
  public String name() {
    return "Hive";
  }

  @Override
  public boolean support(String keyClassName, INativeSerializer serializer, JobConf job) {
    if (keyClassNames.contains(keyClassName) && serializer instanceof INativeComparable) {
      String nativeComparator = Constants.NATIVE_MAPOUT_KEY_COMPARATOR + "." + keyClassName;
      job.set(nativeComparator, "HivePlatform.HivePlatform::HiveKeyComparator");
      if (job.get(Constants.NATIVE_CLASS_LIBRARY_BUILDIN) == null) {
        job.set(Constants.NATIVE_CLASS_LIBRARY_BUILDIN, "HivePlatform=libnativetask.so");
      }
      return true;
    } else {
      return false;
    }
  }

  @Override
  public boolean define(Class comparatorClass) {
    return false;
  }

  public static class HiveKeySerializer implements INativeComparable, INativeSerializer<HiveKey> {

    public HiveKeySerializer() throws ClassNotFoundException, SecurityException, NoSuchMethodException {
    }

    @Override
    public int getLength(HiveKey w) throws IOException {
      return 4 + w.getLength();
    }

    @Override
    public void serialize(HiveKey w, DataOutput out) throws IOException {
      w.write(out);
    }

    @Override
    public void deserialize(DataInput in, int length, HiveKey w) throws IOException {
      w.readFields(in);
    }
  }
}
{code}
It throws the following exception:
{code}
Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was: Native output collector cannot be loaded;
	at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:415)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:442)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1700)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Native output collector cannot be loaded;
	at org.apache.hadoop.mapred.nativetask.NativeMapOutputCollectorDelegator.init(NativeMapOutputCollectorDelegator.java:165)
	at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402)
	... 7 more
Caused by: java.io.IOException: /PartitionBucket.h:56: pool is NULL, or comparator is not set
	/usr/local/hadoop-2.7.3-yarn/lib/native/libnativetask.so.1.0.0(_ZN10NativeTask15HadoopExceptionC2ERKSs+0x76) [0x7ffdcbba6436]
	/usr/local/hadoop-2.7.3-yarn/lib/native/libnativetask.so.1.0.0(_ZN10NativeTask18MapOutputCollector4initEjjPFiPKcjS2_jEPNS_14ICombineRunnerE+0x36a) [0x7ffdcbb9ad6a]
	/usr/local/hadoop-2.7.3-yarn/lib/native/libnativetask.so.1.0.0(_ZN10NativeTask18MapOutputCollector9configureEPNS_6ConfigE+0x24a) [0x7ffdcbb9b37a]
	/usr/local/hadoop-2.7.3-yarn/lib/native/libnativetask.so.1.0.0(_ZN10NativeTask23MCollectorOutputHandler9configureEPNS_6ConfigE+0x80) [0x7ffdcbb91b40]
	/usr/local/hadoop-2.7.3-yarn/lib/native/libnativetask.so.1.0.0(_ZN10NativeTask12BatchHandler7onSetupEPNS_6ConfigEPcjS3_j+0xe9) [0x7ffdcbb90b29]
{code}

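As an aside, the `4 + w.getLength()` in getLength() above reflects how BytesWritable-style keys are framed: a 4-byte length prefix followed by the raw key bytes. The sketch below demonstrates that framing with plain JDK streams; `KeyFraming` is an illustrative stand-in, not a Hive or Hadoop class.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class KeyFraming {

    // Mirrors HiveKeySerializer.getLength(): a 4-byte length prefix
    // plus the raw key bytes (w.getLength()).
    static byte[] serialize(byte[] key) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(key.length);   // the 4 bytes accounted for in getLength()
        out.write(key);             // the raw key bytes
        out.flush();
        return bos.toByteArray();
    }

    // Mirrors deserialize(): read the length prefix, then the key bytes.
    static byte[] deserialize(byte[] framed) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(framed));
        int len = in.readInt();
        byte[] key = new byte[len];
        in.readFully(key);
        return key;
    }

    public static void main(String[] args) throws IOException {
        byte[] key = "hive-key".getBytes("UTF-8");
        byte[] framed = serialize(key);
        System.out.println(framed.length == 4 + key.length); // true
        System.out.println(new String(deserialize(framed), "UTF-8")); // hive-key
    }
}
```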
[jira] [Commented] (HIVE-8251) An error occurred when trying to close the Operator running your custom script.

2016-06-03 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314087#comment-15314087
 ] 

Feng Yuan commented on HIVE-8251:
-

Hi @Navis, I tried this and it didn't work.

> An error occurred when trying to close the Operator running your custom 
> script.
> ---
>
> Key: HIVE-8251
> URL: https://issues.apache.org/jira/browse/HIVE-8251
> Project: Hive
>  Issue Type: Bug
>  Components: Contrib, Query Processor
>Affects Versions: 0.12.0
> Environment: MapR distribution
>Reporter: someshwar kale
> Attachments: HIVE-8251.1.patch.txt
>
>
> We are trying to plug a custom map-reduce script into our Hive, but are 
> facing the error below:
> java.lang.RuntimeException: Hive Runtime Error while closing operators
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:240)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.mapred.Child.main(Child.java:264)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: 
> An error occurred when trying to close the Operator running your custom 
> script.
> at 
> org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:514)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:613)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:613)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:613)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:207)
> ... 8 more
> FAILED: Execution Error, return code 20003 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask. An error occurred when trying 
> to close the Operator running your custom script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13781) Tez Job failed with FileNotFoundException when partition dir doesnt exists

2016-05-19 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290604#comment-15290604
 ] 

Feng Yuan edited comment on HIVE-13781 at 5/19/16 7:30 AM:
---

Hi [~ashutoshc], [~prasanth_j], [~vikram.dixit], [~sershe], [~gopalv]: under MR 
this case works fine; should Tez support it as well?
Detail:
When the metastore partition information and the storage directories diverge 
(some directories do not exist), MR handles it gracefully. Since Hive 2.0 
recommends Tez, why not make it more compatible with our business workloads?


was (Author: feng yuan):
Hi [~ashutoshc]: under MR this case works fine; should Tez support it as well?
Detail:
When the metastore partition information and the storage directories diverge 
(some directories do not exist), MR handles it gracefully. Since Hive 2.0 
recommends Tez, why not make it more compatible with our business workloads?

> Tez Job failed with FileNotFoundException when partition dir doesnt exists 
> ---
>
> Key: HIVE-13781
> URL: https://issues.apache.org/jira/browse/HIVE-13781
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 0.14.0, 2.0.0
> Environment: hive 0.14.0 ,tez-0.5.2,hadoop 2.6.0
>Reporter: Feng Yuan
>
> When I have a table a partitioned by "day", and the metastore has partitions 
> day=20160501 and day=20160502 but partition 20160501's directory does not 
> exist, running hive -e "select day,count(*) from a where xx=xx group by day" 
> on the Tez engine throws a FileNotFoundException.
> MR works.
> Repro example:
> CREATE EXTERNAL TABLE `a`(
>   `a` string)
> PARTITIONED BY ( 
>   `l_date` string);
> insert overwrite table a partition(l_date='2016-04-08') values (1),(2);
> insert overwrite table a partition(l_date='2016-04-09') values (1),(2);
> hadoop dfs -rm -r -f /warehouse/a/l_date=2016-04-09
> select l_date,count(*) from a where a='1' group by l_date;
> error:
> ut: a initializer failed, vertex=vertex_1463493135662_10445_1_00 [Map 1], 
> org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: 
> hdfs://bfdhadoopcool/warehouse/test.db/a/l_date=2015-04-09
>   at 
> org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
>   at 
> org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
>   at 
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:300)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:402)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:129)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
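
Until Tez split generation tolerates the dangling path, one possible workaround is to re-align the metastore with HDFS before querying. The statements below are a hedged sketch based on the repro above (table `a`, partition column `l_date`), not a fix from the issue itself:

```sql
-- Drop the partition whose directory was deleted, so split generation
-- never lists the missing path:
ALTER TABLE a DROP IF EXISTS PARTITION (l_date='2016-04-09');

-- Alternatively, recreate the directory the metastore still expects
-- (an empty directory is enough):
-- hadoop fs -mkdir -p /warehouse/a/l_date=2016-04-09
```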





[jira] [Comment Edited] (HIVE-13781) Tez Job failed with FileNotFoundException when partition dir doesnt exists

2016-05-19 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290604#comment-15290604
 ] 

Feng Yuan edited comment on HIVE-13781 at 5/19/16 7:21 AM:
---

Hi [~ashutoshc]: under MR this case works fine; should Tez support it as well?
Detail:
When the metastore partition information and the storage directories diverge 
(some directories do not exist), MR handles it gracefully. Since Hive 2.0 
recommends Tez, why not make it more compatible with our business workloads?


was (Author: feng yuan):
Hi [~ashutoshc]: under MR this case works fine; should Tez support it as well?






[jira] [Commented] (HIVE-13781) Tez Job failed with FileNotFoundException when partition dir doesnt exists

2016-05-19 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290604#comment-15290604
 ] 

Feng Yuan commented on HIVE-13781:
--

Hi [~ashutoshc]: under MR this case works fine; should Tez support it as well?






[jira] [Updated] (HIVE-13781) Tez Job failed with FileNotFoundException when partition dir doesnt exists

2016-05-19 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-13781:
-
Affects Version/s: 2.0.0






[jira] [Commented] (HIVE-10729) Query failed when select complex columns from joinned table (tez map join only)

2016-05-10 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277764#comment-15277764
 ] 

Feng Yuan commented on HIVE-10729:
--

Is there a patch for 0.14.0?

> Query failed when select complex columns from joinned table (tez map join 
> only)
> ---
>
> Key: HIVE-10729
> URL: https://issues.apache.org/jira/browse/HIVE-10729
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.0
>Reporter: Selina Zhang
>Assignee: Matt McCline
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-10729.03.patch, HIVE-10729.04.patch, 
> HIVE-10729.05.patch, HIVE-10729.1.patch, HIVE-10729.2.patch
>
>
> When a map join happens and the projection columns include complex data 
> types, the query fails.
> Steps to reproduce:
> {code:sql}
> hive> set hive.auto.convert.join;
> hive.auto.convert.join=true
> hive> desc foo;
> a	array<int>
> hive> select * from foo;
> [1,2]
> hive> desc src_int;
> key   int
> value string
> hive> select * from src_int where key=2;
> 2	val_2
> hive> select * from foo join src_int src  on src.key = foo.a[1];
> {code}
> Query will fail with stack trace
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryArray cannot be cast to 
> [Ljava.lang.Object;
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector.getList(StandardListObjectInspector.java:111)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serialize(LazySimpleSerDe.java:314)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serializeField(LazySimpleSerDe.java:262)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.doSerialize(LazySimpleSerDe.java:246)
>   at 
> org.apache.hadoop.hive.serde2.AbstractEncodingAwareSerDe.serialize(AbstractEncodingAwareSerDe.java:50)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:692)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:644)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:676)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:754)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:386)
>   ... 23 more
> {noformat}
> Similar error when projection columns include a map:
> {code:sql}
> hive> CREATE TABLE test (a INT, b MAP<INT, STRING>) STORED AS ORC;
> hive> INSERT OVERWRITE TABLE test SELECT 1, MAP(1, "val_1", 2, "val_2") FROM 
> src LIMIT 1;
> hive> select * from src join test where src.key=test.a;
> {code}





[jira] [Commented] (HIVE-11051) Hive 1.2.0 MapJoin w/Tez - LazyBinaryArray cannot be cast to [Ljava.lang.Object;

2016-05-10 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277763#comment-15277763
 ] 

Feng Yuan commented on HIVE-11051:
--

Is there a patch for 0.14.0?

> Hive 1.2.0  MapJoin w/Tez - LazyBinaryArray cannot be cast to 
> [Ljava.lang.Object;
> -
>
> Key: HIVE-11051
> URL: https://issues.apache.org/jira/browse/HIVE-11051
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers, Tez
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Greg Senia
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-11051.01.patch, HIVE-11051.02.patch, 
> problem_table_joins.tar.gz
>
>
> I tried to apply HIVE-10729, which did not solve the issue.
> The following exception is thrown on a Tez MapJoin with Hive 1.2.0 and Tez 
> 0.5.4/0.5.3:
> {code}
> Status: Running (Executing on YARN cluster with App id 
> application_1434641270368_1038)
> 
> VERTICES  STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  
> KILLED
> 
> Map 1 ..   SUCCEEDED  3  300   0  
>  0
> Map 2 ... FAILED  3  102   7  
>  0
> 
> VERTICES: 01/02  [=>>-] 66%   ELAPSED TIME: 7.39 s
>  
> 
> Status: Failed
> Vertex failed, vertexName=Map 2, vertexId=vertex_1434641270368_1038_2_01, 
> diagnostics=[Task failed, taskId=task_1434641270368_1038_2_01_02, 
> diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running 
> task:java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
> {"cnctevn_id":"002245282386","svcrqst_id":"003627217285","svcrqst_crt_dts":"2015-04-23
>  11:54:39.238357","subject_seq_no":1,"plan_component":"HMOM1 
> ","cust_segment":"RM 
> ","cnctyp_cd":"001","cnctmd_cd":"D02","cnctevs_cd":"007","svcrtyp_cd":"335","svrstyp_cd":"088","cmpltyp_cd":"
>  ","catsrsn_cd":"","apealvl_cd":" 
> ","cnstnty_cd":"001","svcrqst_asrqst_ind":"Y","svcrqst_rtnorig_in":"N","svcrqst_vwasof_dt":"null","sum_reason_cd":"98","sum_reason":"Exclude","crsr_master_claim_index":null,"svcrqct_cds":["
>"],"svcrqst_lupdt":"2015-04-23 
> 22:14:01.288132","crsr_lupdt":null,"cntevsds_lupdt":"2015-04-23 
> 11:54:40.740061","ignore_me":1,"notes":null}
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
> {"cnctevn_id":"002245282386","svcrqst_id":"003627217285","svcrqst_crt_dts":"2015-04-23
>  11:54:39.238357","subject_seq_no":1,"plan_component":"HMOM1 
> ","cust_segment":"RM 
> ","cnctyp_cd":"001","cnctmd_cd":"D02","cnctevs_cd":"007","svcrtyp_cd":"335","svrstyp_cd":"088","cmpltyp_cd":"
>  ","catsrsn_cd":"","apealvl_cd":" 
> ","cnstnty_cd":"001","svcrqst_asrqst_ind":"Y","svcrqst_rtnorig_in":"N","svcrqst_vwasof_dt":"null","sum_reason_cd":"98","sum_reason":"Exclude","crsr_master_claim_index":null,"svcrqct_cds":["
>"],"svcrqst_lupdt":"2015-04-23 
> 

[jira] [Commented] (HIVE-12551) Fix several kryo exceptions in branch-1

2016-01-17 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15104068#comment-15104068
 ] 

Feng Yuan commented on HIVE-12551:
--

Can you take a look at this, please? [~xuefuz], [~serganch], [~pchag]

> Fix several kryo exceptions in branch-1
> ---
>
> Key: HIVE-12551
> URL: https://issues.apache.org/jira/browse/HIVE-12551
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: serialization
> Fix For: 1.3.0
>
> Attachments: HIVE-12551.1.patch, test case.zip
>
>
> HIVE-11519, HIVE-12174 and the following exception are all caused by 
> unregistered classes or serializers. HIVE-12175 should have fixed these 
> issues for master branch.
> {code}
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> java.lang.NullPointerException
> Serialization trace:
> chidren (org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc)
> expr (org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor)
> childExpressions 
> (org.apache.hadoop.hive.ql.exec.vector.expressions.gen.FilterStringColumnBetween)
> conditionEvaluator 

[jira] [Commented] (HIVE-12551) Fix several kryo exceptions in branch-1

2015-12-27 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15072479#comment-15072479
 ] 

Feng Yuan commented on HIVE-12551:
--

Hi [~prasanth_j], are you still following this issue these days?

> Fix several kryo exceptions in branch-1
> ---
>
> Key: HIVE-12551
> URL: https://issues.apache.org/jira/browse/HIVE-12551
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: serialization
> Fix For: 1.3.0
>
> Attachments: HIVE-12551.1.patch, test case.zip
>
>
> HIVE-11519, HIVE-12174 and the following exception are all caused by 
> unregistered classes or serializers. HIVE-12175 should have fixed these 
> issues on the master branch.
> {code}
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> java.lang.NullPointerException
> Serialization trace:
> chidren (org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc)
> expr (org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor)
> childExpressions 
> (org.apache.hadoop.hive.ql.exec.vector.expressions.gen.FilterStringColumnBetween)
> conditionEvaluator 
> (org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator)
> childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:367)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:276)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:139)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:672)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.deserializeObjectByKryo(Utilities.java:1087)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.deserializePlan(Utilities.java:976)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.deserializePlan(Utilities.java:990)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:426)
>   ... 27 more
> Caused by: java.lang.NullPointerException
>   at java.util.Arrays$ArrayList.size(Arrays.java:3818)
>   at 

[jira] [Updated] (HIVE-12551) Fix several kryo exceptions in branch-1

2015-12-11 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-12551:
-
Attachment: test case.zip

> Fix several kryo exceptions in branch-1
> ---
>
> Key: HIVE-12551
> URL: https://issues.apache.org/jira/browse/HIVE-12551
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: serialization
> Fix For: 1.3.0
>
> Attachments: HIVE-12551.1.patch, test case.zip
>
>
> HIVE-11519, HIVE-12174 and the following exception are all caused by 
> unregistered classes or serializers. HIVE-12175 should have fixed these 
> issues for master branch.

[jira] [Commented] (HIVE-12551) Fix several kryo exceptions in branch-1

2015-12-11 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052521#comment-15052521
 ] 

Feng Yuan commented on HIVE-12551:
--

Hi [~prasanth_j], I uploaded the test case. Please take a look when you have 
time. Thanks!

> Fix several kryo exceptions in branch-1
> ---
>
> Key: HIVE-12551
> URL: https://issues.apache.org/jira/browse/HIVE-12551
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: serialization
> Fix For: 1.3.0
>
> Attachments: HIVE-12551.1.patch, test case.zip
>
>
> HIVE-11519, HIVE-12174 and the following exception are all caused by 
> unregistered classes or serializers. HIVE-12175 should have fixed these 
> issues for master branch.

[jira] [Commented] (HIVE-12551) Fix several kryo exceptions in branch-1

2015-12-08 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15047815#comment-15047815
 ] 

Feng Yuan commented on HIVE-12551:
--

Hi [~prasanth_j], I pulled the newest source from branch-1 on GitHub and 
tried my HQL, but I still hit the issue:
which: no hbase in 
(/opt/java/:/opt/hadoop/hadoop-2.6.0/bin:/opt/Python-2.7/bin:/opt/java/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/Ice-3.3/bin:/opt/dell/srvadmin/bin:/opt/hadoop/hadoop-2.6.0/bin:/opt/hadoop/hadoop-2.6.0/sbin:/opt/hadoop/bin)

Logging initialized using configuration in 
jar:file:/opt/hadoop/yuanfeng/apache-hive-1.3.0-SNAPSHOT-bin/lib/hive-common-1.3.0-SNAPSHOT.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hadoop/hadoop-2.6.0/lib/tez-0.5.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Query ID = hadoop_20151208181528_d346dc26-18fb-4267-93e4-34c79f6759bd
Total jobs = 8
Stage-6 is selected by condition resolver.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hadoop/hadoop-2.6.0/lib/tez-0.5.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Execution log at: 
/tmp/hadoop/hadoop_20151208181528_d346dc26-18fb-4267-93e4-34c79f6759bd.log
2015-12-08 18:15:44 Starting to launch local task to process map join;  
maximum memory = 1908932608
2015-12-08 18:15:46 Dump the side-table for tag: 1 with group count: 1 into 
file: 
file:/tmp/hadoop/71586bb3-ea53-4edf-94d0-cc17094c60e9/hive_2015-12-08_18-15-28_261_2150907413884186817-1/-local-10015/HashTable-Stage-2/MapJoin-mapfile21--.hashtable
2015-12-08 18:15:46 Uploaded 1 File to: 
file:/tmp/hadoop/71586bb3-ea53-4edf-94d0-cc17094c60e9/hive_2015-12-08_18-15-28_261_2150907413884186817-1/-local-10015/HashTable-Stage-2/MapJoin-mapfile21--.hashtable
 (293 bytes)
2015-12-08 18:15:46 End of local task; Time Taken: 1.632 sec.
Execution completed successfully
MapredLocal task succeeded
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hadoop/hadoop-2.6.0/lib/tez-0.5.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Execution log at: 
/tmp/hadoop/hadoop_20151208181528_d346dc26-18fb-4267-93e4-34c79f6759bd.log
2015-12-08 18:15:52 Starting to launch local task to process map join;  
maximum memory = 1908932608
2015-12-08 18:15:53 Dump the side-table for tag: 1 with group count: 1 into 
file: 
file:/tmp/hadoop/71586bb3-ea53-4edf-94d0-cc17094c60e9/hive_2015-12-08_18-15-28_261_2150907413884186817-1/-local-10019/HashTable-Stage-12/MapJoin-mapfile61--.hashtable
2015-12-08 18:15:53 Uploaded 1 File to: 
file:/tmp/hadoop/71586bb3-ea53-4edf-94d0-cc17094c60e9/hive_2015-12-08_18-15-28_261_2150907413884186817-1/-local-10019/HashTable-Stage-12/MapJoin-mapfile61--.hashtable
 (293 bytes)
2015-12-08 18:15:53 End of local task; Time Taken: 1.333 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 8
Number of reduce tasks not specified. Estimated from input data size: 24
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1449545133527_0033, Tracking URL = 
http://bjhlg-24p2-113-hadoop03:8088/proxy/application_1449545133527_0033/
Kill Command = /opt/hadoop/hadoop-2.6.0/bin/hadoop job  -kill 
job_1449545133527_0033
Hadoop job information for Stage-6: number of mappers: 8; number of reducers: 24
2015-12-08 18:16:14,666 Stage-6 map = 0%,  reduce = 0%
2015-12-08 18:16:28,249 Stage-6 map = 13%,  reduce = 0%, Cumulative CPU 19.93 
sec
2015-12-08 18:16:32,476 Stage-6 map = 15%,  reduce = 0%, Cumulative CPU 33.7 sec
2015-12-08 18:16:34,591 Stage-6 map = 19%,  reduce = 0%, Cumulative CPU 33.7 sec
2015-12-08 18:16:36,726 

[jira] [Commented] (HIVE-12530) Merge join in mutiple subsquence join and a mapjoin in it in mr model

2015-11-29 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15031251#comment-15031251
 ] 

Feng Yuan commented on HIVE-12530:
--

Hi [~vikram.dixit], could you please look at this issue?
I think it is similar to HIVE-9832. Thank you!

> Merge join in mutiple subsquence join and a mapjoin in it in mr model
> -
>
> Key: HIVE-12530
> URL: https://issues.apache.org/jira/browse/HIVE-12530
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1
>Reporter: Feng Yuan
> Fix For: 2.00
>
>
> sample hql:
> select  A.state_date, 
>A.customer, 
>A.channel_2,
>A.id,
>A.pid,
>A.type,
>A.pv,
>A.uv,
>A.visits,
>if(C.stay_visits is null,0,C.stay_visits) as stay_visits,
>A.stay_time,
>if(B.bounce is null,0,B.bounce) as bounce
>  from
>  (select a.state_date, 
> a.customer, 
> b.url as channel_2,
> b.id,
> b.pid,
> b.type,
> count(1) as pv,
> count(distinct a.gid) uv,
> count(distinct a.session_id) as visits,
> sum(a.stay_time) as stay_time
>from   
>( select state_date, 
>customer, 
>gid,
>session_id,
>ep,
>stay_time
> from bdi_fact.mid_pageview_dt0
> where l_date ='$v_date'
>   )a
>   join
>   (select l_date as state_date ,
>   url,
>   id,
>   pid,
>   type,
>   cid
>from bdi_fact.frequency_channel
>where l_date ='$v_date'
>and type ='2'
>and dr='0'
>   )b
>on  a.customer=b.cid  
>where a.ep  rlike b.url
>group by a.state_date, a.customer, b.url,b.id,b.pid,b.type
>)A
>   
> left outer join
>(   select 
>c.state_date ,
>c.customer ,
>d.url as channel_2,
>d.id,
>sum(pagedepth) as bounce
> from
>   ( select 
>   t1.state_date ,
>   t1.customer ,
>   t1.session_id,
>   t1.ep,
>   t2.pagedepth
> from   
>  ( select 
>  state_date ,
>  customer ,
>  session_id,
>  exit_url as ep
>   from ods.mid_session_enter_exit_dt0
>   where l_date ='$v_date'
>   )t1
>  join
>   ( select 
> state_date ,
> customer ,
> session_id,
> pagedepth
> from ods.mid_session_action_dt0
> where l_date ='$v_date'
> and  pagedepth='1'
>   )t2
>  on t1.customer=t2.customer
>  and t1.session_id=t2.session_id
>)c
>join
>(select *
>from bdi_fact.frequency_channel
>where l_date ='$v_date'
>and type ='2'
>and dr='0'
>)d
>on c.customer=d.cid
>where c.ep  rlike d.url
>group by  c.state_date,c.customer,d.url,d.id
>  )B
>  on 
>  A.customer=B.customer
>  and A.channel_2=B.channel_2 
>  and A.id=B.id
>   left outer join
>  ( 
>  select e.state_date, 
> e.customer, 
> f.url as channel_2,
> f.id,
> f.pid,
> f.type,
> count(distinct e.session_id) as stay_visits
>from   
>( select state_date, 
>customer, 
>gid,
>session_id,
>ep,
>  

[jira] [Commented] (HIVE-12175) Upgrade Kryo version to 3.0.x

2015-11-26 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15028567#comment-15028567
 ] 

Feng Yuan commented on HIVE-12175:
--

Hi [~prasanth_j], how is the fix for this issue going for hive-1.2.1?
Please excuse me for raising this again, but I really don't know how to get 
hive-1.2.1 working in our company's production environment.


> Upgrade Kryo version to 3.0.x
> -
>
> Key: HIVE-12175
> URL: https://issues.apache.org/jira/browse/HIVE-12175
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: TODOC2.0
> Fix For: 2.0.0
>
> Attachments: HIVE-12175.1.patch, HIVE-12175.2.patch, 
> HIVE-12175.3.patch, HIVE-12175.3.patch, HIVE-12175.4.patch, 
> HIVE-12175.5.patch, HIVE-12175.6.patch
>
>
> The current version of Kryo (2.22) has an issue (see the exception below and in 
> HIVE-12174) with serializing ArrayLists generated using Arrays.asList(). We 
> need to either replace all occurrences of Arrays.asList() or change the 
> current StdInstantiatorStrategy. This issue is fixed in later versions, and 
> the Kryo community recommends using DefaultInstantiatorStrategy with a fallback 
> to StdInstantiatorStrategy. More discussion of this issue is at 
> https://github.com/EsotericSoftware/kryo/issues/216. Alternatively, a custom 
> serialization/deserialization class can be provided for Arrays.asList.
> Also, Kryo 3.0 introduced Unsafe-based serialization, which claims to have 
> much better performance for certain types of serialization.
> Exception:
> {code}
> Caused by: java.lang.NullPointerException
>   at java.util.Arrays$ArrayList.size(Arrays.java:2847)
>   at java.util.AbstractList.add(AbstractList.java:108)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   ... 57 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
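[Editor's note] The NullPointerException in Arrays$ArrayList.size() above appears to stem from Kryo instantiating that private fixed-size list class without running its constructor, leaving the backing array null. The workaround the description mentions (replacing occurrences of Arrays.asList()) amounts to copying such lists into a resizable java.util.ArrayList before they reach the serializer. A minimal JDK-only sketch of that idea (the class name AsListDemo is illustrative, not from any Hive patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AsListDemo {
    public static void main(String[] args) {
        // Arrays.asList returns a fixed-size view backed directly by the
        // array; it is a private JDK class with no usable no-arg constructor.
        List<String> fixed = Arrays.asList("a", "b");

        // Structural modification of the fixed-size view is unsupported.
        boolean addFailed = false;
        try {
            fixed.add("c");
        } catch (UnsupportedOperationException e) {
            addFailed = true;
        }

        // Workaround: copy into a plain ArrayList, which any serializer can
        // rebuild element by element without constructor tricks.
        List<String> resizable = new ArrayList<>(fixed);
        resizable.add("c");

        System.out.println(addFailed);        // true
        System.out.println(resizable.size()); // 3
    }
}
```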


[jira] [Commented] (HIVE-12175) Upgrade Kryo version to 3.0.x

2015-11-25 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026431#comment-15026431
 ] 

Feng Yuan commented on HIVE-12175:
--

Sorry, I applied this patch on master and got:
Caused by: java.lang.RuntimeException: 
org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: 
java.util.Properties
Serialization trace:
keyDesc (org.apache.hadoop.hive.ql.plan.ReduceWork)
at 
org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:423)
at 
org.apache.hadoop.hive.ql.exec.Utilities.getReduceWork(Utilities.java:294)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:117)
... 14 more
Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
find class: java.util.Properties
Serialization trace:
keyDesc (org.apache.hadoop.hive.ql.plan.ReduceWork)
at 
org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
at 
org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:672)
at 
org.apache.hadoop.hive.ql.exec.Utilities.deserializeObjectByKryo(Utilities.java:1025)
at 
org.apache.hadoop.hive.ql.exec.Utilities.deserializePlan(Utilities.java:933)
at 
org.apache.hadoop.hive.ql.exec.Utilities.deserializePlan(Utilities.java:947)
at 
org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:403)
... 16 more
Caused by: java.lang.ClassNotFoundException: java.util.Properties
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at 
org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
... 25 more

> Upgrade Kryo version to 3.0.x
> -
>
> Key: HIVE-12175
> URL: https://issues.apache.org/jira/browse/HIVE-12175
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: TODOC2.0
> Fix For: 2.0.0
>
> Attachments: HIVE-12175.1.patch, HIVE-12175.2.patch, 
> HIVE-12175.3.patch, HIVE-12175.3.patch, HIVE-12175.4.patch, 
> HIVE-12175.5.patch, HIVE-12175.6.patch
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12175) Upgrade Kryo version to 3.0.x

2015-11-23 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15023580#comment-15023580
 ] 

Feng Yuan commented on HIVE-12175:
--

Hi [~prasanth_j], can this be applied to 1.2.1?
I applied it to our hive-1.2.1, but no matter what I try, the appended 
file StandardConstantStructObjectInspector.java is reported as not found. I put 
it in the correct package, but mvn still complains that it cannot find this file.

> Upgrade Kryo version to 3.0.x
> -
>
> Key: HIVE-12175
> URL: https://issues.apache.org/jira/browse/HIVE-12175
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12175.1.patch, HIVE-12175.2.patch, 
> HIVE-12175.3.patch, HIVE-12175.3.patch, HIVE-12175.4.patch, 
> HIVE-12175.5.patch, HIVE-12175.6.patch
>
>
> The current version of Kryo (2.22) has an issue (see the exception below and in 
> HIVE-12174) with serializing ArrayLists generated by Arrays.asList(). We 
> need to either replace all occurrences of Arrays.asList() or change the 
> current StdInstantiatorStrategy. The issue is fixed in later versions, and 
> the Kryo community recommends using DefaultInstantiatorStrategy with a fallback to 
> StdInstantiatorStrategy. More discussion of the issue is at 
> https://github.com/EsotericSoftware/kryo/issues/216. Alternatively, a custom 
> serialization/deserialization class can be provided for Arrays.asList.
> Also, Kryo 3.0 introduced unsafe-based serialization, which claims much 
> better performance for certain kinds of serialization. 
> Exception:
> {code}
> Caused by: java.lang.NullPointerException
>   at java.util.Arrays$ArrayList.size(Arrays.java:2847)
>   at java.util.AbstractList.add(AbstractList.java:108)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   ... 57 more
> {code}
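The quoted trace can be understood without Kryo at all: Arrays.asList() returns a fixed-size view (java.util.Arrays$ArrayList) backed by the argument array, so the add() that a collection deserializer performs while reading elements back is illegal on it, and when an instantiator strategy creates the view without its backing array, size() throws the NPE seen above. A minimal JDK-only sketch of the pitfall and the usual copy-based workaround (the class name is illustrative, not part of Hive or Kryo):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AsListPitfall {

    // Arrays.asList() returns java.util.Arrays$ArrayList, a fixed-size
    // view over the argument array; structural modification throws.
    public static boolean addIsUnsupported() {
        List<String> view = Arrays.asList("a", "b");
        try {
            view.add("c");  // the same call CollectionSerializer.read() makes
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }

    // Workaround on the producing side: copy into a real, growable
    // ArrayList before the list ever reaches the serializer.
    public static List<String> safeCopy(List<String> src) {
        return new ArrayList<>(src);
    }

    public static void main(String[] args) {
        System.out.println("add() unsupported on Arrays.asList view: "
                + addIsUnsupported());
        List<String> copy = safeCopy(Arrays.asList("a", "b"));
        copy.add("c");  // fine on the copy
        System.out.println(copy);
    }
}
```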






[jira] [Resolved] (HIVE-12454) in tez model when i use add jar xxx query will fail

2015-11-19 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan resolved HIVE-12454.
--
Resolution: Not A Problem

> in tez model when i use add jar xxx query will fail
> ---
>
> Key: HIVE-12454
> URL: https://issues.apache.org/jira/browse/HIVE-12454
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
> Fix For: 1.3.0
>
>
> Regardless of the HQL, as soon as I use add jar udf.jar the query throws the following:
> Status: Running (Executing on YARN cluster with App id 
> application_1447448264041_0723)
> 
> VERTICES  STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  
> KILLED
> 
> Map 1 FAILED -1  00   -1   0  
>  0
> Map 2 FAILED -1  00   -1   0  
>  0
> 
> VERTICES: 00/02  [>>--] 0%ELAPSED TIME: 0.27 s
>  
> 
> Status: Failed
> Vertex failed, vertexName=Map 2, vertexId=vertex_1447448264041_0723_1_00, 
> diagnostics=[Vertex vertex_1447448264041_0723_1_00 [Map 2] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: mid_bdi_customer_online initializer 
> failed, vertex=vertex_1447448264041_0723_1_00 [Map 2], 
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hive.shims.HadoopShims.getMergedCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:104)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> ]
> Vertex failed, vertexName=Map 1, vertexId=vertex_1447448264041_0723_1_01, 
> diagnostics=[Vertex vertex_1447448264041_0723_1_01 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: raw_kafka_event_pageview_dt0 
> initializer failed, vertex=vertex_1447448264041_0723_1_01 [Map 1], 
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hive.shims.HadoopShims.getMergedCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:104)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> ]
> DAG failed due to vertex failure. failedVertices:2 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask
> Sometimes the NoSuchMethodError is for a method in another package, such as  
> 

[jira] [Commented] (HIVE-12175) Upgrade Kryo version to 3.0.x

2015-10-26 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14973990#comment-14973990
 ] 

Feng Yuan commented on HIVE-12175:
--

After applying the patch, I still hit HIVE-11519.

> Upgrade Kryo version to 3.0.x
> -
>
> Key: HIVE-12175
> URL: https://issues.apache.org/jira/browse/HIVE-12175
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12175.1.patch
>
>
> The current version of Kryo (2.22) has an issue (see the exception below and in 
> HIVE-12174) with serializing ArrayLists generated by Arrays.asList(). We 
> need to either replace all occurrences of Arrays.asList() or change the 
> current StdInstantiatorStrategy. The issue is fixed in later versions, and 
> the Kryo community recommends using DefaultInstantiatorStrategy with a fallback to 
> StdInstantiatorStrategy. More discussion of the issue is at 
> https://github.com/EsotericSoftware/kryo/issues/216. Alternatively, a custom 
> serialization/deserialization class can be provided for Arrays.asList.
> Also, Kryo 3.0 introduced unsafe-based serialization, which claims much 
> better performance for certain kinds of serialization. 
> Exception:
> {code}
> Caused by: java.lang.NullPointerException
>   at java.util.Arrays$ArrayList.size(Arrays.java:2847)
>   at java.util.AbstractList.add(AbstractList.java:108)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   ... 57 more
> {code}





[jira] [Commented] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-21 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14966354#comment-14966354
 ] 

Feng Yuan commented on HIVE-12088:
--

[~gopalv] [~xuefuz] Please ignore this; it was a mistake on my side. Sorry!

> a simple insert hql throws out NoClassFoundException of MetaException
> -
>
> Key: HIVE-12088
> URL: https://issues.apache.org/jira/browse/HIVE-12088
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.0, 1.2.1
>Reporter: Feng Yuan
> Fix For: 1.2.2
>
> Attachments: hive.log
>
>
> example:
> from portrait.rec_feature_feedback a insert overwrite table portrait.test1 
> select iid, feedback_15day, feedback_7day, feedback_5day, feedback_3day, 
> feedback_1day where l_date = '2015-09-09' and bid in 
> ('949722CF_12F7_523A_EE21_E3D591B7E755');
> log shows:
> Query ID = hadoop_20151012153841_120bee59-56a7-4e53-9c45-76f97c0f50ad
> Total jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_1441881651073_95266, Tracking URL = 
> http://bjlg-44p12-rm01:8088/proxy/application_1441881651073_95266/
> Kill Command = /opt/hadoop/hadoop/bin/hadoop job  -kill 
> job_1441881651073_95266
> Hadoop job information for Stage-1: number of mappers: 21; number of 
> reducers: 0
> 2015-10-12 15:39:29,930 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:39,597 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:40,658 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:53,479 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:54,535 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:55,588 Stage-1 map = 10%,  reduce = 0%
> 2015-10-12 15:39:56,626 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:57,687 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:40:06,096 Stage-1 map = 100%,  reduce = 0%
> Ended Job = job_1441881651073_95266 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1441881651073_95266_m_00 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_16 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_11 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_18 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_02 (and more) from job 
> job_1441881651073_95266
> Task with the most failures(4): 
> -
> Task ID:
>   task_1441881651073_95266_m_09
> URL:
>   
> http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1441881651073_95266=task_1441881651073_95266_m_09
> -
> Diagnostic Messages for this Task:
> Error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.metastore.api.MetaException
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
>   at java.lang.Class.privateGetPublicMethods(Class.java:2690)
>   at java.lang.Class.getMethods(Class.java:1467)
>   at com.sun.beans.finder.MethodFinder$1.create(MethodFinder.java:54)
>   at com.sun.beans.finder.MethodFinder$1.create(MethodFinder.java:49)
>   at com.sun.beans.util.Cache.get(Cache.java:127)
>   at com.sun.beans.finder.MethodFinder.findMethod(MethodFinder.java:81)
>   at java.beans.Statement.getMethod(Statement.java:357)
>   at java.beans.Statement.invokeInternal(Statement.java:261)
>   at java.beans.Statement.access$000(Statement.java:58)
>   at java.beans.Statement$2.run(Statement.java:185)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.beans.Statement.invoke(Statement.java:182)
>   at java.beans.Expression.getValue(Expression.java:153)
>   at 
> com.sun.beans.decoder.ObjectElementHandler.getValueObject(ObjectElementHandler.java:166)
>   at 
> com.sun.beans.decoder.NewElementHandler.getValueObject(NewElementHandler.java:123)
>   at 
> com.sun.beans.decoder.ElementHandler.getContextBean(ElementHandler.java:113)
>   at 
> com.sun.beans.decoder.NewElementHandler.getContextBean(NewElementHandler.java:109)
>   at 
> com.sun.beans.decoder.ObjectElementHandler.getValueObject(ObjectElementHandler.java:146)
>   at 
> 

[jira] [Resolved] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-21 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan resolved HIVE-12088.
--
Resolution: Not A Problem

> a simple insert hql throws out NoClassFoundException of MetaException
> -
>
> Key: HIVE-12088
> URL: https://issues.apache.org/jira/browse/HIVE-12088
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.0, 1.2.1
>Reporter: Feng Yuan
> Fix For: 1.2.2
>
> Attachments: hive.log
>
>
> example:
> from portrait.rec_feature_feedback a insert overwrite table portrait.test1 
> select iid, feedback_15day, feedback_7day, feedback_5day, feedback_3day, 
> feedback_1day where l_date = '2015-09-09' and bid in 
> ('949722CF_12F7_523A_EE21_E3D591B7E755');
> log shows:
> Query ID = hadoop_20151012153841_120bee59-56a7-4e53-9c45-76f97c0f50ad
> Total jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_1441881651073_95266, Tracking URL = 
> http://bjlg-44p12-rm01:8088/proxy/application_1441881651073_95266/
> Kill Command = /opt/hadoop/hadoop/bin/hadoop job  -kill 
> job_1441881651073_95266
> Hadoop job information for Stage-1: number of mappers: 21; number of 
> reducers: 0
> 2015-10-12 15:39:29,930 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:39,597 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:40,658 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:53,479 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:54,535 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:55,588 Stage-1 map = 10%,  reduce = 0%
> 2015-10-12 15:39:56,626 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:57,687 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:40:06,096 Stage-1 map = 100%,  reduce = 0%
> Ended Job = job_1441881651073_95266 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1441881651073_95266_m_00 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_16 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_11 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_18 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_02 (and more) from job 
> job_1441881651073_95266
> Task with the most failures(4): 
> -
> Task ID:
>   task_1441881651073_95266_m_09
> URL:
>   
> http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1441881651073_95266=task_1441881651073_95266_m_09
> -
> Diagnostic Messages for this Task:
> Error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.metastore.api.MetaException
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
>   at java.lang.Class.privateGetPublicMethods(Class.java:2690)
>   at java.lang.Class.getMethods(Class.java:1467)
>   at com.sun.beans.finder.MethodFinder$1.create(MethodFinder.java:54)
>   at com.sun.beans.finder.MethodFinder$1.create(MethodFinder.java:49)
>   at com.sun.beans.util.Cache.get(Cache.java:127)
>   at com.sun.beans.finder.MethodFinder.findMethod(MethodFinder.java:81)
>   at java.beans.Statement.getMethod(Statement.java:357)
>   at java.beans.Statement.invokeInternal(Statement.java:261)
>   at java.beans.Statement.access$000(Statement.java:58)
>   at java.beans.Statement$2.run(Statement.java:185)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.beans.Statement.invoke(Statement.java:182)
>   at java.beans.Expression.getValue(Expression.java:153)
>   at 
> com.sun.beans.decoder.ObjectElementHandler.getValueObject(ObjectElementHandler.java:166)
>   at 
> com.sun.beans.decoder.NewElementHandler.getValueObject(NewElementHandler.java:123)
>   at 
> com.sun.beans.decoder.ElementHandler.getContextBean(ElementHandler.java:113)
>   at 
> com.sun.beans.decoder.NewElementHandler.getContextBean(NewElementHandler.java:109)
>   at 
> com.sun.beans.decoder.ObjectElementHandler.getValueObject(ObjectElementHandler.java:146)
>   at 
> com.sun.beans.decoder.NewElementHandler.getValueObject(NewElementHandler.java:123)
>   at 
> 

[jira] [Updated] (HIVE-11840) when multi insert the inputformat becomes OneNullRowInputFormat

2015-10-21 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11840:
-
Affects Version/s: 1.2.1

> when multi insert the inputformat becomes OneNullRowInputFormat
> ---
>
> Key: HIVE-11840
> URL: https://issues.apache.org/jira/browse/HIVE-11840
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0, 1.2.1
>Reporter: Feng Yuan
>Priority: Blocker
> Fix For: 0.14.1
>
> Attachments: multi insert, single__insert
>
>
> example:
> from portrait.rec_feature_feedback a 
> insert overwrite table portrait.test1 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('949722CF_12F7_523A_EE21_E3D591B7E755') 
> insert overwrite table portrait.test2 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('test') 
> insert overwrite table portrait.test3 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('F7734668_CC49_8C4F_24C5_EA8B6728E394')
> With a single insert it works, but after the multi insert, select * from test1 returns:
> NULL NULL NULL NULL NULL NULL.
> In "explain extended" I see:
> Path -> Alias:
> -mr-10006portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Cyiyaowang, bid=F7734668_CC49_8C4F_24C5_EA8B6728E394} [a]
> -mr-10007portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Czgc_pc, bid=949722CF_12F7_523A_EE21_E3D591B7E755} [a]
>   Path -> Partition:
> -mr-10006portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Cyiyaowang, bid=F7734668_CC49_8C4F_24C5_EA8B6728E394} 
>   Partition
> base file name: bid=F7734668_CC49_8C4F_24C5_EA8B6728E394
> input format: org.apache.hadoop.hive.ql.io.OneNullRowInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> partition values:
>   bid F7734668_CC49_8C4F_24C5_EA8B6728E394
>   cid Cyiyaowang
>   l_date 2015-09-09
> but when single insert:
> Path -> Alias:
> 
> hdfs://bfdhadoopcool/warehouse/portrait.db/rec_feature_feedback/l_date=2015-09-09/cid=Czgc_pc/bid=949722CF_12F7_523A_EE21_E3D591B7E755
>  [a]
>   Path -> Partition:
> 
> hdfs://bfdhadoopcool/warehouse/portrait.db/rec_feature_feedback/l_date=2015-09-09/cid=Czgc_pc/bid=949722CF_12F7_523A_EE21_E3D591B7E755
>  
>   Partition
> base file name: bid=949722CF_12F7_523A_EE21_E3D591B7E755
> input format: org.apache.hadoop.mapred.TextInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> partition values:
>   bid 949722CF_12F7_523A_EE21_E3D591B7E755
>   cid Czgc_pc
>   l_date 2015-09-09





[jira] [Commented] (HIVE-11840) when multi insert the inputformat becomes OneNullRowInputFormat

2015-10-21 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14966367#comment-14966367
 ] 

Feng Yuan commented on HIVE-11840:
--

The same issue happens in 1.2.1!

> when multi insert the inputformat becomes OneNullRowInputFormat
> ---
>
> Key: HIVE-11840
> URL: https://issues.apache.org/jira/browse/HIVE-11840
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0, 1.2.1
>Reporter: Feng Yuan
>Priority: Blocker
> Fix For: 0.14.1
>
> Attachments: multi insert, single__insert
>
>
> example:
> from portrait.rec_feature_feedback a 
> insert overwrite table portrait.test1 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('949722CF_12F7_523A_EE21_E3D591B7E755') 
> insert overwrite table portrait.test2 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('test') 
> insert overwrite table portrait.test3 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('F7734668_CC49_8C4F_24C5_EA8B6728E394')
> With a single insert it works, but after the multi insert, select * from test1 returns:
> NULL NULL NULL NULL NULL NULL.
> In "explain extended" I see:
> Path -> Alias:
> -mr-10006portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Cyiyaowang, bid=F7734668_CC49_8C4F_24C5_EA8B6728E394} [a]
> -mr-10007portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Czgc_pc, bid=949722CF_12F7_523A_EE21_E3D591B7E755} [a]
>   Path -> Partition:
> -mr-10006portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Cyiyaowang, bid=F7734668_CC49_8C4F_24C5_EA8B6728E394} 
>   Partition
> base file name: bid=F7734668_CC49_8C4F_24C5_EA8B6728E394
> input format: org.apache.hadoop.hive.ql.io.OneNullRowInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> partition values:
>   bid F7734668_CC49_8C4F_24C5_EA8B6728E394
>   cid Cyiyaowang
>   l_date 2015-09-09
> but when single insert:
> Path -> Alias:
> 
> hdfs://bfdhadoopcool/warehouse/portrait.db/rec_feature_feedback/l_date=2015-09-09/cid=Czgc_pc/bid=949722CF_12F7_523A_EE21_E3D591B7E755
>  [a]
>   Path -> Partition:
> 
> hdfs://bfdhadoopcool/warehouse/portrait.db/rec_feature_feedback/l_date=2015-09-09/cid=Czgc_pc/bid=949722CF_12F7_523A_EE21_E3D591B7E755
>  
>   Partition
> base file name: bid=949722CF_12F7_523A_EE21_E3D591B7E755
> input format: org.apache.hadoop.mapred.TextInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> partition values:
>   bid 949722CF_12F7_523A_EE21_E3D591B7E755
>   cid Czgc_pc
>   l_date 2015-09-09





[jira] [Commented] (HIVE-9543) MetaException(message:Metastore contains multiple versions)

2015-10-20 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964925#comment-14964925
 ] 

Feng Yuan commented on HIVE-9543:
-

I think both of the errors above are caused by the query.execute() call not 
throwing an exception when the network fails.
Could we add an exception in this section?
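The suggestion above amounts to failing loudly whenever the metastore VERSION table does not yield exactly one row, instead of proceeding on a possibly truncated result. A minimal sketch of that check, with purely illustrative names (this is not Hive's actual ObjectStore API):

```java
import java.util.Arrays;
import java.util.List;

public class SchemaVersionCheck {

    // Given the rows read from the metastore VERSION table, accept the
    // result only if exactly one version row is present; otherwise throw
    // rather than silently continuing with a bad or partial read.
    public static String requireSingleVersion(List<String> versionRows) {
        if (versionRows == null || versionRows.isEmpty()) {
            throw new IllegalStateException(
                "No version row found; the metastore query may have failed silently");
        }
        if (versionRows.size() > 1) {
            throw new IllegalStateException(
                "Metastore contains multiple versions: " + versionRows);
        }
        return versionRows.get(0);
    }

    public static void main(String[] args) {
        System.out.println(requireSingleVersion(Arrays.asList("0.13.0")));
    }
}
```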

> MetaException(message:Metastore contains multiple versions)
> ---
>
> Key: HIVE-9543
> URL: https://issues.apache.org/jira/browse/HIVE-9543
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Junyong Li
>
> When i run bin/hive command, i got the following exception:
> {noformat}
> Logging initialized using configuration in 
> jar:file:/home/hadoop/apache-hive-0.13.1-bin/lib/hive-common-0.13.1.jar!/hive-log4j.properties
> Exception in thread "main" java.lang.RuntimeException: 
> java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:62)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
> ... 7 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
> ... 12 more
> Caused by: MetaException(message:Metastore contains multiple versions)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:6368)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:6330)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6289)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6277)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
> at com.sun.proxy.$Proxy9.verifySchema(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:476)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.(HiveMetaStore.java:356)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:54)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:171)
> ... 17 more
> {noformat}
> And I found two records in the metastore table VERSION. After reading the source 
> code, I found the following code may be causing the problem: 
> In the org.apache.hadoop.hive.metastore.ObjectStore.java:6289:
> {noformat}
> String schemaVer = 

[jira] [Commented] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-20 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14966202#comment-14966202
 ] 

Feng Yuan commented on HIVE-12088:
--

[~gopalv]

> a simple insert hql throws out NoClassFoundException of MetaException
> -
>
> Key: HIVE-12088
> URL: https://issues.apache.org/jira/browse/HIVE-12088
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.0, 1.2.1
>Reporter: Feng Yuan
> Fix For: 1.2.2
>
> Attachments: hive.log
>
>
> example:
> from portrait.rec_feature_feedback a insert overwrite table portrait.test1 
> select iid, feedback_15day, feedback_7day, feedback_5day, feedback_3day, 
> feedback_1day where l_date = '2015-09-09' and bid in 
> ('949722CF_12F7_523A_EE21_E3D591B7E755');
> log shows:
> Query ID = hadoop_20151012153841_120bee59-56a7-4e53-9c45-76f97c0f50ad
> Total jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_1441881651073_95266, Tracking URL = 
> http://bjlg-44p12-rm01:8088/proxy/application_1441881651073_95266/
> Kill Command = /opt/hadoop/hadoop/bin/hadoop job  -kill 
> job_1441881651073_95266
> Hadoop job information for Stage-1: number of mappers: 21; number of 
> reducers: 0
> 2015-10-12 15:39:29,930 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:39,597 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:40,658 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:53,479 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:54,535 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:55,588 Stage-1 map = 10%,  reduce = 0%
> 2015-10-12 15:39:56,626 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:57,687 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:40:06,096 Stage-1 map = 100%,  reduce = 0%
> Ended Job = job_1441881651073_95266 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1441881651073_95266_m_00 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_16 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_11 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_18 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_02 (and more) from job 
> job_1441881651073_95266
> Task with the most failures(4): 
> -
> Task ID:
>   task_1441881651073_95266_m_09
> URL:
>   
> http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1441881651073_95266&tipid=task_1441881651073_95266_m_09
> -
> Diagnostic Messages for this Task:
> Error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.metastore.api.MetaException
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
>   at java.lang.Class.privateGetPublicMethods(Class.java:2690)
>   at java.lang.Class.getMethods(Class.java:1467)
>   at com.sun.beans.finder.MethodFinder$1.create(MethodFinder.java:54)
>   at com.sun.beans.finder.MethodFinder$1.create(MethodFinder.java:49)
>   at com.sun.beans.util.Cache.get(Cache.java:127)
>   at com.sun.beans.finder.MethodFinder.findMethod(MethodFinder.java:81)
>   at java.beans.Statement.getMethod(Statement.java:357)
>   at java.beans.Statement.invokeInternal(Statement.java:261)
>   at java.beans.Statement.access$000(Statement.java:58)
>   at java.beans.Statement$2.run(Statement.java:185)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.beans.Statement.invoke(Statement.java:182)
>   at java.beans.Expression.getValue(Expression.java:153)
>   at 
> com.sun.beans.decoder.ObjectElementHandler.getValueObject(ObjectElementHandler.java:166)
>   at 
> com.sun.beans.decoder.NewElementHandler.getValueObject(NewElementHandler.java:123)
>   at 
> com.sun.beans.decoder.ElementHandler.getContextBean(ElementHandler.java:113)
>   at 
> com.sun.beans.decoder.NewElementHandler.getContextBean(NewElementHandler.java:109)
>   at 
> com.sun.beans.decoder.ObjectElementHandler.getValueObject(ObjectElementHandler.java:146)
>   at 
> com.sun.beans.decoder.NewElementHandler.getValueObject(NewElementHandler.java:123)
>  
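The stack trace above shows the task JVM failing to load org.apache.hadoop.hive.metastore.api.MetaException, which usually means the hive-metastore jar is missing from the task-side classpath rather than anything being wrong with the query. As a minimal stdlib-only sketch (not Hive code; the class name is taken from the trace above), this is how one can probe whether a class is visible to a given classloader:

```java
// Minimal sketch: probe class visibility the same way the failing task does.
// Class.forName with initialize=false looks the class up without running
// its static initializers, so the probe has no side effects.
public class ClasspathCheck {

    public static boolean isOnClasspath(String className) {
        try {
            Class.forName(className, false, ClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Visible on any JVM:
        System.out.println(isOnClasspath("java.lang.String"));
        // Only true when the hive-metastore jar is on the classpath:
        System.out.println(isOnClasspath("org.apache.hadoop.hive.metastore.api.MetaException"));
    }
}
```

Running a probe like this from inside a failing task can confirm whether the problem is jar packaging on the cluster side, independent of the metastore schema.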

[jira] [Reopened] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-19 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan reopened HIVE-12088:
--


[jira] [Commented] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-19 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963067#comment-14963067
 ] 

Feng Yuan commented on HIVE-12088:
--

Sorry, I may have caused some confusion.
By "the error never came out again" I meant that `msck repair table abc` started working, but the insert still fails.
My guess is that the metastore data is incompatible between 0.14 and 1.2.1.
So instead I copied the metadata from 0.14 and sourced these two upgrade scripts:
upgrade-0.14.0-to-1.1.0.mysql.sql
upgrade-1.1.0-to-1.2.0.mysql.sql
but it did not help.
Could you give me some suggestions? [~xuefuz], thanks!


[jira] [Commented] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-19 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963085#comment-14963085
 ] 

Feng Yuan commented on HIVE-12088:
--

There are two other errors in this job:
2015-10-19 17:27:37,015 ERROR [main]: mr.ExecDriver 
(ExecDriver.java:execute(400)) - yarn
2015-10-19 17:27:38,424 WARN  [main]: jdbc.JDBCStatsPublisher 
(JDBCStatsPublisher.java:init(310)) - Failed to update ID (size 255)
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in 
your SQL syntax; check the manual that corresponds to your MySQL server version 
for the right syntax to use near 'VARCHAR(4000)' at line 1
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.Util.getInstance(Util.java:381)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1030)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3558)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3490)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1959)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2109)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2642)
at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1647)
at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1566)
at 
org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher.init(JDBCStatsPublisher.java:304)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:411)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2015-10-19 17:27:38,558 INFO  [main]: exec.Utilities 
(Utilities.java:getBaseWork(390)) - PLAN PATH = 
hdfs://bfdhadoop26/tmp/hive/hadoop/87b4cb59-82e2-4b8d-a66b-0ecd9587e14a/hive_2015-10-19_17-27-31_247_5519557068960011437-1/-mr-10003/d9d465cb-1b84-41d3-a23a-a4d6e511fe9c/map.xml
2015-10-19 17:27:38,559 INFO  [main]: exec.Utilities 
(Utilities.java:getBaseWork(390)) - PLAN PATH = 
hdfs://bfdhadoop26/tmp/hive/hadoop/87b4cb59-82e2-4b8d-a66b-0ecd9587e14a/hive_2015-10-19_17-27-31_247_5519557068960011437-1/-mr-10003/d9d465cb-1b84-41d3-a23a-a4d6e511fe9c/reduce.xml
2015-10-19 17:27:38,559 INFO  [main]: exec.Utilities 
(Utilities.java:getBaseWork(400)) - ***non-local mode***
2015-10-19 17:27:38,560 INFO  [main]: exec.Utilities 
(Utilities.java:getBaseWork(404)) - local path = 
hdfs://bfdhadoop26/tmp/hive/hadoop/87b4cb59-82e2-4b8d-a66b-0ecd9587e14a/hive_2015-10-19_17-27-31_247_5519557068960011437-1/-mr-10003/d9d465cb-1b84-41d3-a23a-a4d6e511fe9c/reduce.xml
2015-10-19 17:27:38,560 INFO  [main]: exec.Utilities 
(Utilities.java:getBaseWork(416)) - Open file to read in plan: 
hdfs://bfdhadoop26/tmp/hive/hadoop/87b4cb59-82e2-4b8d-a66b-0ecd9587e14a/hive_2015-10-19_17-27-31_247_5519557068960011437-1/-mr-10003/d9d465cb-1b84-41d3-a23a-a4d6e511fe9c/reduce.xml
2015-10-19 17:27:38,597 INFO  [main]: exec.Utilities 
(Utilities.java:getBaseWork(456)) - File not found: File does not exist: 

[jira] [Commented] (HIVE-11519) kryo.KryoException: Encountered unregistered

2015-10-15 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958785#comment-14958785
 ] 

Feng Yuan commented on HIVE-11519:
--

Are these all Kryo bugs beyond 2.22? [~gopalv] [~xuefuz]

> kryo.KryoException: Encountered unregistered
> 
>
> Key: HIVE-11519
> URL: https://issues.apache.org/jira/browse/HIVE-11519
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 0.13.1, 1.2.0, 1.2.1
> Environment: hadoop 2.5.0 cdh 5.3.2,hive-0.13.1-cdh5.3.2
>Reporter: duyanlong
>Assignee: duyanlong
>Priority: Critical
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When clients execute HQL in Hive, the following exception occasionally 
> occurs; please help solve it, thank you
> Error: java.lang.RuntimeException: 
> org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered 
> unregistered class ID: 73
> Serialization trace:
> colExprMap (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:277)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:258)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:451)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:444)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:588)
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:169)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> Encountered unregistered class ID: 73
> Serialization trace:
> colExprMap (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
> at 
> org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:119)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:139)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:672)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.deserializeObjectByKryo(Utilities.java:943)
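The "Encountered unregistered class ID: 73" failure above comes from Kryo, which Hive uses to serialize the query plan: each registered class is written to the stream as a numeric ID, and deserialization fails when the reading side's registrations do not line up with the writing side's (typically mismatched Hive jars between the client that planned the query and the task that reads the plan). A stdlib-only toy resolver (not the real Kryo API) that mimics this ID-to-class lookup and its failure mode:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a class-ID registry like Kryo's DefaultClassResolver:
// classes register in order and get sequential numeric IDs; reading an
// ID that was never registered on this side fails, mirroring the
// "Encountered unregistered class ID" KryoException in the trace above.
public class ClassIdResolver {
    private final Map<Integer, Class<?>> registry = new HashMap<>();
    private int nextId = 0;

    // Register a class and return the numeric ID assigned to it.
    public int register(Class<?> cls) {
        registry.put(nextId, cls);
        return nextId++;
    }

    // Resolve an ID read from the stream back to a class.
    public Class<?> readClass(int id) {
        Class<?> cls = registry.get(id);
        if (cls == null) {
            throw new IllegalStateException("Encountered unregistered class ID: " + id);
        }
        return cls;
    }

    public static void main(String[] args) {
        ClassIdResolver writer = new ClassIdResolver();
        writer.register(String.class); // ID 0 on the writer side
        ClassIdResolver reader = new ClassIdResolver();
        // The reader never registered anything, so ID 0 is unknown to it:
        try {
            reader.readClass(0);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why the usual remedy for the real exception is making the Hive jars identical on every node, so both sides build the same registration table.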



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-14 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-12088:
-
Attachment: hive.log


[jira] [Resolved] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-14 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan resolved HIVE-12088.
--
Resolution: REMIND

After I sourced 
apache-hive-1.2.1-bin/scripts/metastore/upgrade/mysql/hive-schema-0.14.0.mysql.sql
the error never came out again.


[jira] [Updated] (HIVE-11519) kryo.KryoException: Encountered unregistered

2015-10-12 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11519:
-
Due Date: 15/Oct/15  (was: 6/Aug/15)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11519) kryo.KryoException: Encountered unregistered

2015-10-12 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14952684#comment-14952684
 ] 

Feng Yuan commented on HIVE-11519:
--

Why would 1.2.1 set Kryo as the default serde tool?
As shown here, this is a serious bug in Kryo.
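The failure mode behind "Encountered unregistered class ID" can be sketched with a stdlib-only registry. This is an illustration of ID-based registration, not Kryo's actual internals: a serialized stream stores only a numeric class ID, so the writer and reader must register the same classes in the same order.

```java
import java.util.ArrayList;
import java.util.List;

// Stdlib-only sketch (NOT Kryo's real implementation) of why ID-based class
// registration breaks when writer and reader registries differ.
public class ClassIdRegistrySketch {
    private final List<String> registered = new ArrayList<>();

    // A class ID is simply the order of registration.
    public int register(String className) {
        registered.add(className);
        return registered.size() - 1;
    }

    // Resolving an ID this side never registered fails, which is the shape
    // of the "Encountered unregistered class ID: 73" error in the trace.
    public String resolve(int classId) {
        if (classId < 0 || classId >= registered.size()) {
            throw new IllegalStateException(
                "Encountered unregistered class ID: " + classId);
        }
        return registered.get(classId);
    }

    public static void main(String[] args) {
        ClassIdRegistrySketch writer = new ClassIdRegistrySketch();
        ClassIdRegistrySketch reader = new ClassIdRegistrySketch();
        writer.register("org.apache.hadoop.hive.ql.plan.MapWork");
        writer.register("org.apache.hadoop.hive.ql.exec.TableScanOperator");
        reader.register("org.apache.hadoop.hive.ql.plan.MapWork");
        System.out.println(reader.resolve(0)); // resolves fine
        try {
            reader.resolve(1); // never registered on the reader side
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If the side that serialized the plan registered even one class more than the side deserializing it (for example because the client and the task run mismatched Hive/Kryo versions), the reader hits exactly this kind of exception.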



[jira] [Updated] (HIVE-11519) kryo.KryoException: Encountered unregistered

2015-10-12 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11519:
-
Affects Version/s: 1.2.0
   1.2.1



[jira] [Updated] (HIVE-11519) kryo.KryoException: Encountered unregistered

2015-10-12 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11519:
-
Priority: Critical  (was: Minor)



[jira] [Commented] (HIVE-12088) a simple insert hql throws out NoClassFoundException of MetaException

2015-10-12 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14952813#comment-14952813
 ] 

Feng Yuan commented on HIVE-12088:
--

Could anyone help with this?

> a simple insert hql throws out NoClassFoundException of MetaException
> -
>
> Key: HIVE-12088
> URL: https://issues.apache.org/jira/browse/HIVE-12088
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.0, 1.2.1
>Reporter: Feng Yuan
> Fix For: 1.2.2
>
>
> example:
> from portrait.rec_feature_feedback a insert overwrite table portrait.test1 
> select iid, feedback_15day, feedback_7day, feedback_5day, feedback_3day, 
> feedback_1day where l_date = '2015-09-09' and bid in 
> ('949722CF_12F7_523A_EE21_E3D591B7E755');
> log shows:
> Query ID = hadoop_20151012153841_120bee59-56a7-4e53-9c45-76f97c0f50ad
> Total jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_1441881651073_95266, Tracking URL = 
> http://bjlg-44p12-rm01:8088/proxy/application_1441881651073_95266/
> Kill Command = /opt/hadoop/hadoop/bin/hadoop job  -kill 
> job_1441881651073_95266
> Hadoop job information for Stage-1: number of mappers: 21; number of 
> reducers: 0
> 2015-10-12 15:39:29,930 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:39,597 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:40,658 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:53,479 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:54,535 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:39:55,588 Stage-1 map = 10%,  reduce = 0%
> 2015-10-12 15:39:56,626 Stage-1 map = 5%,  reduce = 0%
> 2015-10-12 15:39:57,687 Stage-1 map = 0%,  reduce = 0%
> 2015-10-12 15:40:06,096 Stage-1 map = 100%,  reduce = 0%
> Ended Job = job_1441881651073_95266 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1441881651073_95266_m_00 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_16 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_11 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_18 (and more) from job 
> job_1441881651073_95266
> Examining task ID: task_1441881651073_95266_m_02 (and more) from job 
> job_1441881651073_95266
> Task with the most failures(4): 
> -
> Task ID:
>   task_1441881651073_95266_m_09
> URL:
>   
> http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1441881651073_95266=task_1441881651073_95266_m_09
> -
> Diagnostic Messages for this Task:
> Error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.metastore.api.MetaException
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
>   at java.lang.Class.privateGetPublicMethods(Class.java:2690)
>   at java.lang.Class.getMethods(Class.java:1467)
>   at com.sun.beans.finder.MethodFinder$1.create(MethodFinder.java:54)
>   at com.sun.beans.finder.MethodFinder$1.create(MethodFinder.java:49)
>   at com.sun.beans.util.Cache.get(Cache.java:127)
>   at com.sun.beans.finder.MethodFinder.findMethod(MethodFinder.java:81)
>   at java.beans.Statement.getMethod(Statement.java:357)
>   at java.beans.Statement.invokeInternal(Statement.java:261)
>   at java.beans.Statement.access$000(Statement.java:58)
>   at java.beans.Statement$2.run(Statement.java:185)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.beans.Statement.invoke(Statement.java:182)
>   at java.beans.Expression.getValue(Expression.java:153)
>   at 
> com.sun.beans.decoder.ObjectElementHandler.getValueObject(ObjectElementHandler.java:166)
>   at 
> com.sun.beans.decoder.NewElementHandler.getValueObject(NewElementHandler.java:123)
>   at 
> com.sun.beans.decoder.ElementHandler.getContextBean(ElementHandler.java:113)
>   at 
> com.sun.beans.decoder.NewElementHandler.getContextBean(NewElementHandler.java:109)
>   at 
> com.sun.beans.decoder.ObjectElementHandler.getValueObject(ObjectElementHandler.java:146)
>   at 
> com.sun.beans.decoder.NewElementHandler.getValueObject(NewElementHandler.java:123)
>   at 
> 

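A `ClassNotFoundException` for `org.apache.hadoop.hive.metastore.api.MetaException` inside a map task usually means the metastore jar never reached the task classpath. A hedged sketch of one way to ship it (the jar path and script name are hypothetical; adjust to your installation, and verify this matches your deployment):

```
# Hypothetical jar location -- adjust to your installation.
# HIVE_AUX_JARS_PATH / hive.aux.jars.path ship extra jars with each MR job.
export HIVE_AUX_JARS_PATH=/opt/hive/lib/hive-metastore-1.2.1.jar
hive --hiveconf hive.aux.jars.path=file:///opt/hive/lib/hive-metastore-1.2.1.jar \
     -f insert_query.hql
```

Whether this resolves the reported failure is an assumption; the root cause could also be a mismatched jar already on the cluster.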
[jira] [Commented] (HIVE-9753) Wrong results when using multiple levels of Joins. When table alias of one of the table is null with left outer joins.

2015-09-29 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934720#comment-14934720
 ] 

Feng Yuan commented on HIVE-9753:
-

[~gopalv]

> Wrong results when using multiple levels of Joins. When table alias of one of 
> the table is null with left outer joins.  
> 
>
> Key: HIVE-9753
> URL: https://issues.apache.org/jira/browse/HIVE-9753
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.0.0
>Reporter: Pavan Srinivas
>Priority: Critical
> Attachments: HIVE-9753.0-0.14.0.patch, HIVE-9753.0-1.0.0.patch, 
> HIVE-9753.patch, table1.data, table2.data, table3.data
>
>
> Let take scenario, where the tables are:
> {code}
> drop table table1;
> CREATE TABLE table1(
>   col1 string,
>   col2 string,
>   col3 string,
>   col4 string
>   )
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY '\t'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
> drop table table2;
> CREATE  TABLE table2(
>   col1 string,
>   col2 bigint,
>   col3 string,
>   col4 string
>   )
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY '\t'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
> drop table table3;
> CREATE  TABLE table3(
>   col1 string,
>   col2 int,
>   col3 int,
>   col4 string)
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY '\t'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
> {code}
> Query with wrong results:
> {code}
> SELECT t1.col1 AS dummy,
> t1.expected_column AS expected_column,
> t2.col4
> FROM (
> SELECT col1,
> '23-1',
> '23-13' as three,
> col4 AS expected_column
> FROM table1
> ) t1
> JOIN table2 t2
> ON cast(t2.col1 as string) = cast(t1.col1 as string)
> LEFT OUTER JOIN
> (SELECT col4, col1
> FROM table3
> ) t3
> ON t2.col4 = t3.col1  
> ;
> {code}
> and explain output: 
> {code}
> STAGE DEPENDENCIES:
>   Stage-7 is a root stage
>   Stage-5 depends on stages: Stage-7
>   Stage-0 depends on stages: Stage-5
> STAGE PLANS:
>   Stage: Stage-7
> Map Reduce Local Work
>   Alias -> Map Local Tables:
> t1:table1
>   Fetch Operator
> limit: -1
> t3:table3
>   Fetch Operator
> limit: -1
>   Alias -> Map Local Operator Tree:
> t1:table1
>   TableScan
> alias: table1
> Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
> Filter Operator
>   predicate: col1 is not null (type: boolean)
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   Select Operator
> expressions: col1 (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
> HashTable Sink Operator
>   condition expressions:
> 0
> 1 {col4}
>   keys:
> 0 _col0 (type: string)
> 1 col1 (type: string)
> t3:table3
>   TableScan
> alias: table3
> Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
> Select Operator
>   expressions: col1 (type: string)
>   outputColumnNames: _col1
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   HashTable Sink Operator
> condition expressions:
>   0 {_col0} {_col7} {_col7}
>   1
> keys:
>   0 _col7 (type: string)
>   1 _col1 (type: string)
>   Stage: Stage-5
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: t2
> Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
> Filter Operator
>   predicate: col1 is not null (type: boolean)
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   Map Join Operator
> condition map:
>  Inner Join 0 to 1
> condition expressions:
>   0 {_col0}
>   1 {col4}
> keys:
>   0 _col0 (type: string)
>   1 col1 (type: string)
> outputColumnNames: 

[jira] [Commented] (HIVE-11930) how to prevent ppd the topN(a) udf predication in where clause?

2015-09-29 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934780#comment-14934780
 ] 

Feng Yuan commented on HIVE-11930:
--

Hi [~ashutoshc], this can't be used in a WHERE clause.

> how to prevent ppd the topN(a) udf predication in where clause?
> ---
>
> Key: HIVE-11930
> URL: https://issues.apache.org/jira/browse/HIVE-11930
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Minor
>
> select 
> a.state_date,a.customer,a.taskid,a.step_id,a.exit_title,a.pv,top1000(a.only_id)
>   from
> (  select 
> t1.state_date,t1.customer,t1.taskid,t1.step_id,t1.exit_title,t1.pv,t1.only_id
>   from 
>   ( select t11.state_date,
>t11.customer,
>t11.taskid,
>t11.step_id,
>t11.exit_title,
>t11.pv,
>concat(t11.customer,t11.taskid,t11.step_id) as 
> only_id
>from
>   (  select 
> state_date,customer,taskid,step_id,exit_title,count(*) as pv
>  from bdi_fact2.mid_url_step
>  where exit_url!='-1'
>  and exit_title !='-1'
>  and l_date='2015-08-31'
>  group by 
> state_date,customer,taskid,step_id,exit_title
> )t11
>)t1
>order by t1.only_id,t1.pv desc
>  )a
>   where  a.customer='Cdianyingwang'
>   and a.taskid='33'
>   and a.step_id='0' 
>   and top1000(a.only_id)<=10;
> In the above example, the outer predicate top1000(a.only_id)<=10 is pushed 
> down (PPD) to:
> stage 1:
> ( select t11.state_date,
>t11.customer,
>t11.taskid,
>t11.step_id,
>t11.exit_title,
>t11.pv,
>concat(t11.customer,t11.taskid,t11.step_id) as 
> only_id
>from
>   (  select 
> state_date,customer,taskid,step_id,exit_title,count(*) as pv
>  from bdi_fact2.mid_url_step
>  where exit_url!='-1'
>  and exit_title !='-1'
>  and l_date='2015-08-31'
>  group by 
> state_date,customer,taskid,step_id,exit_title
> )t11
>)t1
> This stage has 2 reducers, so it outputs 20 records; in the outer stage the 
> final result is exactly those 20 records.
> So I want to know: is there any way to hint that this topN UDF predicate 
> should not be pushed down?
> Thanks





[jira] [Resolved] (HIVE-11400) insert overwrite task always stuck at the last job

2015-09-29 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan resolved HIVE-11400.
--
Resolution: Cannot Reproduce

> insert overwrite task always stuck at the last job
> ---
>
> Key: HIVE-11400
> URL: https://issues.apache.org/jira/browse/HIVE-11400
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Query Processor
>Affects Versions: 0.14.0
> Environment: hadoop 2.6.0,centos 6.5
>Reporter: Feng Yuan
> Attachments: failed_logs, success_logs, task_explain
>
>
> When I run a task like "insert overwrite table a (select * from b join 
> select * from c on b.id=c.id) tmp;", it gets stuck on the last job (e.g. the 
> planner explains that the task has 3 jobs, but the third job/stage never 
> gets executed).
> Two files are attached:
> 1. the HQL explain output;
> 2. the running logs.
> You can see that stage-0 in the explain output is a Move operation, but it 
> never appears in the running logs. In fact 16 of 17 jobs completed (the 13th 
> job seems lost; I don't see it anywhere in the logs), but the 17th job hangs 
> forever; it is never even assigned a job ID and launched!
> Can someone help with this?
> Thanks very much!





[jira] [Commented] (HIVE-11930) how to prevent ppd the topN(a) udf predication in where clause?

2015-09-29 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936180#comment-14936180
 ] 

Feng Yuan commented on HIVE-11930:
--

Do you mean this:

@UDFType(stateful = true)
public class Top1000 extends UDF {}

I tried it like this, but my SQL is:
...
  where  a.customer='Cdianyingwang'
  and a.taskid='33'
  and a.step_id='0' 
  and top1000(a.only_id)<=10;

The compiler says top1000 should not be placed in a WHERE clause.

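The stateful ranking UDF discussed above can be sketched without Hive on the classpath. In real Hive the class would extend `org.apache.hadoop.hive.ql.exec.UDF` and carry `@UDFType(stateful = true)`; the name `Top1000Sketch` and the `evaluate` signature here are assumptions modeled on the thread, not Hive's API:

```java
import java.util.HashMap;
import java.util.Map;

// Stdlib-only sketch of a stateful per-group row counter. Because the result
// depends on how many rows were seen before, evaluation order matters -- which
// is exactly why pushing "top1000(a.only_id) <= 10" down to an earlier stage
// (with 2 reducers, each counting independently) changes the query's result.
public class Top1000Sketch {
    private final Map<String, Integer> counts = new HashMap<>();

    // Returns the 1-based position of this row within its only_id group,
    // so "top1000(a.only_id) <= 10" keeps the first 10 rows per group.
    public int evaluate(String onlyId) {
        int rank = counts.getOrDefault(onlyId, 0) + 1;
        counts.put(onlyId, rank);
        return rank;
    }

    public static void main(String[] args) {
        Top1000Sketch top = new Top1000Sketch();
        System.out.println(top.evaluate("a")); // 1
        System.out.println(top.evaluate("a")); // 2
        System.out.println(top.evaluate("b")); // 1
    }
}
```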


[jira] [Commented] (HIVE-9067) OrcFileMergeOperator may create merge file that does not match properties of input files

2015-09-28 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14910124#comment-14910124
 ] 

Feng Yuan commented on HIVE-9067:
-

Is there anything else that needs to be applied (some additional code change, 
perhaps)? This patch was applied in 1.0.0, but what if I want to use it in 0.14.0?

> OrcFileMergeOperator may create merge file that does not match properties of 
> input files
> 
>
> Key: HIVE-9067
> URL: https://issues.apache.org/jira/browse/HIVE-9067
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 0.15.0, 0.14.1
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Minor
>  Labels: Orc
> Fix For: 1.0.0
>
> Attachments: HIVE-9067.1.patch, HIVE-9067.2.patch
>
>
> OrcFileMergeOperator creates a new ORC file and appends the stripes from 
> smaller orc file. This new ORC file creation should retain the same 
> configuration as the small ORC files. Currently it does not set the orc row 
> index stride and file version. Also merging of stripe statistics to file 
> statistics was incorrect leading to issues like in HIVE-9080





[jira] [Commented] (HIVE-9067) OrcFileMergeOperator may create merge file that does not match properties of input files

2015-09-28 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14910117#comment-14910117
 ] 

Feng Yuan commented on HIVE-9067:
-

0.14.0 thanks!



[jira] [Commented] (HIVE-9067) OrcFileMergeOperator may create merge file that does not match properties of input files

2015-09-28 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14933202#comment-14933202
 ] 

Feng Yuan commented on HIVE-9067:
-

Hi Prasanth Jayachandran, I have tried it again and I still can't get it to 
work; I get these two errors:

When I try "insert into table orc_table select * from test_table;" (the 
example in HIVE-9080), I get:
[Error 30017]: Skipping stats aggregation by error 
org.apache.hadoop.hive.ql.metadata.HiveException: [Error 30001]: StatsPublisher 
cannot be initialized. There was a error in the initialization of 
StatsPublisher, and retrying might help. If you dont want the query to fail 
because accurate statistics could not be collected, set 
hive.stats.reliable=false

When I try "alter table orc_table concatenate;", I get:
Error: java.lang.IllegalArgumentException: Column has wrong number of index 
entries found: 0 expected: 1
at 
org.apache.hadoop.hive.ql.io.orc.WriterImpl$TreeWriter.writeStripe(WriterImpl.java:726)
at 
org.apache.hadoop.hive.ql.io.orc.WriterImpl$StructTreeWriter.writeStripe(WriterImpl.java:1614)
at 
org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushStripe(WriterImpl.java:1996)
at 
org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:2288)
at 
org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.closeOp(OrcFileMergeOperator.java:215)
at 
org.apache.hadoop.hive.ql.io.merge.MergeFileMapper.close(MergeFileMapper.java:98)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)


FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.DDLTask

Is there anything else I need to do?
Could you give me some suggestions?
Thank you, and have a good day!

> OrcFileMergeOperator may create merge file that does not match properties of 
> input files
> 
>
> Key: HIVE-9067
> URL: https://issues.apache.org/jira/browse/HIVE-9067
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 0.15.0, 0.14.1
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Minor
>  Labels: Orc
> Fix For: 1.0.0
>
> Attachments: HIVE-9067.1.patch, HIVE-9067.2.patch
>
>
> OrcFileMergeOperator creates a new ORC file and appends the stripes from 
> smaller orc file. This new ORC file creation should retain the same 
> configuration as the small ORC files. Currently it does not set the orc row 
> index stride and file version. Also merging of stripe statistics to file 
> statistics was incorrect leading to issues like in HIVE-9080





[jira] [Commented] (HIVE-9753) Wrong results when using multiple levels of Joins. When table alias of one of the table is null with left outer joins.

2015-09-28 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14933207#comment-14933207
 ] 

Feng Yuan commented on HIVE-9753:
-

Gopal V / Sergey Shelukhin: could someone please review this patch?

> Wrong results when using multiple levels of Joins. When table alias of one of 
> the table is null with left outer joins.  
> 
>
> Key: HIVE-9753
> URL: https://issues.apache.org/jira/browse/HIVE-9753
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.0.0
>Reporter: Pavan Srinivas
>Priority: Critical
> Attachments: HIVE-9753.0-0.14.0.patch, HIVE-9753.0-1.0.0.patch, 
> HIVE-9753.patch, table1.data, table2.data, table3.data
>
>
> Let take scenario, where the tables are:
> {code}
> drop table table1;
> CREATE TABLE table1(
>   col1 string,
>   col2 string,
>   col3 string,
>   col4 string
>   )
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY '\t'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
> drop table table2;
> CREATE  TABLE table2(
>   col1 string,
>   col2 bigint,
>   col3 string,
>   col4 string
>   )
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY '\t'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
> drop table table3;
> CREATE  TABLE table3(
>   col1 string,
>   col2 int,
>   col3 int,
>   col4 string)
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY '\t'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
> {code}
> Query with wrong results:
> {code}
> SELECT t1.col1 AS dummy,
> t1.expected_column AS expected_column,
> t2.col4
> FROM (
> SELECT col1,
> '23-1',
> '23-13' as three,
> col4 AS expected_column
> FROM table1
> ) t1
> JOIN table2 t2
> ON cast(t2.col1 as string) = cast(t1.col1 as string)
> LEFT OUTER JOIN
> (SELECT col4, col1
> FROM table3
> ) t3
> ON t2.col4 = t3.col1  
> ;
> {code}
> and explain output: 
> {code}
> STAGE DEPENDENCIES:
>   Stage-7 is a root stage
>   Stage-5 depends on stages: Stage-7
>   Stage-0 depends on stages: Stage-5
> STAGE PLANS:
>   Stage: Stage-7
> Map Reduce Local Work
>   Alias -> Map Local Tables:
> t1:table1
>   Fetch Operator
> limit: -1
> t3:table3
>   Fetch Operator
> limit: -1
>   Alias -> Map Local Operator Tree:
> t1:table1
>   TableScan
> alias: table1
> Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
> Filter Operator
>   predicate: col1 is not null (type: boolean)
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   Select Operator
> expressions: col1 (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
> HashTable Sink Operator
>   condition expressions:
> 0
> 1 {col4}
>   keys:
> 0 _col0 (type: string)
> 1 col1 (type: string)
> t3:table3
>   TableScan
> alias: table3
> Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
> Select Operator
>   expressions: col1 (type: string)
>   outputColumnNames: _col1
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   HashTable Sink Operator
> condition expressions:
>   0 {_col0} {_col7} {_col7}
>   1
> keys:
>   0 _col7 (type: string)
>   1 _col1 (type: string)
>   Stage: Stage-5
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: t2
> Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
> Filter Operator
>   predicate: col1 is not null (type: boolean)
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   Map Join Operator
> condition map:
>  Inner Join 0 to 1
> condition expressions:
>   0 {_col0}
>   1 {col4}
> keys:
>   0 _col0 (type: string)
>   1 col1 (type: 

[jira] [Commented] (HIVE-9067) OrcFileMergeOperator may create merge file that does not match properties of input files

2015-09-28 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934488#comment-14934488
 ] 

Feng Yuan commented on HIVE-9067:
-

Thank you for your help! I will dig into it to work out how to make it work on 
0.14.0.






[jira] [Commented] (HIVE-9067) OrcFileMergeOperator may create merge file that does not match properties of input files

2015-09-28 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934452#comment-14934452
 ] 

Feng Yuan commented on HIVE-9067:
-

Like this, test_table's data:
imgtop\N\N\N00mobile_contentgeneral66naMinixexact_matchAndroidfalsefalsetrueNEO-X7-216A0216Mozilla/5.0
 (Linux; U; Android 4.2.2; ar-eg; NEO-X7-216A Build/JDQ39) AppleWebKit/534.30 
(KHTML, like Gecko) Version/4.0 Safari/534.30none\NOnline Breedband 
B.V.eindhovenNL83.117.26.125truetrue51.445.48\N565406014182415864288907390719148304469jsonLive122\N14182415864288907390719148304469mobapp00||38664.14.181.22017597571-80a7-11e4-b1d6-5254000414181696001418238000
imgtop\N65772487\N00politicsmobile_contentdatinggeneralringtones66naMicrosoftexact_matchWindowsfalsefalsefalseGeneric
 Windows Mobile320240Mozilla/4.0 (compatible; MSIE 4.01; Windows CE; 
PPC)none\NAircel Ltd.kolkatamobile wirelessIN101.218.108.20mobile 
gatewaytruetrue22.569702148437588.36968994140625DISHNETIND\N28014182415864284741441834639756033xmlLive76\N14182415864284741441834639756033mobweb00||6054.175.7.64\N\N14181696001418238000
imgtop\N\N\N00mobile_contentgeneral66naMinixexact_matchAndroidfalsefalsetrueNEO-X7-216A0216Mozilla/5.0
 (Linux; U; Android 4.2.2; ar-eg; NEO-X7-216A Build/JDQ39) AppleWebKit/534.30 
(KHTML, like Gecko) Version/4.0 Safari/534.30none\NOnline Breedband 
B.V.eindhovenNL83.117.26.125truetrue51.445.48\N565406014182415864288907390719148304469jsonLive122\N14182415864288907390719148304469mobapp00||38664.14.181.22017597571-80a7-11e4-b1d6-5254000414181696001418238000
imgtop\N65772487\N00politicsmobile_contentdatinggeneralringtones66naMicrosoftexact_matchWindowsfalsefalsefalseGeneric
 Windows Mobile320240Mozilla/4.0 (compatible; MSIE 4.01; Windows CE; 
PPC)none\NAircel Ltd.kolkatamobile wirelessIN101.218.108.20mobile 
gatewaytruetrue22.569702148437588.36968994140625DISHNETIND\N28014182415864284741441834639756033xmlLive76\N14182415864284741441834639756033mobweb00||6054.175.7.64\N\N14181696001418238000

It is exactly the reproduction case from HIVE-9080.






[jira] [Commented] (HIVE-9067) OrcFileMergeOperator may create merge file that does not match properties of input files

2015-09-27 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909985#comment-14909985
 ] 

Feng Yuan commented on HIVE-9067:
-

Does this patch really work? I couldn't get it to work on 0.14. Can someone help me?






[jira] [Commented] (HIVE-11930) how to prevent ppd the topN(a) udf predication in where clause?

2015-09-24 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907523#comment-14907523
 ] 

Feng Yuan commented on HIVE-11930:
--

Thanks so much for your reply!

> how to prevent ppd the topN(a) udf predication in where clause?
> ---
>
> Key: HIVE-11930
> URL: https://issues.apache.org/jira/browse/HIVE-11930
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Blocker
>
> select 
> a.state_date,a.customer,a.taskid,a.step_id,a.exit_title,a.pv,top1000(a.only_id)
>   from
> (  select 
> t1.state_date,t1.customer,t1.taskid,t1.step_id,t1.exit_title,t1.pv,t1.only_id
>   from 
>   ( select t11.state_date,
>t11.customer,
>t11.taskid,
>t11.step_id,
>t11.exit_title,
>t11.pv,
>concat(t11.customer,t11.taskid,t11.step_id) as 
> only_id
>from
>   (  select 
> state_date,customer,taskid,step_id,exit_title,count(*) as pv
>  from bdi_fact2.mid_url_step
>  where exit_url!='-1'
>  and exit_title !='-1'
>  and l_date='2015-08-31'
>  group by 
> state_date,customer,taskid,step_id,exit_title
> )t11
>)t1
>order by t1.only_id,t1.pv desc
>  )a
>   where  a.customer='Cdianyingwang'
>   and a.taskid='33'
>   and a.step_id='0' 
>   and top1000(a.only_id)<=10;
> In the above example, the outer predicate top1000(a.only_id)<=10 will be 
> pushed down (PPD) to:
> stage 1:
> ( select t11.state_date,
>t11.customer,
>t11.taskid,
>t11.step_id,
>t11.exit_title,
>t11.pv,
>concat(t11.customer,t11.taskid,t11.step_id) as 
> only_id
>from
>   (  select 
> state_date,customer,taskid,step_id,exit_title,count(*) as pv
>  from bdi_fact2.mid_url_step
>  where exit_url!='-1'
>  and exit_title !='-1'
>  and l_date='2015-08-31'
>  group by 
> state_date,customer,taskid,step_id,exit_title
> )t11
>)t1
> This stage has 2 reducers, so it will output 20 records; at the outer stage, 
> the final result is exactly those 20 records.
> So I want to know: is there any way to hint that this topN UDF predicate 
> should not be pushed down?
> Thanks
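
One coarse session-level workaround sketch (it disables predicate pushdown for 
the whole query, not only for the topN UDF, so it may change other plans too):

```sql
-- Hedged workaround: turn off predicate pushdown so the
-- top1000(a.only_id) <= 10 filter stays in the outer stage
-- instead of being pushed into the subquery.
SET hive.optimize.ppd=false;
-- ...then run the query above unchanged.
```

Alternatively, Hive does not push down predicates over non-deterministic 
functions, so annotating the UDF class with @UDFType(deterministic = false) is 
another commonly suggested approach, at the cost of losing other optimizations 
for that function.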





[jira] [Updated] (HIVE-11930) how to prevent ppd the topN(a) udf predication in where clause?

2015-09-24 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11930:
-
Priority: Minor  (was: Blocker)

> how to prevent ppd the topN(a) udf predication in where clause?
> ---
>
> Key: HIVE-11930
> URL: https://issues.apache.org/jira/browse/HIVE-11930
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Minor
>





[jira] [Updated] (HIVE-11930) how to prevent ppd the topN(a) udf predication in where clause?

2015-09-24 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11930:
-
Issue Type: New Feature  (was: Bug)

> how to prevent ppd the topN(a) udf predication in where clause?
> ---
>
> Key: HIVE-11930
> URL: https://issues.apache.org/jira/browse/HIVE-11930
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Blocker
>





[jira] [Commented] (HIVE-11840) when multi insert the inputformat becomes OneNullRowInputFormat

2015-09-22 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903820#comment-14903820
 ] 

Feng Yuan commented on HIVE-11840:
--

Is there anyone who can help with this?

> when multi insert the inputformat becomes OneNullRowInputFormat
> ---
>
> Key: HIVE-11840
> URL: https://issues.apache.org/jira/browse/HIVE-11840
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Blocker
> Fix For: 0.14.1
>
> Attachments: multi insert, single__insert
>
>
> example:
> from portrait.rec_feature_feedback a 
> insert overwrite table portrait.test1 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('949722CF_12F7_523A_EE21_E3D591B7E755') 
> insert overwrite table portrait.test2 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('test') 
> insert overwrite table portrait.test3 select iid, feedback_15day, 
> feedback_7day, feedback_5day, feedback_3day, feedback_1day where l_date = 
> '2015-09-09' and bid in ('F7734668_CC49_8C4F_24C5_EA8B6728E394')
> With a single insert it works, but with the multi insert, when I select * 
> from test1 I get:
> NULL NULL NULL NULL NULL NULL.
> In "explain extended" I see:
> Path -> Alias:
> -mr-10006portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Cyiyaowang, bid=F7734668_CC49_8C4F_24C5_EA8B6728E394} [a]
> -mr-10007portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Czgc_pc, bid=949722CF_12F7_523A_EE21_E3D591B7E755} [a]
>   Path -> Partition:
> -mr-10006portrait.rec_feature_feedback{l_date=2015-09-09, 
> cid=Cyiyaowang, bid=F7734668_CC49_8C4F_24C5_EA8B6728E394} 
>   Partition
> base file name: bid=F7734668_CC49_8C4F_24C5_EA8B6728E394
> input format: org.apache.hadoop.hive.ql.io.OneNullRowInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> partition values:
>   bid F7734668_CC49_8C4F_24C5_EA8B6728E394
>   cid Cyiyaowang
>   l_date 2015-09-09
> but when single insert:
> Path -> Alias:
> 
> hdfs://bfdhadoopcool/warehouse/portrait.db/rec_feature_feedback/l_date=2015-09-09/cid=Czgc_pc/bid=949722CF_12F7_523A_EE21_E3D591B7E755
>  [a]
>   Path -> Partition:
> 
> hdfs://bfdhadoopcool/warehouse/portrait.db/rec_feature_feedback/l_date=2015-09-09/cid=Czgc_pc/bid=949722CF_12F7_523A_EE21_E3D591B7E755
>  
>   Partition
> base file name: bid=949722CF_12F7_523A_EE21_E3D591B7E755
> input format: org.apache.hadoop.mapred.TextInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> partition values:
>   bid 949722CF_12F7_523A_EE21_E3D591B7E755
>   cid Czgc_pc
>   l_date 2015-09-09
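
Until the planner issue is understood, a workaround sketch (based on the report 
above that single inserts produce the correct input format) is simply to run 
the inserts as separate statements:

```sql
-- Hedged workaround: split the multi-insert into independent single
-- inserts, which are reported above to use the correct TextInputFormat.
INSERT OVERWRITE TABLE portrait.test1
SELECT iid, feedback_15day, feedback_7day, feedback_5day,
       feedback_3day, feedback_1day
FROM portrait.rec_feature_feedback
WHERE l_date = '2015-09-09'
  AND bid IN ('949722CF_12F7_523A_EE21_E3D591B7E755');
-- repeat analogously for portrait.test2 and portrait.test3
```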





[jira] [Updated] (HIVE-11840) when multi insert the inputformat becomes OneNullRowInputFormat

2015-09-22 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11840:
-
Priority: Blocker  (was: Critical)






[jira] [Commented] (HIVE-11825) get_json_object(col,'$.a') is null in where clause didn`t work

2015-09-16 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746983#comment-14746983
 ] 

Feng Yuan commented on HIVE-11825:
--

Thank you for the detailed reply; I will try your second approach. If you have 
time, could you please commit your patch for 
"ALLOW_BACKSLASH_ESCAPING_ANY_CHARACTER"? Thank you!



> get_json_object(col,'$.a') is null in where clause didn`t work
> --
>
> Key: HIVE-11825
> URL: https://issues.apache.org/jira/browse/HIVE-11825
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Critical
> Fix For: 0.14.1
>
>
> example:
> select attr from raw_kafka_item_dt0 where l_date='2015-09-06' and 
> customer='Czgc_news' and get_json_object(attr,'$.title') is NULL limit 10;
> but in the results, the title is still not null!
> {"title":"思科Q4收入估$79.2亿 
> 前景阴云笼罩","ItemType":"NewsBase","keywords":"思科Q4收入估\$79.2亿 
> 前景阴云笼罩","random":"1420253511075","callback":"BCore.instances[2].callbacks[1]","user_agent":"Mozilla/5.0
>  (iPhone; U; CPU iPhone OS 4_2_1 like Mac OS X; en-us) AppleWebKit/533.17.9 
> (KHTML; like Gecko) Version/5.0.2 Mobile/8C148 
> Safari/6533.18.5","is_newgid":"false","uuid":"DS.Input:b56c782bcb75035d:2116:003dcd40:54a75947","ptime":"1.1549997E9"}
>  
> attr is a dict





[jira] [Commented] (HIVE-11825) get_json_object(col,'$.a') is null in where clause didn`t work

2015-09-16 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747226#comment-14747226
 ] 

Feng Yuan commented on HIVE-11825:
--

Thank you so much, it works!

> get_json_object(col,'$.a') is null in where clause didn`t work
> --
>
> Key: HIVE-11825
> URL: https://issues.apache.org/jira/browse/HIVE-11825
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Critical
> Fix For: 0.14.1
>
> Attachments: HIVE-11825.patch
>
>





[jira] [Updated] (HIVE-11840) when multi insert the inputformat becomes OneNullRowInputFormat

2015-09-16 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11840:
-
Attachment: single__insert
multi insert

> when multi insert the inputformat becomes OneNullRowInputFormat
> ---
>
> Key: HIVE-11840
> URL: https://issues.apache.org/jira/browse/HIVE-11840
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Critical
> Fix For: 0.14.1
>
> Attachments: multi insert, single__insert
>
>





[jira] [Commented] (HIVE-11825) get_json_object(col,'$.a') is null in where clause didn`t work

2015-09-15 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746721#comment-14746721
 ] 

Feng Yuan commented on HIVE-11825:
--

Thanks Cazen Lee, that is exactly the reason. Is it a bug? Is there an existing issue for it?






[jira] [Updated] (HIVE-11825) get_json_object(col,'$.a') is null in where clause didn`t work

2015-09-15 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11825:
-
Description: 
example:
select attr from raw_kafka_item_dt0 where l_date='2015-09-06' and 
customer='Czgc_news' and get_json_object(attr,'$.title') is NULL limit 10;
but in the results, the title is still not null!
{"title":"思科Q4收入估$79.2亿 
前景阴云笼罩","ItemType":"NewsBase","keywords":"思科Q4收入估\$79.2亿 
前景阴云笼罩","random":"1420253511075","callback":"BCore.instances[2].callbacks[1]","user_agent":"Mozilla/5.0
 (iPhone; U; CPU iPhone OS 4_2_1 like Mac OS X; en-us) AppleWebKit/533.17.9 
(KHTML; like Gecko) Version/5.0.2 Mobile/8C148 
Safari/6533.18.5","is_newgid":"false","uuid":"DS.Input:b56c782bcb75035d:2116:003dcd40:54a75947","ptime":"1.1549997E9"}
 

attr is a dict
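
An alternative sketch that sidesteps get_json_object's parsing path is 
json_tuple (assuming the same table and column names as in the example above); 
whether it tolerates the \$ escape in this particular data would still need to 
be verified:

```sql
-- Hedged alternative: extract title via json_tuple and filter on it.
SELECT attr
FROM raw_kafka_item_dt0
LATERAL VIEW json_tuple(attr, 'title') jt AS title
WHERE l_date = '2015-09-06'
  AND customer = 'Czgc_news'
  AND jt.title IS NULL
LIMIT 10;
```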

> get_json_object(col,'$.a') is null in where clause didn`t work
> --
>
> Key: HIVE-11825
> URL: https://issues.apache.org/jira/browse/HIVE-11825
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
>Priority: Critical
> Fix For: 0.14.1
>
>
> example:
> select attr from raw_kafka_item_dt0 where l_date='2015-09-06' and 
> customer='Czgc_news' and get_json_object(attr,'$.title') is NULL limit 10;
> But in the results, title is still not null:
> {"title":"思科Q4收入估$79.2亿 
> 前景阴云笼罩","ItemType":"NewsBase","keywords":"思科Q4收入估\$79.2亿 
> 前景阴云笼罩","random":"1420253511075","callback":"BCore.instances[2].callbacks[1]","user_agent":"Mozilla/5.0
>  (iPhone; U; CPU iPhone OS 4_2_1 like Mac OS X; en-us) AppleWebKit/533.17.9 
> (KHTML; like Gecko) Version/5.0.2 Mobile/8C148 
> Safari/6533.18.5","is_newgid":"false","uuid":"DS.Input:b56c782bcb75035d:2116:003dcd40:54a75947","ptime":"1.1549997E9"}
>  
> attr is a dict





[jira] [Updated] (HIVE-11400) insert overwrite task always stuck at latest Move Operator.

2015-07-29 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11400:
-
Attachment: failed_logs

 insert overwrite task always stuck at latest Move Operator.
 --

 Key: HIVE-11400
 URL: https://issues.apache.org/jira/browse/HIVE-11400
 Project: Hive
  Issue Type: Bug
  Components: Hive, Query Processor
Affects Versions: 0.14.0
 Environment: hadoop 2.6.0,centos 6.5
Reporter: Feng Yuan
 Attachments: failed_logs, success_logs, task_explain


 When I run a task like insert overwrite table a (select * from b join 
 select * from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. 
 the parser explains that the task has 3 jobs, but the third job (or stage) 
 never gets executed).
 There are two files:
 1. the hql explain file.
 2. the running logs.
 You will see that stage-0 in the explain file is a Move Operation, but you 
 will not see it in the running logs. In fact, 16 of 17 jobs have completed 
 (actually, the 13th job got lost? I don't see it anywhere in the logs), but 
 the 17th job hangs forever, and I don't see any words like "Loading data to 
 table" in the logs.
 Can someone help with this?
 Thank you very much!
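The reported hang can be pictured with a small, hypothetical stage-runner sketch (this is not Hive's scheduler; it only illustrates that a final Move stage gated on all upstream MapReduce jobs never fires while one job is lost or hung):

```python
# Hypothetical sketch: Hive plans a final Move stage (stage-0) that
# runs only after every upstream MapReduce job finishes. If one job
# is lost or hung, the Move Operator is never reached, matching the
# report that 16 of 17 jobs completed but "Loading data to table"
# never appeared in the logs.
def final_move_stage(completed_jobs, total_jobs):
    if completed_jobs < total_jobs:
        return "hung: waiting for %d job(s)" % (total_jobs - completed_jobs)
    return "Loading data to table"

print(final_move_stage(16, 17))  # the reported state: one job lost/hung
print(final_move_stage(17, 17))  # what a successful run would log
```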





[jira] [Updated] (HIVE-11400) insert overwrite task always stuck at latest Move Operator.

2015-07-29 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11400:
-
Attachment: success_logs

 insert overwrite task always stuck at latest Move Operator.
 --

 Key: HIVE-11400
 URL: https://issues.apache.org/jira/browse/HIVE-11400
 Project: Hive
  Issue Type: Bug
  Components: Hive, Query Processor
Affects Versions: 0.14.0
 Environment: hadoop 2.6.0,centos 6.5
Reporter: Feng Yuan
 Attachments: success_logs, task_explain


 When I run a task like insert overwrite table a (select * from b join 
 select * from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. 
 the parser explains that the task has 3 jobs, but the third job (or stage) 
 never gets executed).
 There are two files:
 1. the hql explain file.
 2. the running logs.
 You will see that stage-0 in the explain file is a Move Operation, but you 
 will not see it in the running logs. In fact, 16 of 17 jobs have completed 
 (actually, the 13th job got lost? I don't see it anywhere in the logs), but 
 the 17th job hangs forever, and I don't see any words like "Loading data to 
 table" in the logs.
 Can someone help with this?
 Thank you very much!





[jira] [Updated] (HIVE-11400) insert overwrite task always stuck at latest Move Operator.

2015-07-29 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11400:
-
Attachment: task_explain
running_logs

 insert overwrite task always stuck at latest Move Operator.
 --

 Key: HIVE-11400
 URL: https://issues.apache.org/jira/browse/HIVE-11400
 Project: Hive
  Issue Type: Bug
  Components: Hive, Query Processor
Affects Versions: 0.14.0
 Environment: hadoop 2.6.0,centos 6.5
Reporter: Feng Yuan
 Attachments: running_logs, task_explain


 When I run a task like insert overwrite table a (select * from b join 
 select * from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. 
 the parser explains that the task has 3 jobs, but the third job never gets 
 launched; it is not even assigned an AppId like job_123XXX).
 Here I have two logs:
 1. the hql explain.
 2. the running logs.
 You will see that stage-0 in the explain file is a Move Operation, but you 
 will not see it in the running logs. In fact, 16 of 17 jobs have completed, 
 but the 17th job hangs forever, and I don't know the reason.
 Can someone help with this?
 Thank you very much!





[jira] [Updated] (HIVE-11400) insert overwrite task always stuck at latest Move Operator.

2015-07-29 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11400:
-
Description: 
When I run a task like insert overwrite table a (select * from b join select 
* from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. the parser 
explains that the task has 3 jobs, but the third job (or stage) never gets 
executed).
There are two files:
1. the hql explain file.
2. the running logs.
You will see that stage-0 in the explain file is a Move Operation, but you will 
not see it in the running logs. In fact, 16 of 17 jobs have completed (actually, 
the 13th job got lost? I don't see it anywhere in the logs), but the 17th job 
hangs forever, and I don't know the reason.

Can someone help with this?
Thank you very much!

  was:
When I run a task like insert overwrite table a (select * from b join select 
* from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. the parser 
explains that the task has 3 jobs, but the third job never gets launched; it is 
not even assigned an AppId like job_123XXX).
Here I have two logs:
1. the hql explain.
2. the running logs.
You will see that stage-0 in the explain file is a Move Operation, but you will 
not see it in the running logs. In fact, 16 of 17 jobs have completed, but the 
17th job hangs forever, and I don't know the reason.

Can someone help with this?
Thank you very much!


 insert overwrite task always stuck at latest Move Operator.
 --

 Key: HIVE-11400
 URL: https://issues.apache.org/jira/browse/HIVE-11400
 Project: Hive
  Issue Type: Bug
  Components: Hive, Query Processor
Affects Versions: 0.14.0
 Environment: hadoop 2.6.0,centos 6.5
Reporter: Feng Yuan
 Attachments: running_logs, task_explain


 When I run a task like insert overwrite table a (select * from b join 
 select * from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. 
 the parser explains that the task has 3 jobs, but the third job (or stage) 
 never gets executed).
 There are two files:
 1. the hql explain file.
 2. the running logs.
 You will see that stage-0 in the explain file is a Move Operation, but you 
 will not see it in the running logs. In fact, 16 of 17 jobs have completed 
 (actually, the 13th job got lost? I don't see it anywhere in the logs), but 
 the 17th job hangs forever, and I don't know the reason.
 Can someone help with this?
 Thank you very much!





[jira] [Updated] (HIVE-11400) insert overwrite task always stuck at latest Move Operator.

2015-07-29 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11400:
-
Description: 
When I run a task like insert overwrite table a (select * from b join select 
* from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. the parser 
explains that the task has 3 jobs, but the third job (or stage) never gets 
executed).
There are two files:
1. the hql explain file.
2. the running logs.
You will see that stage-0 in the explain file is a Move Operation, but you will 
not see it in the running logs. In fact, 16 of 17 jobs have completed (actually, 
the 13th job got lost? I don't see it anywhere in the logs), but the 17th job 
hangs forever, and I don't see any words like "Loading data to table" in the 
logs.

Can someone help with this?
Thank you very much!

  was:
When I run a task like insert overwrite table a (select * from b join select 
* from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. the parser 
explains that the task has 3 jobs, but the third job (or stage) never gets 
executed).
There are two files:
1. the hql explain file.
2. the running logs.
You will see that stage-0 in the explain file is a Move Operation, but you will 
not see it in the running logs. In fact, 16 of 17 jobs have completed (actually, 
the 13th job got lost? I don't see it anywhere in the logs), but the 17th job 
hangs forever, and I don't know the reason.

Can someone help with this?
Thank you very much!


 insert overwrite task always stuck at latest Move Operator.
 --

 Key: HIVE-11400
 URL: https://issues.apache.org/jira/browse/HIVE-11400
 Project: Hive
  Issue Type: Bug
  Components: Hive, Query Processor
Affects Versions: 0.14.0
 Environment: hadoop 2.6.0,centos 6.5
Reporter: Feng Yuan
 Attachments: running_logs, task_explain


 When I run a task like insert overwrite table a (select * from b join 
 select * from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. 
 the parser explains that the task has 3 jobs, but the third job (or stage) 
 never gets executed).
 There are two files:
 1. the hql explain file.
 2. the running logs.
 You will see that stage-0 in the explain file is a Move Operation, but you 
 will not see it in the running logs. In fact, 16 of 17 jobs have completed 
 (actually, the 13th job got lost? I don't see it anywhere in the logs), but 
 the 17th job hangs forever, and I don't see any words like "Loading data to 
 table" in the logs.
 Can someone help with this?
 Thank you very much!





[jira] [Updated] (HIVE-11400) insert overwrite task always stuck at latest job

2015-07-29 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11400:
-
Description: 
When I run a task like insert overwrite table a (select * from b join select 
* from c on b.id=c.id) tmp; it gets stuck at the last job (e.g. the parser 
explains that the task has 3 jobs, but the third job (or stage) never gets 
executed).
There are two files:
1. the hql explain file.
2. the running logs.
You will see that stage-0 in the explain file is a Move Operation, but you will 
not see it in the running logs. In fact, 16 of 17 jobs have completed (actually, 
the 13th job got lost? I don't see it anywhere in the logs), but the 17th job 
hangs forever.

Can someone help with this?
Thank you very much!

  was:
When I run a task like insert overwrite table a (select * from b join select 
* from c on b.id=c.id) tmp; it gets stuck at the last stage (e.g. the parser 
explains that the task has 3 jobs, but the third job (or stage) never gets 
executed).
There are two files:
1. the hql explain file.
2. the running logs.
You will see that stage-0 in the explain file is a Move Operation, but you will 
not see it in the running logs. In fact, 16 of 17 jobs have completed (actually, 
the 13th job got lost? I don't see it anywhere in the logs), but the 17th job 
hangs forever, and I don't see any words like "Loading data to table" in the 
logs.

Can someone help with this?
Thank you very much!

Summary: insert overwrite task always stuck at latest job  (was: insert 
overwrite task always stuck at latest Move Operator.)

 insert overwrite task always stuck at latest job
 ---

 Key: HIVE-11400
 URL: https://issues.apache.org/jira/browse/HIVE-11400
 Project: Hive
  Issue Type: Bug
  Components: Hive, Query Processor
Affects Versions: 0.14.0
 Environment: hadoop 2.6.0,centos 6.5
Reporter: Feng Yuan
 Attachments: failed_logs, success_logs, task_explain


 When I run a task like insert overwrite table a (select * from b join 
 select * from c on b.id=c.id) tmp; it gets stuck at the last job (e.g. the 
 parser explains that the task has 3 jobs, but the third job (or stage) never 
 gets executed).
 There are two files:
 1. the hql explain file.
 2. the running logs.
 You will see that stage-0 in the explain file is a Move Operation, but you 
 will not see it in the running logs. In fact, 16 of 17 jobs have completed 
 (actually, the 13th job got lost? I don't see it anywhere in the logs), but 
 the 17th job hangs forever.
 Can someone help with this?
 Thank you very much!





[jira] [Updated] (HIVE-11400) insert overwrite task always stuck at latest job

2015-07-29 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HIVE-11400:
-
Description: 
When I run a task like insert overwrite table a (select * from b join select 
* from c on b.id=c.id) tmp; it gets stuck at the last job (e.g. the parser 
explains that the task has 3 jobs, but the third job (or stage) never gets 
executed).
There are two files:
1. the hql explain file.
2. the running logs.
You will see that stage-0 in the explain file is a Move Operation, but you will 
not see it in the running logs. In fact, 16 of 17 jobs have completed (actually, 
the 13th job got lost? I don't see it anywhere in the logs), but the 17th job 
hangs forever; it is not even assigned a job id or launched!

Can someone help with this?
Thank you very much!

  was:
When I run a task like insert overwrite table a (select * from b join select 
* from c on b.id=c.id) tmp; it gets stuck at the last job (e.g. the parser 
explains that the task has 3 jobs, but the third job (or stage) never gets 
executed).
There are two files:
1. the hql explain file.
2. the running logs.
You will see that stage-0 in the explain file is a Move Operation, but you will 
not see it in the running logs. In fact, 16 of 17 jobs have completed (actually, 
the 13th job got lost? I don't see it anywhere in the logs), but the 17th job 
hangs forever.

Can someone help with this?
Thank you very much!


 insert overwrite task always stuck at latest job
 ---

 Key: HIVE-11400
 URL: https://issues.apache.org/jira/browse/HIVE-11400
 Project: Hive
  Issue Type: Bug
  Components: Hive, Query Processor
Affects Versions: 0.14.0
 Environment: hadoop 2.6.0,centos 6.5
Reporter: Feng Yuan
 Attachments: failed_logs, success_logs, task_explain


 When I run a task like insert overwrite table a (select * from b join 
 select * from c on b.id=c.id) tmp; it gets stuck at the last job (e.g. the 
 parser explains that the task has 3 jobs, but the third job (or stage) never 
 gets executed).
 There are two files:
 1. the hql explain file.
 2. the running logs.
 You will see that stage-0 in the explain file is a Move Operation, but you 
 will not see it in the running logs. In fact, 16 of 17 jobs have completed 
 (actually, the 13th job got lost? I don't see it anywhere in the logs), but 
 the 17th job hangs forever; it is not even assigned a job id or launched!
 Can someone help with this?
 Thank you very much!


