Re: meet error when building hive-2.4.x from source

2017-06-07 Thread Bing Li
Hi,
Please try building the hive-storage-api module locally first.
e.g.
cd storage-api
mvn clean install -DskipTests

And then build the whole hive project.

2017-06-05 17:20 GMT+08:00 赵伟 :

> Hi!
> First of all, thank you for reading my letter.
> I met a problem when building the 2.4.x branch from source code.
> My build command: mvn clean package -Pdist -e
> Here is the stack trace:
> [INFO] Hive ... SUCCESS [  1.955 s]
> [INFO] Hive Shims Common .. SUCCESS [  6.070 s]
> [INFO] Hive Shims 0.23  SUCCESS [  4.526 s]
> [INFO] Hive Shims Scheduler ... SUCCESS [  1.775 s]
> [INFO] Hive Shims . SUCCESS [  0.994 s]
> [INFO] Hive Common  SUCCESS [ 51.173 s]
> [INFO] Hive Service RPC ... SUCCESS [  4.936 s]
> [INFO] Hive Serde . FAILURE [  0.063 s]
> [INFO] Hive Metastore . SKIPPED
> [INFO] Hive Vector-Code-Gen Utilities . SKIPPED
> [INFO] Hive Llap Common ... SKIPPED
> [INFO] Hive Llap Client ... SKIPPED
> [INFO] Hive Llap Tez .. SKIPPED
> [INFO] Spark Remote Client  SKIPPED
> [INFO] Hive Query Language  SKIPPED
> [INFO] Hive Llap Server ... SKIPPED
> [INFO] Hive Service ... SKIPPED
> [INFO] Hive Accumulo Handler .. SKIPPED
> [INFO] Hive JDBC .. SKIPPED
> [INFO] Hive Beeline ... SKIPPED
> [INFO] Hive CLI ... SKIPPED
> [INFO] Hive Contrib ... SKIPPED
> [INFO] Hive Druid Handler . SKIPPED
> [INFO] Hive HBase Handler . SKIPPED
> [INFO] Hive JDBC Handler .. SKIPPED
> [INFO] Hive HCatalog .. SKIPPED
> [INFO] Hive HCatalog Core . SKIPPED
> [INFO] Hive HCatalog Pig Adapter .. SKIPPED
> [INFO] Hive HCatalog Server Extensions  SKIPPED
> [INFO] Hive HCatalog Webhcat Java Client .. SKIPPED
> [INFO] Hive HCatalog Webhcat .. SKIPPED
> [INFO] Hive HCatalog Streaming  SKIPPED
> [INFO] Hive HPL/SQL ... SKIPPED
> [INFO] Hive Llap External Client .. SKIPPED
> [INFO] Hive Shims Aggregator .. SKIPPED
> [INFO] Hive TestUtils . SKIPPED
> [INFO] Hive Packaging . SKIPPED
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:12 min
> [INFO] Finished at: 2017-06-05T17:11:30+08:00
> [INFO] Final Memory: 77M/783M
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal on project hive-serde: Could not resolve
> dependencies for project org.apache.hive:hive-serde:jar:2.3.0: Failure to
> find org.apache.hive:hive-storage-api:jar:2.4.0 in
> http://www.datanucleus.org/downloads/maven2 was cached in the local
> repository, resolution will not be reattempted until the update interval of
> datanucleus has elapsed or updates are forced -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
> goal on project hive-serde: Could not resolve dependencies for project
> org.apache.hive:hive-serde:jar:2.3.0: Failure to find
> org.apache.hive:hive-storage-api:jar:2.4.0 in
> http://www.datanucleus.org/downloads/maven2 was cached in the local
> repository, resolution will not be reattempted until the update interval
> of datanucleus has elapsed or updates are forced
> at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:221)
> at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.resolveProjectDependencies(LifecycleDependencyResolver.java:127)
> at org.apache.maven.lifecycle.internal.MojoExecutor.ensureDependenciesAreResolved(MojoExecutor.java:245)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:199)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(

File name conflict when have multiple INSERT INTO queries running in parallel

2016-05-25 Thread Bing Li
Hi, All

We have an application which connects to HiveServer2 via JDBC.
In the application, it executes "INSERT INTO" queries against the same table.

If many users run the application at the same time, some of the INSERTs
can fail.

From the Hadoop log, we learned that the failure was caused by the target
file name already existing.

Have you run into this issue as well?


I already filed it as HIVE-13850
https://issues.apache.org/jira/browse/HIVE-13850


Thank you.
- Bing


Failed to create HiveMetaStoreClient object in proxy user with Kerberos enabled

2015-11-16 Thread Bing Li
Hi,
I wrote a Java client to talk with HiveMetaStore. (Hive 1.2.0)
But I found that it can't create a HiveMetaStoreClient object successfully
via a proxy user in a Kerberos environment.

===
15/10/13 00:14:38 ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by
GSSException: No valid credentials provided (Mechanism level: Failed to
find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
==

When I debugged Hive, I found that the error came from the open() method in
the HiveMetaStoreClient class.

Around line 406,
 transport = UserGroupInformation.getCurrentUser().doAs(new
PrivilegedExceptionAction() {  // FAILED, because the current
user doesn't have the credential

But it will work if I change the above line to
 transport = UserGroupInformation.getCurrentUser().getRealUser().doAs(new
PrivilegedExceptionAction() {  // PASS
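
For reference, here is a self-contained sketch of that suggested change as a
standalone helper (an illustration only, not the actual Hive patch; the class
and method names are invented for the example):

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.thrift.transport.TTransport;

public final class MetaStoreSaslHelper {
  private MetaStoreSaslHelper() {}

  // Open the Thrift transport under the real (Kerberos-authenticated) user
  // when the current user is a proxy, since, as described above, the TGT
  // lives in the real user's subject rather than the proxy's.
  public static TTransport openAsRealUser(final TTransport transport)
      throws Exception {
    UserGroupInformation current = UserGroupInformation.getCurrentUser();
    UserGroupInformation opener =
        current.getRealUser() != null ? current.getRealUser() : current;
    return opener.doAs(new PrivilegedExceptionAction<TTransport>() {
      public TTransport run() throws Exception {
        transport.open();  // the SASL/GSSAPI handshake happens here
        return transport;
      }
    });
  }
}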
Searching around, I found:
1. DRILL-3413 fixes this error on the Drill side
2. HIVE-4984 (hive metastore should not re-use hadoop proxy configuration)
mentions related things, but its status is still OPEN

My questions:
1. Have you noticed this issue in HiveMetaStoreClient? If yes, does Hive
plan to fix it?
2. Is the simple change (shown above) in the open() method of
HiveMetaStoreClient enough?


Thank you.
- Bing


Re: hive query error

2013-08-21 Thread Bing Li
By default, hive.log should exist in /tmp/user_name.
Also, it could be set in $HIVE_HOME/conf/hive-log4j.properties and
hive-exec-log4j.properties
- hive.log.dir
- hive.log.file


2013/8/22 闫昆 yankunhad...@gmail.com

 Hi all,
 When executing a Hive query, it throws the exception below.
 I don't know where the error log is; I found that $HIVE_HOME/logs does not exist.

 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks not specified. Estimated from input data size: 3
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=number
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=number
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=number
 Cannot run job locally: Input Size (= 2304882371) is larger than
 hive.exec.mode.local.auto.inputbytes.max (= 134217728)
 Starting Job = job_1377137178318_0001, Tracking URL =
 http://hydra0001:8088/proxy/application_1377137178318_0001/
 Kill Command = /opt/module/hadoop-2.0.0-cdh4.3.0/bin/hadoop job  -kill
 job_1377137178318_0001
 Hadoop job information for Stage-1: number of mappers: 18; number of
 reducers: 3
 2013-08-22 10:07:49,654 Stage-1 map = 0%,  reduce = 0%
 2013-08-22 10:08:05,544 Stage-1 map = 6%,  reduce = 0%
 2013-08-22 10:08:07,289 Stage-1 map = 0%,  reduce = 0%
 2013-08-22 10:08:58,217 Stage-1 map = 28%,  reduce = 0%
 2013-08-22 10:09:07,210 Stage-1 map = 22%,  reduce = 0%
 Ended Job = job_1377137178318_0001 with errors
 Error during job, obtaining debugging information...
 null
 FAILED: Execution Error, return code 2 from
 org.apache.hadoop.hive.ql.exec.MapRedTask
 MapReduce Jobs Launched:
 Job 0: Map: 18  Reduce: 3   HDFS Read: 0 HDFS Write: 0 FAIL
 Total MapReduce CPU Time Spent: 0 msec
 --

 In the Hadoop world, I am just a novice exploring the entire Hadoop
 ecosystem; I hope one day I can contribute my own code

 YanBit
 yankunhad...@gmail.com




Re: No java compiler available exception for HWI

2013-08-20 Thread Bing Li
Hi, Eric et al
Did you resolve this failure?
I'm using Hive-0.11.0, and get the same error when accessing HWI via a
browser.

I already set the following properties in hive-site.xml
- hive.hwi.listen.host
- hive.hwi.listen.port
- hive.hwi.war.file

And copied two jasper jars into hive/lib:
- jasper-compiler-5.5.23.jar
- jasper-runtime-5.5.23.jar

Thanks,
- Bing


2013/3/30 Eric Chu e...@rocketfuel.com

 Hi,

 I'm running Hive 0.10 and I want to support HWI (besides CLI and HUE).
 When I started HWI I didn't get any error. However, when I went to the
 Hive Server address:port/hwi in my browser I saw the error below
 complaining about "No Java compiler available". My JAVA_HOME is set to
 /usr/lib/jvm/java-1.6.0-sun-1.6.0.16.

 Besides https://cwiki.apache.org/Hive/hivewebinterface.html, there's not
 much documentation on HWI. I'm wondering if anyone else has seen this or
 has any idea about what's wrong?

 Thanks.

 Eric

 Problem accessing /hwi/. Reason:

 No Java compiler available

 Caused by:

 java.lang.IllegalStateException: No Java compiler available
   at org.apache.jasper.JspCompilationContext.createCompiler(JspCompilationContext.java:225)
   at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:560)
   at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:299)
   at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:315)
   at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
   at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
   at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
   at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327)
   at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126)
   at org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:503)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
   at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
   at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
   at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
   at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49)
   at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
   at org.mortbay.jetty.Server.handle(Server.java:326)
   at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
   at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
   at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
   at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)




Does HiveServer2 support delegation token?

2013-07-23 Thread Bing Li
Hi, all
HiveMetastore supports delegation token.
Does HiveServer2 support it as well? If not, do we have a plan for this?
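
For context, here is a minimal sketch of the metastore capability mentioned
above. It assumes a Kerberos-secured metastore and the IMetaStoreClient
getDelegationToken API; the renewer principal below is a placeholder:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

public class FetchMetastoreToken {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
    // Ask the metastore for a delegation token for the current user;
    // the renewer principal is a placeholder.
    String tokenStr = client.getDelegationToken("hive/_HOST@EXAMPLE.COM");
    System.out.println("Token: " + tokenStr);
    client.close();
  }
}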

Besides, on hive wiki
hive.server2.authentication - Authentication mode, default NONE. Options
are NONE, KERBEROS, LDAP and CUSTOM

Will HiveServer2 also support PAM, which could be configured to use multiple
authentication methods such as OS accounts or LDAP?



Thanks,
- Bing


can hive handle concurrent JDBC statements?

2013-04-16 Thread Bing Li
Hi All,


I am writing a java program to run concurrent JDBC statements. But it
failed with:
org.apache.thrift.TApplicationException: execute failed: out of sequence
response


The steps are:
1. Open a connection to jdbc:derby://hiveHost:port/commonDb
2. Run select statements at the same time:
String sql = "select * from " + tableName;
ResultSet rs1 = stmt.executeQuery(sql);
ResultSet rs2 = stmt.executeQuery(sql);
while (rs1.next() && rs2.next())
{
String s1 = rs1.getString(1);
String s2 = rs2.getString(1);
System.out.println(s1 + " | " + s2);
}


My question is: can Hive handle concurrent JDBC statements?
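
For comparison, here is a sketch that gives each query its own Connection and
Statement. It assumes a HiveServer2 endpoint and the
org.apache.hive.jdbc.HiveDriver driver (the URL and table name are
placeholders); a Hive connection is not safe to share across in-flight
statements, and per the JDBC spec, re-executing a Statement closes its
previous ResultSet anyway:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConcurrentHiveQueries {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    final String url = "jdbc:hive2://hiveHost:10000/default"; // placeholder
    final String sql = "select * from my_table";              // placeholder

    Runnable query = new Runnable() {
      public void run() {
        // One Connection/Statement per thread; Hive JDBC connections are
        // not safe to share across concurrently executing statements.
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
          while (rs.next()) {
            System.out.println(rs.getString(1));
          }
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
    };

    Thread t1 = new Thread(query);
    Thread t2 = new Thread(query);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
  }
}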

Thanks,
- Bing


Got a hadoop server IPC version mismatch ERROR in TestCliDriver avro_joins.q

2013-01-13 Thread Bing Li
Hi, guys
I applied the patches for HIVE-895 (add SerDe for Avro serialized data)
and HIVE-3273 (add avro jars into hive execution classpath) on Hive-0.9.0.
And then I ran the following command with hadoop-1.0.3 and avro-1.6.3
 ant test -Dtestcase=TestCliDriver -Dqfile=avro_joins.q
-Dtest.silent=false

But I got an ERROR from hadoop in the unit test. (I can run avro_joins.q
successfully on a real hadoop-1.0.3 cluster.)

I found that IPC version 7 is from hadoop 2.x and version 4 is from
hadoop-1.x, but I didn't set hadoop 2.x in any properties files.
Do you know how this happened in unit test?

Thanks,
- Bing

ERROR

[junit] Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate with client version 4
[junit]  at org.apache.hadoop.ipc.Client.call(Client.java:740)
[junit]  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
[junit]  at $Proxy1.getProtocolVersion(Unknown Source)
[junit]  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
[junit]  at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
[junit]  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
[junit]  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
[junit]  at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
[junit]  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
[junit]  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
[junit]  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
[junit]  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
[junit]  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
[junit]  at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:367)
[junit]  ... 10 more
[junit] Job Submission failed with exception 'java.lang.RuntimeException(org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate with client version 4)'


Re: How to set an empty value to hive.querylog.location to disable the creation of hive history file

2012-12-07 Thread Bing Li
Do you mean that disabling the creation of hive history files is NOT
supported, or that using an empty string to achieve this is NOT supported?

If Hive doesn't support disabling the creation of query logs, do you know
the reason?

Thanks,
- Bing

2012/12/6 Hezhiqiang (Ransom) ransom.hezhiqi...@huawei.com

  It's not supported now.

  I think you can raise it in JIRA.

  Regards
  Ransom

  From: Bing Li [mailto:sarah.lib...@gmail.com]
  Sent: Thursday, December 06, 2012 5:06 PM
  To: user@hive.apache.org
  Subject: Re: How to set an empty value to hive.querylog.location to
  disable the creation of hive history file

  it will exit with an error like

  FAILED: Failed to open Query Log: /dev/null/hive_job_log_xxx.txt

  and pointed out that the path is not a directory.

 2012/12/6 Jithendranath Joijoide pixelma...@gmail.com

  How about setting it to /dev/null? Not sure if that would help in your
  case. Just a hack.

  Regards.

  On Thu, Dec 6, 2012 at 2:14 PM, Bing Li sarah.lib...@gmail.com wrote:

  Hi, all
  Referring to https://cwiki.apache.org/Hive/adminmanual-configuration.html,
  if I set hive.querylog.location to an empty string, it won't create the
  structured log.

  I edited hive-site.xml in HIVE_HOME/conf and added the following setting:
  <property>
    <name>hive.querylog.location</name>
    <value></value>
  </property>

  BUT it didn't work; when launching HIVE_HOME/bin/hive, it created a
  history file in /tmp/user.name, which is the default directory for this
  property.

  Do you know how to set an EMPTY value in hive-site.xml?


 Thanks,
 - Bing




Re: hive-site.xml not found on classpath

2012-11-30 Thread Bing Li
Which version of Hive do you use?

Could you try adding the following debug line in bin/hive before hive
actually executes, and see the result?

echo "CLASSPATH=$CLASSPATH"

if [ "$TORUN" = "" ] ; then
   echo "Service $SERVICE not found"
   echo "Available Services: $SERVICE_LIST"
   exit 7
else
   $TORUN "$@"
fi

The version I used is 0.9.0


2012/11/30 Stephen Boesch java...@gmail.com

 Yes, I do mean the log is in the wrong location, since it was set to a
 persistent path in $HIVE_CONF_DIR/hive-log4j.properties.

 None of the files in that directory appear to be picked up properly:
 neither the hive-site.xml nor log4j.properties.

 I have put echo statements into the 'hive' and 'hive-config.sh' shell
 scripts, and the echo statements prove that HIVE_CONF_DIR is set properly:
  /shared/hive/conf

 But even so the following problems occur:

- the message hive-site.xml is not found in the classpath
- none of the hive-site.xml values are taking properly
- the log4j.properties in that same directory is not taking effect.




 2012/11/29 Bing Li sarah.lib...@gmail.com

 Hi, Stephen
 What did you mean by "the wrong place under /tmp" in
 "I am seeing the following message in the logs (which are in the wrong
 place under /tmp..)"?

 Did you mean that you set a different log dir but it didn't work?

 The log dir should be set in conf/hive-log4j.properties and
 conf/hive-exec-log4j.properties,
 and you can try to reset HIVE_CONF_DIR in conf/hive-env.sh with the
 'export' command.

 - Bing


 2012/11/30 Stephen Boesch java...@gmail.com

 thought i mentioned in the posts those were already set and verified..
 but yes in any case that's first thing looked at.

 steve@mithril:~$ echo $HIVE_CONF_DIR
 /shared/hive/conf
 steve@mithril:~$ echo $HIVE_HOME
 /shared/hive


 2012/11/29 kulkarni.swar...@gmail.com kulkarni.swar...@gmail.com

 Have you tried setting HIVE_HOME and HIVE_CONF_DIR?


 On Thu, Nov 29, 2012 at 2:46 PM, Stephen Boesch java...@gmail.com wrote:

 Yes.


 2012/11/29 Shreepadma Venugopalan shreepa...@cloudera.com

 Are you seeing this message when you bring up the standalone hive
 cli by running 'hive'?


 On Thu, Nov 29, 2012 at 12:56 AM, Stephen Boesch java...@gmail.com wrote:

 i am running under user steve.  the latest log (where this shows up
 ) is  /tmp/steve/hive.log


 2012/11/29 Viral Bajaria viral.baja...@gmail.com

 You are seeing this error when you run the hive cli or in the
 tasktracker logs when you run a query ?

 On Thu, Nov 29, 2012 at 12:42 AM, Stephen Boesch java...@gmail.com
  wrote:


 I am seeing the following message in the logs (which are in the
 wrong place under /tmp..)

  hive-site.xml not found on classpath

 My hive-site.xml is under the standard location  $HIVE_HOME/conf
 so this should not happen.

 Now some posts have mentioned that the HADOOP_CLASSPATH was mangled.
  Mine is not.

 So what is the underlying issue here?

 Thanks

 stephenb








 --
 Swarnim







Can hive-0.8.0 could work with HBase-0.9.2 and Zookeeper-3.4.2?

2012-04-13 Thread Bing Li
Hi, guys
I ran Hive-0.8.0 UT with
- Hadoop-1.0.0 ( applied patches for HIVE-2631 and HIVE-2629)
- HBase-0.92.0
- Zookeeper-3.4.2

But got the following error message:

 [echo] Project: hbase-handler
[junit] Running org.apache.hadoop.hive.cli.TestHBaseCliDriver
[junit] org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for  after 10 tries.
[junit] at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:908)
[junit] at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:814)
[junit] at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
[junit] at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
[junit] at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
[junit] at org.apache.hadoop.hive.hbase.HBaseTestSetup.setUpFixtures(HBaseTestSetup.java:95)
[junit] at org.apache.hadoop.hive.hbase.HBaseTestSetup.preTest(HBaseTestSetup.java:61)
[junit] at org.apache.hadoop.hive.hbase.HBaseQTestUtil.<init>(HBaseQTestUtil.java:31)
[junit] at org.apache.hadoop.hive.cli.TestHBaseCliDriver.setUp(TestHBaseCliDriver.java:43)
[junit] at junit.framework.TestCase.runBare(TestCase.java:132)
[junit] at junit.framework.TestResult$1.protect(TestResult.java:110)
[junit] at junit.framework.TestResult.runProtected(TestResult.java:128)
[junit] at junit.framework.TestResult.run(TestResult.java:113)
[junit] at junit.framework.TestCase.run(TestCase.java:124)
[junit] at junit.framework.TestSuite.runTest(TestSuite.java:243)
[junit] at junit.framework.TestSuite.run(TestSuite.java:238)
[junit] at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
[junit] at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
[junit] at junit.framework.TestResult.runProtected(TestResult.java:128)
[junit] at junit.extensions.TestSetup.run(TestSetup.java:27)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
[junit] Exception: Unable to find region for  after 10 tries.


The failed cases are:
- TestHBaseCliDriver
- TestHBaseMinimrCliDriver
- TestHBaseSerDe

Referring to HIVE-2748, I
- Added jackson-mapper-asl and jackson-core-asl to the Hive classpath to
resolve the "No Class Found" issue
- Modified HBaseTestSetup.java:
   //conf.set("hbase.master", hbaseCluster.getHMasterAddress().toString());
   conf.set("hbase.master",
       hbaseCluster.getMaster().getServerName().getHostAndPort());

But these don't work for the failed cases.

I noticed that the latest patch for HIVE-2748 includes some new classes
related to Thrift and a test case for Zookeeper. Are they necessary for
Hive-0.8.0 to work with HBase-0.92.0 and Zookeeper-3.4.2?
Are the failed cases ONLY caused by the test cases themselves?
Thanks,
- Sarah


Re: Can hive-0.8.0 could work with HBase-0.9.2 and Zookeeper-3.4.2?

2012-04-13 Thread Bing Li
Are the failures ONLY caused by the test cases?

On Apr 14, 2012 at 1:14 AM, Bing Li sarah.lib...@gmail.com wrote:

 thanks, Andes


 2012/4/13 ylyy-1985 ylyy-1...@163.com

  I can tell that the system works with: hadoop-0.20, hbase-0.90.3,
  hive-0.8.1 and zookeeper-3.3.3. Good luck.

  2012-04-13
  Best Regards
  Andes
  Email: ylyy-1...@163.com

  From: Bing Li
  Sent: 2012-04-13 18:35
  Subject: Can hive-0.8.0 could work with HBase-0.9.2 and Zookeeper-3.4.2?
  To: dev d...@hive.apache.org, user user@hive.apache.org
  Cc:



Cannot create an instance of InputFormat

2012-01-17 Thread Bing Li
My steps:
I defined a class, "public class myInputFormat extends TextInputFormat
implements JobConfigurable", to specify the input format.

hive> add jar /home/biadmin/hiveudf/myFileFormat.jar;
Added /home/biadmin/hiveudf/myFileFormat.jar to class path
Added resource: /home/biadmin/hiveudf/myFileFormat.jar

hive> list jars;
/home/biadmin/hiveudf/myFileFormat.jar

hive> create table IOtable(str1 string, str2 string, str3 string) stored as
INPUTFORMAT 'com.mytest.fileformat.myInputFormat' OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat' ;
OK
Time taken: 0.081 seconds

hive> load data local inpath '/home/biadmin/hivetbl/IOtable_data.txt' into
table IOtable;
Copying data from file:/home/biadmin/hivetbl/IOtable_data.txt
Copying file: file:/home/biadmin/hivetbl/IOtable_data.txt
Loading data to table default.iotable
OK
Time taken: 0.147 seconds

hive> select * from IOtable;
OK
Failed with exception java.io.IOException:java.io.IOException: Cannot
create an instance of InputFormat class com.mytest.fileformat.myInputFormat
as specified in mapredWork!
Time taken: 0.059 seconds




Here is my source code:
===
package com.mytest.fileformat;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobConfigurable;
import org.apache.hadoop.mapred.LineRecordReader;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.InputFormat;
import org.apache.hadoop.mapred.TextInputFormat;

@SuppressWarnings("deprecation")
public class myInputFormat extends TextInputFormat implements JobConfigurable {
  TextInputFormat format;
  JobConf job;

  public myInputFormat() {
    format = new TextInputFormat();
  }

  @Override
  public void configure(JobConf job) {
    this.job = job;
    format.configure(job);
  }

  public RecordReader<LongWritable, Text> getRecordReader(
      InputSplit genericSplit, JobConf job, Reporter reporter)
      throws IOException {
    reporter.setStatus(genericSplit.toString());
    return new myLineRecordReader(job, (FileSplit) genericSplit);
  }

  public static class myLineRecordReader implements
      RecordReader<LongWritable, Text> {
    LineRecordReader lineReader;
    LongWritable lineKey;
    Text lineValue;

    public myLineRecordReader(JobConf job, FileSplit split) throws IOException {
      lineReader = new LineRecordReader(job, split);
      lineKey = lineReader.createKey();
      lineValue = lineReader.createValue();
    }

    public boolean next(LongWritable key, Text value) throws IOException {
      while (lineReader.next(lineKey, lineValue)) {
        String strReplace = lineValue.toString().toLowerCase().replace(" ", "\001");
        Text txtReplace = new Text();
        txtReplace.set(strReplace);
        value.set(txtReplace.getBytes(), 0, txtReplace.getLength());
        return true;
      }
      // no more data
      return false;
    }  /** end next **/

    public LongWritable createKey() {
      return lineReader.createKey();
    }

    public Text createValue() {
      return lineReader.createValue();
    }

    public long getPos() throws IOException {
      return lineReader.getPos();
    }

    public float getProgress() throws IOException {
      return lineReader.getProgress();
    }

    public void close() throws IOException {
      lineReader.close();
    }
  }  /** end class myLineRecordReader **/
}


Hive JOIN fails if SELECT statement contains fields from the first table.

2012-01-16 Thread Bing Li
1. I created two Hive tables:
Hive> CREATE EXTERNAL TABLE student_details (studentid INT, studentname
STRING, age INT, gpa FLOAT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE LOCATION '/home/biadmin/hivetbl';

Hive> CREATE EXTERNAL TABLE student_score(studentid INT, classid INT, score
FLOAT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE
LOCATION '/home/biadmin/hivetbl';

2. Load data
Hive> LOAD DATA LOCAL INPATH '/home/biadmin/hivetbl/student_details.txt'
OVERWRITE INTO TABLE student_details;

Hive> LOAD DATA LOCAL INPATH '/home/biadmin/hivetbl/student_score.txt'
OVERWRITE INTO TABLE student_score;

3. Run inner join
Hive> SELECT a.studentid, a.studentname, a.age, b.classid, b.score, c.classname
FROM student_details a JOIN student_score b ON (a.studentid = b.studentid);

Result:
It throws the following exception:
cannot find field studentname from [0:studentid, 1:classid, 2:score]

[My Question]: studentname is a field of the table student_details (the
first table); why is it searched for in the table student_score (the second
table)?

The log looks like this:
... ...
2012-01-15 23:24:41,727 INFO org.apache.hadoop.mapred.TaskInProgress: Error
from attempt_201201152221_0014_m_00_3: java.lang.RuntimeException:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row {"studentid":106,"classid":null,"score":635.0}
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime
Error while processing row {"studentid":106,"classid":null,"score":635.0}
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:550)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
... 4 more
Caused by: java.lang.RuntimeException: cannot find field studentname from
[0:studentid, 1:classid, 2:score]
at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:345)
at org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.getStructFieldRef(LazySimpleStructObjectInspector.java:168)
at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:57)
at org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:896)
at org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:922)
at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:200)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:531)
... 5 more


Does hive REAL enable TestHadoop20SAuthBridge in hive-0.8.0 ? -- [HIVE-2257] patch doesn't work for me

2011-12-22 Thread Bing Li
Hi, All
When I ran the hive UTs, I found that TestHadoop20SAuthBridge wasn't compiled,
so TestHadoop20SAuthBridge won't be run by the "ant test" command.

In src/shims/build.xml, I found the following lines:

  <target name="compile-test" depends="compile">
    <echo message="Project: ${ant.project.name}"/>
    <!-- TODO: move tests to version directory -->
    <!--antcall target="compile_secure_test" inheritRefs="false" inheritAll="false">
      <param name="hadoop.version.ant-internal" value="${hadoop.security.version}" />
      <param name="hadoop.version.ant-internal.prefix" value="${hadoop.security.version.prefix}" />
    </antcall-->
  </target>

Then I uncommented the antcall block above, and it could generate the class
file of TestHadoop20SAuthBridge.
But if I change the security hadoop version to 1.0.0, it fails with:

build_shims:
 [echo] Project: shims
 [echo] Compiling shims against hadoop 1.0.1-SNAPSHOT
(/home/libing/Round-1/hive-0.8.0/src/build/hadoopcore/IHC-1.0.1-SNAPSHOT)

BUILD FAILED
/home/libing/Round-1/hive-0.8.0/src/build.xml:307: The following error
occurred while executing this line:
/home/libing/Round-1/hive-0.8.0/src/build.xml:325: The following error
occurred while executing this line:
/home/libing/Round-1/hive-0.8.0/src/shims/build.xml:76: The following error
occurred while executing this line:
/home/libing/Round-1/hive-0.8.0/src/shims/build.xml:66: srcdir
/home/libing/Round-1/hive-0.8.0/src/shims/src/1.0/java does not exist!


Does it mean that if we want to use a given hadoop as hadoop.security.version,
we should also maintain a matching directory under shims/src/ ourselves?


Thanks,
- Bing


FW: a potential bug in HIVE/HADOOP ? -- MetaStore, createDatabase()

2011-12-14 Thread Bing Li

fyi
--- On Wed, Dec 14, 2011, Bing Li lib...@yahoo.com.cn wrote:

From: Bing Li lib...@yahoo.com.cn
Subject: a potential bug in HIVE/HADOOP ? -- MetaStore, createDatabase()
To: hive dev list d...@hive.apache.org
Date: Wed, Dec 14, 2011, 8:32 PM

Hi, developers
When I ran the Hive UTs with the candidate build of Hive-0.8.0, I found that
TestEmbeddedHiveMetaStore and TestRemoteHiveMetaStore always FAIL under the
ROOT account while they PASS under a NON-ROOT account.

I took a look at the source code of TestHiveMetaStore, and found that 

  fs.mkdirs(
      new Path(HiveConf.getVar(hiveConf,
          HiveConf.ConfVars.METASTOREWAREHOUSE) + "/test"),
      new FsPermission((short) 0));

  client.createDatabase(db);   // always creates the db as ROOT

Do the Hive UTs only support non-root accounts? Otherwise, I think it may be
a potential defect/bug in HADOOP/HIVE.


Thanks,
- Bing


How to make hive handle/show Chinese words

2011-11-02 Thread Bing Li
Hi, guys
I want to load some data files which include Chinese words.
Currently, I found Hive can't display them well.
Is there some setting/property that I can configure to resolve this?

Thanks,
Bing

Does hive support running on an existing NFS

2011-10-31 Thread Bing Li
When I distribute Hive to an NFS and execute a select command, it fails:

hive> SELECT a.foo FROM invites a;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201110310722_0001, Tracking URL =
http://localhost:50030/jobdetails.jsp?jobid=job_201110310722_0001
Kill Command = /home/libing/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201110310722_0001
2011-10-31 07:30:57,717 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201110310722_0001 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRe

Then I took a look at the log of hadoop:

2011-10-31 07:30:54,390 INFO org.apache.hadoop.mapred.TaskInProgress: Error from
attempt_201110310722_0001_m_02_3: java.io.FileNotFoundException: File
/tmp/hive-biadmin/hive_2011-10-31_07-30-05_610_9164994782186337826/-mr-10003/990312dc-3241-4cc7-b9f6-018beb1739bb
does not exist.
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:163)

I'm not sure if it's a defect in Hive or in Hadoop. The hadoop version I used
is 0.20.2.

Hive runtime error while run TestCliDriver auto_join21.q

2011-10-25 Thread Bing Li
Hi, Guys
I met an error similar to the one described in HIVE-1478, but not exactly
the same, when running auto_join21.q in TestCliDriver.
Do you have some ideas on this?

== Re-Produce ==
Hive: 0.7.1
ANT: 1.8.2
Hadoop: 0.20.2
command: ant test -Dtestcase=TestCliDriver -Dqfile=auto_join21.q

    [junit] 11/10/25 10:03:57 INFO persistence.HashMapWrapper: maximum memory: 1048576000
    [junit] 11/10/25 10:03:57 INFO persistence.HashMapWrapper: maximum memory: 1048576000
    [junit] 11/10/25 10:03:57 INFO exec.MapJoinOperator: Initialization Done 1 MAPJOIN
    [junit] 11/10/25 10:03:57 INFO exec.HashTableDummyOperator: Initialization Done 5 HASHTABLEDUMMY
    [junit] 11/10/25 10:03:57 INFO exec.MapOperator: Processing path pfile:/root/libing/N20111024_1337/hive-0.7.1-ibm/src/build/ql/test/data/warehouse/src/kv1.txt
    [junit] 11/10/25 10:03:57 INFO exec.MapOperator: Processing alias src1 for file pfile:/root/libing/N20111024_1337/hive-0.7.1-ibm/src/build/ql/test/data/warehouse/src
    [junit] 11/10/25 10:03:57 INFO exec.MapJoinOperator: *** Load from HashTable File: input : pfile:/root/libing/N20111024_1337/hive-0.7.1-ibm/src/build/ql/test/data/warehouse/src/kv1.txt
    [junit] 11/10/25 10:03:57 INFO exec.MapJoinOperator:    Load back 1 hashtable file from tmp file uri:file:/tmp/root/hive_2011-10-25_10-03-53_617_6340495404173422880/-local-10003/HashTable-Stage-5/MapJoin-1--.hashtable
    [junit] 11/10/25 10:03:57 WARN lazybinary.LazyBinaryStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
    [junit] 11/10/25 10:03:57 INFO exec.MapJoinOperator:    Load back 1 hashtable file from tmp file uri:file:/tmp/root/hive_2011-10-25_10-03-53_617_6340495404173422880/-local-10003/HashTable-Stage-5/MapJoin-2--.hashtable
    [junit] 11/10/25 10:03:57 WARN lazybinary.LazyBinaryStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
    [junit] 11/10/25 10:03:57 INFO exec.MapOperator: 12 forwarding 1 rows
    [junit] 11/10/25 10:03:57 INFO exec.TableScanOperator: 0 forwarding 1 rows
    [junit] 11/10/25 10:03:57 FATAL ExecMapper: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"key":"238","value":"val_238"}
    [junit] at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546)
    [junit] at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
    [junit] at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    [junit] at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
    [junit] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
    [junit] at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
    [junit] Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text incompatible with org.apache.hadoop.io.BooleanWritable
    [junit] at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.joinObjectsRightOuterJoin(CommonJoinOperator.java:489)
    [junit] at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.joinObjects(CommonJoinOperator.java:639)
    [junit] at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:681)
    [junit] at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:685)
    [junit] at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:685)
    [junit] at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:845)
    [junit] at org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:264)
    [junit] at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    [junit] at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
    [junit] at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:78)
    [junit] at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    [junit] at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
    [junit] at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:527)
    [junit] ... 5 more
    [junit]
    [junit] 11/10/25 10:03:57 INFO exec.MapOperator: 12 finished. closing...
    [junit] 11/10/25 10:03:57 INFO exec.MapOperator: 12 forwarded 1 rows
    [junit] 11/10/25 10:03:57 INFO exec.MapOperator: DESERIALIZE_ERRORS:0
    [junit] 11/10/25 10:03:57 INFO exec.TableScanOperator: 0 finished. closing...
    [junit] 11/10/25 10:03:57 INFO exec.TableScanOperator: 0 forwarded 1 rows
    [junit] 11/10/25 10:03:57 INFO exec.TableScanOperator: 0 Close done
    [junit] 11/10/25 10:03:57 INFO exec.MapOperator: 12 Close done
    [junit] 11/10/25 10:03:57 INFO exec.HashTableDummyOperator: 4 finished. closing...
    [junit] 11/10/25 10:03:57 INFO exec.MapOperator: 12 Close done
    [junit] 11/10/25 10:03:57 INFO