Hello,

I am trying to run a Hadoop job locally through an Eclipse run configuration, 
against data that lives on the cluster.

I am running into the following error:

[2013-06-03 10:21:49,031] [WARN] [main] org.apache.hadoop.hbase.client.HTable - This constructor HTable(byte[]) is deprecated and it will be removed. Please use HTable(Configuration, byte[]) to construct an HTable
[2013-06-03 10:21:49,037] [INFO] [main] org.apache.hadoop.hbase.mapreduce.HFileOutputFormat - Looking up current regions for table org.apache.hadoop.hbase.client.HTable@38178991
[2013-06-03 10:21:49,069] [INFO] [main] org.apache.hadoop.hbase.mapreduce.HFileOutputFormat - Configuring 30 reduce partitions to match current region count
[2013-06-03 10:21:49,070] [INFO] [main] org.apache.hadoop.hbase.mapreduce.HFileOutputFormat - Writing partition information to hdfs://retur/user/a-user/partitions_123248-1412323
[2013-06-03 10:21:49,106] [INFO] [main] org.apache.hadoop.io.compress.CodecPool - Got brand-new compressor [.deflate]
[2013-06-03 10:21:49,455] [INFO] [main] org.apache.hadoop.hbase.mapreduce.HFileOutputFormat - Incremental table output configured.
[2013-06-03 10:21:49,626] [ERROR] [Thread-25] org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:a-user (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
[2013-06-03 10:21:49,627] [INFO] [Thread-25] org.apache.crunch.impl.mr.exec.CrunchJob - java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1188)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1184)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:1183)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1212)
    at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.submit(CrunchControlledJob.java:331)
    at org.apache.crunch.impl.mr.exec.CrunchJob.submit(CrunchJob.java:142)
    at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.startReadyJobs(CrunchJobControl.java:251)
    at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.run(CrunchJobControl.java:279)
    at java.lang.Thread.run(Thread.java:680)


I did try adding core-site.xml, hdfs-site.xml, and mapred-site.xml to my local 
classpath, but that does not seem to work.
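
Should explicitly adding those files to the Configuration object work as well? 
I mean something along these lines (a minimal sketch; the paths are 
placeholders for wherever the cluster's client configs live on my machine):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Placeholder paths -- wherever the cluster's client configs were copied locally.
    conf.addResource(new Path("/path/to/cluster-conf/core-site.xml"));
    conf.addResource(new Path("/path/to/cluster-conf/hdfs-site.xml"));
    conf.addResource(new Path("/path/to/cluster-conf/mapred-site.xml"));
    // If the cluster configs were actually picked up, this should print the
    // cluster's value rather than null or "local".
    System.out.println(conf.get("mapreduce.framework.name"));
  }
}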

Note: the jar works perfectly fine when uploaded and run on the cluster; it 
only fails when run locally.

As part of the job, we write the HBase Puts out as HFiles at a particular 
location and then read them back and load them into the HBase table.
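
For context, that part of the job is roughly along these lines (a simplified 
sketch, not our actual code; the table name and output path are made up, and I 
am assuming the usual HFileOutputFormat / LoadIncrementalHFiles route, which 
matches the log output above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.mapreduce.Job;

public class BulkLoadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "my_table"); // placeholder table name

    Job job = new Job(conf, "hfile-writer");
    // Sets up the TotalOrderPartitioner with one reduce partition per region --
    // this is what produces the "Configuring 30 reduce partitions" and
    // "Writing partition information" lines in the log above.
    HFileOutputFormat.configureIncrementalLoad(job, table);
    // ... mapper/reducer setup and job.waitForCompletion(true) elided ...

    // Once the HFiles are written, move them into the table's regions.
    Path hfileDir = new Path("/user/a-user/hfile-output"); // placeholder path
    new LoadIncrementalHFiles(conf).doBulkLoad(hfileDir, table);
  }
}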

Can you point out what I am missing here? I think it is something to do with 
the Hadoop classpath, but I am not able to figure it out.
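
Is there a quick way to verify that from inside the Eclipse run configuration? 
I was thinking of something like this (sketch):

import org.apache.hadoop.conf.Configuration;

public class ClasspathCheck {
  public static void main(String[] args) {
    // Which mapred-site.xml (if any) does the Eclipse classpath resolve?
    java.net.URL url = Thread.currentThread().getContextClassLoader()
        .getResource("mapred-site.xml");
    System.out.println("mapred-site.xml resolved to: " + url);
    // Configuration.toString() lists every resource it loaded, in order.
    System.out.println(new Configuration());
  }
}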

Thank you in advance.

Rachit Soni

