Re: problem regarding hadoop

2010-01-15 Thread Jean-Daniel Cryans
There seems to be a mismatch between the HBase versions you are using.
In particular, there is a known bug when mixing HBase 0.20.0 with
0.20.1 or 0.20.2. The best fix is to just upgrade everything to 0.20.2.
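
A quick way to see which HBase jar your client code is actually picking
up is to print its version (a minimal sketch; the class name below is
just for illustration). Compare it against the version the master prints
at the top of its log when it starts:

    // Prints the version of the HBase jar on the client classpath.
    import org.apache.hadoop.hbase.util.VersionInfo;

    public class PrintClientVersion {
      public static void main(String[] args) {
        System.out.println("HBase client version: " + VersionInfo.getVersion());
      }
    }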

J-D

On Thu, Jan 14, 2010 at 12:11 AM, Muhammad Mudassar wrote:
> Basically I am trying to create a table in HBase with *HBaseAdmin* from a
> Java program, but I am running into trouble: the table is created, but
> nothing is stored in it. When I use *BatchUpdate.put* to insert
> anything, the exception shown in the IDE is:
>
> Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
>        at $Proxy1.getRegionInfo(Unknown Source)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:795)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:465)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:515)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:474)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:515)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:478)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
>        at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:159)
>
> Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.regionserver.HRegionServer.getRow([B)
>        at java.lang.Class.getMethod(Class.java:1605)
>        at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:627)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:912)
>
>        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:701)
>        at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:321)
>        ... 14 more
> Java Result: 1
>
> When I checked the logs of the HBase master ...
>
> On Wed, Jan 13, 2010 at 10:37 PM, Jean-Daniel Cryans wrote:
>
>> This is probably a question better for common-user rather than hbase.
>>
>> But to answer your problem: your JobTracker is able to talk to your
>> Namenode, but there's something wrong with the Datanode. You should
>> grep its log for any exceptions.
>>
>> J-D
>>
>> On Wed, Jan 13, 2010 at 3:11 AM, Muhammad Mudassar wrote:
>> > Hi, I am running Hadoop 0.20.1 on a single node and I am getting a
>> > problem. My hdfs-site configurations are:
>> >
>> > <configuration>
>> >   <property>
>> >     <name>dfs.replication</name>
>> >     <value>1</value>
>> >   </property>
>> >   <property>
>> >     <name>hadoop.tmp.dir</name>
>> >     <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
>> >     <description>A base for other temporary directories.</description>
>> >   </property>
>> > </configuration>
>> >
>> > and my core-site configurations are:
>> >
>> > <configuration>
>> >   <property>
>> >     <name>fs.default.name</name>
>> >     <value>hdfs://localhost:54310</value>
>> >   </property>
>> >   <property>
>> >     <name>hadoop.tmp.dir</name>
>> >     <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
>> >     <description>A base for other temporary directories.</description>
>> >   </property>
>> > </configuration>
>> >
>> > The problem shows up in the JobTracker log file, which says:
>> >
>> > 2010-01-13 16:00:33,015 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>> > 2010-01-13 16:00:33,043 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=54311
>> > 2010-01-13 16:00:38,309 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>> > 2010-01-13 16:00:38,407 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
>> > 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
>> > 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
>> > 2010-01-13 16:00:38,408 INFO org.mortbay.log: jetty-6.1.14
>> > 2010-01-13 16:00:51,429 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
>> > 2010-01-13 16:00:51,430 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
>> > 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 54311
>> > 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.J

Re: problem regarding hadoop

2010-01-14 Thread Muhammad Mudassar
Basically I am trying to create a table in HBase with *HBaseAdmin* from a
Java program, but I am running into trouble: the table is created, but
nothing is stored in it. When I use *BatchUpdate.put* to insert anything,
the exception shown in the IDE is below (a sketch of my code follows the
trace):

Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
        at $Proxy1.getRegionInfo(Unknown Source)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:795)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:465)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:515)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:474)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:515)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:478)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
        at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:159)

Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.regionserver.HRegionServer.getRow([B)
        at java.lang.Class.getMethod(Class.java:1605)
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:627)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:912)

        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:701)
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:321)
        ... 14 more
Java Result: 1

When I checked the logs of the HBase master ...
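
For reference, the relevant part of my code looks roughly like this (a
minimal sketch against the 0.20 client API; the table, family, row, and
value names are placeholders rather than my exact code):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.io.BatchUpdate;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateAndPut {
      public static void main(String[] args) throws Exception {
        HBaseConfiguration conf = new HBaseConfiguration();

        // Create a table with a single column family.
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor("mytable");
        desc.addFamily(new HColumnDescriptor("cf"));
        admin.createTable(desc);

        // Insert one cell; BatchUpdate columns are "family:qualifier".
        HTable table = new HTable(conf, "mytable");
        BatchUpdate update = new BatchUpdate("row1");
        update.put("cf:col", Bytes.toBytes("value"));
        table.commit(update);
      }
    }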

On Wed, Jan 13, 2010 at 10:37 PM, Jean-Daniel Cryans wrote:

> This is probably a question better for common-user rather than hbase.
>
> But to answer your problem: your JobTracker is able to talk to your
> Namenode, but there's something wrong with the Datanode. You should
> grep its log for any exceptions.
>
> J-D
>
> On Wed, Jan 13, 2010 at 3:11 AM, Muhammad Mudassar wrote:
> > Hi, I am running Hadoop 0.20.1 on a single node and I am getting a
> > problem. My hdfs-site configurations are:
> >
> > <configuration>
> >   <property>
> >     <name>dfs.replication</name>
> >     <value>1</value>
> >   </property>
> >   <property>
> >     <name>hadoop.tmp.dir</name>
> >     <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
> >     <description>A base for other temporary directories.</description>
> >   </property>
> > </configuration>
> >
> > and my core-site configurations are:
> >
> > <configuration>
> >   <property>
> >     <name>fs.default.name</name>
> >     <value>hdfs://localhost:54310</value>
> >   </property>
> >   <property>
> >     <name>hadoop.tmp.dir</name>
> >     <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
> >     <description>A base for other temporary directories.</description>
> >   </property>
> > </configuration>
> >
> > The problem shows up in the JobTracker log file, which says:
> >
> > 2010-01-13 16:00:33,015 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> > 2010-01-13 16:00:33,043 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=54311
> > 2010-01-13 16:00:38,309 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> > 2010-01-13 16:00:38,407 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
> > 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
> > 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
> > 2010-01-13 16:00:38,408 INFO org.mortbay.log: jetty-6.1.14
> > 2010-01-13 16:00:51,429 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
> > 2010-01-13 16:00:51,430 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
> > 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 54311
> > 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
> > 2010-01-13 16:00:51,574 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
> > 2010-01-13 16:00:51,643 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteExc

Re: problem regarding hadoop

2010-01-13 Thread Jean-Daniel Cryans
This is probably a question better for common-user rather than hbase.

But to answer your problem: your JobTracker is able to talk to your
Namenode, but there's something wrong with the Datanode. You should
grep its log for any exceptions.
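
To confirm that the Namenode currently sees no live Datanode, "hadoop
dfsadmin -report" will show you the node count. A rough programmatic
equivalent, sketched against the 0.20 API (the class name is just for
illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class CountDatanodes {
      public static void main(String[] args) throws Exception {
        // Picks up fs.default.name from core-site.xml on the classpath.
        Configuration conf = new Configuration();
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

        // Datanodes currently registered with the namenode; zero here
        // matches the "could only be replicated to 0 nodes" error below.
        DatanodeInfo[] nodes = dfs.getDataNodeStats();
        System.out.println("datanodes reporting: " + nodes.length);
      }
    }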

J-D

On Wed, Jan 13, 2010 at 3:11 AM, Muhammad Mudassar wrote:
> Hi, I am running Hadoop 0.20.1 on a single node and I am getting a problem.
> My hdfs-site configurations are:
>
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
>     <description>A base for other temporary directories.</description>
>   </property>
> </configuration>
>
> and my core-site configurations are:
>
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:54310</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
>     <description>A base for other temporary directories.</description>
>   </property>
> </configuration>
>
> The problem shows up in the JobTracker log file, which says:
>
> 2010-01-13 16:00:33,015 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> 2010-01-13 16:00:33,043 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=54311
> 2010-01-13 16:00:38,309 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2010-01-13 16:00:38,407 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
> 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
> 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
> 2010-01-13 16:00:38,408 INFO org.mortbay.log: jetty-6.1.14
> 2010-01-13 16:00:51,429 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
> 2010-01-13 16:00:51,430 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
> 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 54311
> 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
> 2010-01-13 16:00:51,574 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
> 2010-01-13 16:00:51,643 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
> 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>    at org.apache.hadoop.ipc.Client.call(Client.java:739)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>    at $Proxy4.addBlock(Unknown Source)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy4.addBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>
> 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Could not ge