Just to follow up on this (in case someone searches and hits this post):
it is solved now.  Working through the Hadoop DFS examples showed the
problem was on the Hadoop side.  I DID need to fix /etc/hosts, AND I
needed to move some settings from hdfs-site.xml to core-site.xml.  Once
the Hadoop example worked, the HBase examples started working too.
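
For anyone who lands here later, a core-site.xml along these lines is
roughly what ended up working (assuming, as the configs quoted below
suggest, that fs.default.name and hadoop.tmp.dir were the values to
relocate; point it at your own namenode):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://206.88.43.8:54310</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/data/hadoop</value>
  </property>
</configuration>

My understanding is that a plain client Configuration only loads
core-site.xml by default, so with fs.default.name sitting in
hdfs-site.xml the job client quietly fell back to file:/// (the
file:/tmp/... error further down).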

Thanks,
Dean

-----Original Message-----
From: Buttler, David [mailto:buttl...@llnl.gov] 
Sent: Wednesday, December 08, 2010 2:50 PM
To: user@hbase.apache.org
Subject: RE: hbase/hadoop config(do I need to change /etc/hosts???)-fixed now but using local file system..grrrr

Are you sure you have the distributed option set to true?
Dave


-----Original Message-----
From: Hiller, Dean (Contractor) [mailto:dean.hil...@broadridge.com] 
Sent: Wednesday, December 08, 2010 1:15 PM
To: user@hbase.apache.org
Subject: RE: hbase/hadoop config(do I need to change /etc/hosts???)-fixed now but using local file system..grrrr

So the /etc/hosts changes fixed it: there is no 127.0.0.1 anywhere in
the logs anymore.  (Odd that I didn't see that in the docs...I must have
missed it...too bad they don't use NetworkInterface.getNetworkInterfaces()
instead of InetAddress.getLocalHost()...the real IP address can be found
that way, independent of /etc/hosts.)
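
A sketch of what I mean (illustration only, not anything HBase actually
does): walk the NICs and print their non-loopback addresses, which works
no matter what /etc/hosts maps the local hostname to.

import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Enumeration;

public class FindRealIp {
    public static void main(String[] args) throws Exception {
        // Walk every NIC and print its non-loopback IPv4 addresses,
        // independent of the /etc/hosts entry for the local hostname.
        Enumeration<NetworkInterface> ifaces =
                NetworkInterface.getNetworkInterfaces();
        while (ifaces.hasMoreElements()) {
            NetworkInterface iface = ifaces.nextElement();
            Enumeration<InetAddress> addrs = iface.getInetAddresses();
            while (addrs.hasMoreElements()) {
                InetAddress addr = addrs.nextElement();
                if (addr instanceof Inet4Address && !addr.isLoopbackAddress()) {
                    System.out.println(iface.getName() + " -> "
                            + addr.getHostAddress());
                }
            }
        }
        // For comparison, the call that returns whatever the local
        // hostname resolves to (often 127.0.0.1 via /etc/hosts).
        System.out.println("getLocalHost() -> "
                + InetAddress.getLocalHost().getHostAddress());
    }
}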

 

Now, my client or HBase (not sure which) is still using the local file
system :( .  What config am I missing to make this work?  (I am running
the PerformanceEvaluation example with just 10 rows to see if it works.)

 

10/12/08 07:03:53 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/206.88.43.168:49811 remote=/206.88.43.164:2181]
10/12/08 07:03:53 INFO zookeeper.ClientCnxn: Server connection successful
10/12/08 07:03:53 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/12/08 07:03:53 INFO input.FileInputFormat: Total input paths to process : 1
10/12/08 07:03:53 INFO hbase.PerformanceEvaluation: Total # of splits: 20
org.apache.hadoop.ipc.RemoteException: java.io.FileNotFoundException: File file:/tmp/hadoop-root/mapred/system/job_201012080654_0002/job.xml does not exist.
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:192)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142)
        at org.apache.hadoop.fs.LocalFileSystem.copyToLocalFile(LocalFileSystem.java:61)
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197)
        at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:257)
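
For what it is worth, a quick way to see which filesystem the client
config actually resolves is a throwaway check like this (it just reads
whatever core-site.xml/hdfs-site.xml are on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class WhichFs {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml (and friends) from the conf dir on
        // the classpath.
        Configuration conf = new Configuration();
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        // If this prints file:/// the job client stages job.xml on the
        // local filesystem, which matches the FileNotFoundException above.
        System.out.println("resolved FS     = " + FileSystem.get(conf).getUri());
    }
}

Running "hadoop fs -ls /" from the same node with the same conf dir is
another quick check that the client really reaches the namenode and not
the local filesystem.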

 

From: Hiller, Dean (Contractor) 
Sent: Wednesday, December 08, 2010 1:34 PM
To: 'user@hbase.apache.org'
Subject: hbase/hadoop config(do I need to change /etc/hosts???)-still getting 127.0.0.1 in logs

 

I am trying to run a system with no DNS (they don't have one in this
lab :( ).  Hopefully HBase/Hadoop should still work fine, correct?

 

I am having a lot of trouble tracking down why 127.0.0.1 is being used
and why this shows up in hbase master logs...

 

2010-12-08 04:38:26,354 DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Wrote master address 127.0.0.1:60000 to ZooKeeper

 

And

 

2010-12-08 04:38:29,899 INFO org.apache.hadoop.hbase.master.ServerManager: Received start message from: localhost.localdomain,60020,1291808309865
2010-12-08 04:38:29,910 DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Updated ZNode /hbase/rs/1291808309865 with data 127.0.0.1:60020

 

I am wondering if I need to tweak my /etc/hosts file.  I am not sure
which Java call HBase is making there, though.

 

My /etc/hosts file has the following (and "localhost.localdomain"
MATCHES the log lines above exactly!!!!):

127.0.0.1               localhost.localdomain localhost

::1             localhost6.localdomain6 localhost6
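
If /etc/hosts does need tweaking (and per the follow-up at the top of
this thread, it did), the usual shape of the fix is an extra entry
mapping the box's real IP to its hostname, so the hostname no longer
resolves to 127.0.0.1.  The hostname below is a placeholder, e.g.:

127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
206.88.43.8     node1.lab node1

with the machine's hostname set to that real name rather than
localhost.localdomain, so HBase registers with the 206.88.43.x address
instead of 127.0.0.1.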

 

My conf files are pretty simple.  (Currently I share all config files
through a mount, so every node mounts the one config location; that
would be bad in production, but it makes changing the files easy.)
What is wrong with my config?  (I also share this same config with the
client I run, which keeps looking up 127.0.0.1, just like the master.)
Am I missing some kind of property override that comes from the
defaults?  (I do see 0.0.0.0 in the defaults, but I tend to think HBase
is doing a hostname lookup and I need to fix my /etc/hosts file.)

 

Hdfs-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/data/hadoop</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://206.88.43.8:54310</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>8</value>
  </property>
</configuration>

 

Mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://206.88.43.4:54311</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
</configuration>

 

Hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://206.88.43.8:54310/hbase</value>
    <description>The directory shared by region servers.
      Should be fully-qualified to include the filesystem to use.
      E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>
  </property>

  <property>
    <name>hbase.master</name>
    <value>206.88.43.8:60000</value>
    <description>The host and port that the HBase master runs at.
    </description>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>206.88.43.8,206.88.43.4</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
      By default this is set to localhost for local and pseudo-distributed
      modes of operation. For a fully-distributed setup, this should be set
      to a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set
      in hbase-env.sh this is the list of servers which we will start/stop
      ZooKeeper on.
    </description>
  </property>

  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/data/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg.
      The directory where the snapshot is stored.
    </description>
  </property>
</configuration>

 



