Not able to Disable table : ERROR: org.apache.hadoop.hbase.RegionException: Retries exhausted, it took too long to wait for the table testtable2 to be disabled.

2012-04-16 Thread Narayanan K
Hi,

I am not able to disable an HBase table. Checked out the link:
https://issues.apache.org/jira/browse/HBASE-2812

But is this resolved in 0.90.3? If so, why does the error keep occurring
after repeated attempts?

The details of our HBase installation are as follows:

$ hbase
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.3, r1125027, Wed Dec  7 16:46:36 PST 2011

hbase(main):001:0> status
366 servers, 15 dead, 127.5164 average load

hbase(main):003:0> describe 'testtable2'
DESCRIPTION                                                             ENABLED
 {NAME => 'testtable2', FAMILIES => [{NAME => 'colfam1', BLOOMFILTER =>   true
 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3',
 TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
 BLOCKCACHE => 'true'}]}
1 row(s) in 2.1750 seconds

hbase(main):010:0> disable 'testtable2'

ERROR: org.apache.hadoop.hbase.RegionException: Retries exhausted, it
took too long to wait for the table testtable2 to be disabled.

Here is some help for this command:
Start disable of named table: e.g. hbase> disable 't1'

This table's data occupies a single region server, which hosts 133 regions.
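
For reference, a minimal sketch of driving the disable from the 0.90.x Java client instead of the shell, so the wait/poll loop is under your own control rather than the shell's fixed retry budget. The table name and default client configuration here are assumptions for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.RegionException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class DisableWithPolling {
  public static void main(String[] args) throws Exception {
    // Assumes hbase-site.xml is on the classpath so the client can find ZooKeeper.
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    try {
      // In 0.90.x this call blocks while the master closes regions, and may throw
      // RegionException ("Retries exhausted...") if its built-in wait runs out first.
      admin.disableTable("testtable2");
    } catch (RegionException e) {
      System.out.println("Built-in wait timed out, polling instead: " + e.getMessage());
    }

    // Keep polling on our own schedule; the disable typically keeps progressing
    // on the master side even after the client-side wait has given up.
    while (!admin.isTableDisabled("testtable2")) {
      Thread.sleep(5000);
    }
    System.out.println("testtable2 is now disabled");
  }
}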

Any help is much appreciated.

Regards,
Narayanan


Re: Is it possible to install two different Hbase versions in the same Cluster?

2012-04-16 Thread Harsh J
Yes, it should be fine to do if you make the appropriate configuration
changes: mainly the ZK root data directory (if you're sharing the
ZK quorum too), the HDFS base root dir, and all the service ports.
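
As a rough sketch of the overrides being described here, these are the corresponding Configuration keys set from Java; the values are made-up examples (in practice they would go into each instance's hbase-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SecondInstanceConf {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();

    // Separate ZK chroot so the two HBase instances don't see each other's
    // state (only needed if both share the same ZooKeeper quorum).
    conf.set("zookeeper.znode.parent", "/hbase-second");

    // Separate HDFS root directory for the second instance's data.
    conf.set("hbase.rootdir", "hdfs://namenode:9000/hbase-second");

    // Non-default RPC and web UI ports so the daemons can coexist on the same hosts.
    conf.setInt("hbase.master.port", 61000);
    conf.setInt("hbase.master.info.port", 61010);
    conf.setInt("hbase.regionserver.port", 61020);
    conf.setInt("hbase.regionserver.info.port", 61030);

    return conf;
  }
}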

On Mon, Apr 16, 2012 at 5:01 PM, yonghu yongyong...@gmail.com wrote:
 Hello,

 I wonder if it's possible to install two different Hbase versions in
 the same cluster?

 Thanks

 Yong



-- 
Harsh J


HBase 0.92 with Hadoop 0.22

2012-04-16 Thread Konrad Tendera
I'm wondering if there is any possibility to run HBase 0.92 on top of Hadoop 
0.22? I can't find necessary jars such as hadoop-core...

-- 
Konrad Tendera


Re: HBase 0.92 with Hadoop 0.22

2012-04-16 Thread yonghu
Yes. You can compile the Hadoop jar yourself and put it into the
HBase lib folder.

Regards!

Yong

On Mon, Apr 16, 2012 at 2:09 PM, Harsh J ha...@cloudera.com wrote:
 While I haven't tried this personally, it should be alright to do. You
 need to replace HBase's default hadoop jars (which are 1.0.x/0.20
 versioned) with those (of common and hdfs) from your 0.22
 installation.

 Apache Bigtop too has a branch for hadoop-0.22 that helps you build a
 whole 0.22-based, tested and packaged stack for yourself:
 https://svn.apache.org/repos/asf/incubator/bigtop/branches/hadoop-0.22/

 On Mon, Apr 16, 2012 at 5:30 PM, Konrad Tendera ema...@tendera.eu wrote:
 I'm wondering if there is any possibility to run HBase 0.92 on top of Hadoop 
 0.22? I can't find necessary jars such as hadoop-core...

 --
 Konrad Tendera



 --
 Harsh J


Re: Is it possible to install two different Hbase versions in the same Cluster?

2012-04-16 Thread Michel Segel
Well, you could; however, you run a greater risk of things breaking because you
forgot to change a setting in a configuration file. You would have to change
port listeners, the location of config files, all sorts of things that you wouldn't
have to change if you just segmented the nodes, with a different ZK quorum and HMaster per instance.

The simple answer is: sure, why not. But the longer answer is that you need
to think more about what you want to do, why you want to do it, and what
the least invasive way of doing it is.

Sent from a remote device. Please excuse any typos...

Mike Segel

On Apr 16, 2012, at 6:39 AM, yonghu yongyong...@gmail.com wrote:

 Mike,
 
 Can you explain why I can't put the RS on the same node?
 
 Thanks!
 
 Yong
 
 On Mon, Apr 16, 2012 at 1:33 PM, Michel Segel michael_se...@hotmail.com 
 wrote:
 Sure, just make sure you don't cross the configurations and don't put the RS 
 on the same nodes.
 
 
 Sent from a remote device. Please excuse any typos...
 
 Mike Segel
 
 On Apr 16, 2012, at 6:31 AM, yonghu yongyong...@gmail.com wrote:
 
 Hello,
 
 I wonder if it's possible to install two different Hbase versions in
 the same cluster?
 
 Thanks
 
 Yong
 
 


Re: hbase coprocessor unit testing

2012-04-16 Thread Alex Baranau
Here's some code that worked for me [1]. You may also find it useful to look
at the pom's dependencies [2].

Alex Baranau
--
Sematext :: http://blog.sematext.com/ :: Solr - Lucene - Hadoop - HBase

[1]

From
https://github.com/sematext/HBaseHUT/blob/CPs/src/test/java/com/sematext/hbase/hut/cp/TestHBaseHutCps.java:

  private HBaseTestingUtility testingUtility = new HBaseTestingUtility();
  private HTable hTable;

  @Before
  public void before() throws Exception {
    testingUtility.getConfiguration().setStrings(
        CoprocessorHost.USER_REGION_COPROCESSOR_CONF_KEY,
        HutReadEndpoint.class.getName());
    testingUtility.startMiniCluster();
    hTable = testingUtility.createTable(Bytes.toBytes(TABLE_NAME), SALE_CF);
  }

  @After
  public void after() throws Exception {
    hTable = null;
    testingUtility.shutdownMiniCluster();
    testingUtility = null;
  }

  [... unit-tests that make use of deployed CP ...]
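
As a hedged illustration of the kind of test that slots in here, a minimal sketch that only exercises the mini-cluster through the created table (TABLE_NAME and SALE_CF are the constants from the fragment above; JUnit 4 and org.apache.hadoop.hbase.client imports are assumed; the real HBaseHUT tests drive the HutReadEndpoint coprocessor instead):

  @Test
  public void testPutAndGetOnMiniCluster() throws Exception {
    byte[] row = Bytes.toBytes("row1");
    byte[] qualifier = Bytes.toBytes("q");
    byte[] value = Bytes.toBytes("v1");

    // Write one cell into the column family created in before().
    Put put = new Put(row);
    put.add(SALE_CF, qualifier, value);
    hTable.put(put);

    // Read it back and check the value round-trips through the mini-cluster.
    Result result = hTable.get(new Get(row));
    Assert.assertArrayEquals(value, result.getValue(SALE_CF, qualifier));
  }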

[2]

Full version: https://github.com/sematext/HBaseHUT/blob/CPs/pom.xml

<hadoop.version>1.0.0</hadoop.version>
<hbase.version>0.92.1</hbase.version>

[...]

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>${hadoop.version}</version>
  <scope>provided</scope>
  <exclusions>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-core-asl</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>${hbase.version}</version>
  <scope>provided</scope>
</dependency>

<!-- Tests dependencies -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-test</artifactId>
  <version>${hadoop.version}</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>${hbase.version}</version>
  <classifier>tests</classifier>
  <scope>test</scope>
</dependency>

On Mon, Apr 16, 2012 at 9:10 AM, Marcin Cylke mcl.hb...@touk.pl wrote:

 Hi

 I'm trying to write a unit test for an HBase coprocessor. However, it seems
 I'm doing something horribly wrong. The code I'm using to test my
 coprocessor class is in the attachment.

 As you can see, I'm using HBaseTestingUtility, and running a
 mini-cluster with it. The error I keep getting is:

 2012-04-12 13:00:39,924 [6,1334228432020] WARN  RecoverableZooKeeper
  :117 - Node /hbase/root-region-server already deleted, and this is
 not a retry
 2012-04-12 13:00:39,995 [6,1334228432020] INFO  HBaseRPC
  :240 - Server at localhost/127.0.0.1:45664 could not be reached
 after 1 tries, giving up.
 2012-04-12 13:00:39,995 [6,1334228432020] WARN  AssignmentManager
  :1493 - Failed assignment of -ROOT-,,0.70236052 to
 localhost,45664,1334228432229, trying to assign elsewhere instead; retry=0
 org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting
 up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to
 localhost/127.0.0.1:45664 after attempts=1
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:242)
at

 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1278)
at

 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1235)
at

 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1222)
at

 org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:496)
at

 org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:429)
at

 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1453)
at

 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1200)
at

 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1175)
at

 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1170)
at

 org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:1918)
at
 org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:557)
at

 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:491)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
at java.lang.Thread.run(Thread.java:662)
 Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
 sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at

 org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:656)
  

regions stuck in transition

2012-04-16 Thread Bryan Beaudreault
Hello,

We've recently had a problem where regions will get stuck in transition for
a long period of time.  In fact, they don't ever appear to get
out-of-transition unless we take manual action.  Last time this happened I
restarted the master and they were cleared out.  This time I wanted to
consult the list first.

I checked the admin ui for all 24 of our servers, and the region does not
appear to be hosted anywhere.  If I look in hdfs, I do see the region there
and it has 2 files.  The first instance of this region in my HMaster logs
is:

12/04/15 17:48:06 INFO master.HMaster: balance
 hri=visitor-activities-a2,\x00\x02EG120909,1333750824238.703fed4411f2d6ff4b3ea80506fb635e.,
 src=X.ec2.internal,60020,1334064456919,
 dest=.ec2.internal,60020,1334064197946
 12/04/15 17:48:06 INFO master.AssignmentManager: Server
 serverName=.ec2.internal,60020,1334064456919, load=(requests=0,
 regions=0, usedHeap=0, maxHeap=0) returned
 org.apache.hadoop.hbase.NotServingRegionException:
 org.apache.hadoop.hbase.NotServingRegionException: Received close for
 visitor-activities-a2,\x00\x02EG120909,1333750824238.703fed4411f2d6ff4b3ea80506fb635e.
 but we are not serving it for 703fed4411f2d6ff4b3ea80506fb635e


It then keeps logging the same few messages every ~30 minutes:

12/04/15 18:18:18 INFO master.AssignmentManager: Regions in transition
 timed out:
  
 visitor-activities-a2,\x00\x02EG120909,1333750824238.703fed4411f2d6ff4b3ea80506fb635e.
 state=PENDING_CLOSE, ts=1334526491544, server=null
 12/04/15 18:18:18 INFO master.AssignmentManager: Region has been
 PENDING_CLOSE for too long, running forced unassign again on
 region=visitor-activities-a2,\x00\x02EG120909,1333750824238.703fed4411f2d6ff4b3ea80506fb635e.
 12/04/15 18:18:18 INFO master.AssignmentManager: Server
 serverName=X.ec2.internal,60020,1334064456919, load=(requests=0,
 regions=0, usedHeap=0, maxHeap=0) returned
 org.apache.hadoop.hbase.NotServingRegionException:
 org.apache.hadoop.hbase.NotServingRegionException: Received close for
 visitor-activities-a2,\x00\x02EG120909,1333750824238.703fed4411f2d6ff4b3ea80506fb635e.
 but we are not serving it for 703fed4411f2d6ff4b3ea80506fb635e


Any ideas how I can avoid this, or a better solution than restarting the
HMaster?

Thanks,

Bryan


Re: Help: ROOT and META!!

2012-04-16 Thread Jonathan Hsieh
Arber,

Good to hear! Just to confirm, is the bug/patch the same as HBASE-5488?

Jon.

On Sun, Apr 15, 2012 at 4:36 AM, Yabo Xu arber.resea...@gmail.com wrote:

 Hi Jon:

 Please ignore my last email. We found it was a bug, fixed it with a patch and
 rebuilt, and it works now. Data are back! Thanks.

 Best,
 Arber



 On Sun, Apr 15, 2012 at 12:47 PM, Yabo Xu arber.resea...@gmail.com
 wrote:

  Dear Jon:
 
  We just ran OfflineMetaRepair, but got the following exceptions. Checked
  online... it seems this is a bug. Any suggestions on how to check out the
  most up-to-date version of OfflineMetaRepair to work with our version of
  HBase? Thanks in advance.
 
  12/04/15 12:28:35 INFO util.HBaseFsck: Loading HBase regioninfo from
  HDFS...
  12/04/15 12:28:39 ERROR util.HBaseFsck: Bailed out due to:
  java.lang.IllegalArgumentException: Wrong FS: hdfs://
 
 n4.example.com:12345/hbase/summba.yeezhao.content/03cde9116662fade27545d86ea71a372/.regioninfo
 ,
  expected: file:///
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
  at
 
 org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
   at
 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
  at
 
 org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
   at
 
  org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
  at
  org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
  at
 org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256)
   at
  org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284)
  at org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402)
   at
 
 org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRepair.java:90)
 
  We checked on HDFS, and the files shown in the exception are available. Any
  pointers?
 
  Best,
  Arber
 
 
  On Sun, Apr 15, 2012 at 11:48 AM, Yabo Xu arber.resea...@gmail.com
 wrote:
 
  Thanks, St.Ack and Jon. To answer St.Ack's question, we are using HBase
  0.90.6, and the data corruption happened when some data nodes were lost due
  to a power issue. We've tried hbck and it reports that ROOT is not found,
  and fsck reports that two blocks of ROOT and META are in CORRUPT status.
 
  Jon: We just checked OfflineMetaRepair; it seems to be the right tool,
  and we are trying it now. Just want to confirm: is it compatible with
  0.90.6?
 
  Best,
  Arber
 
 
  On Sun, Apr 15, 2012 at 8:55 AM, Jonathan Hsieh j...@cloudera.com
 wrote:
 
  There are two tools that may be able to help you (unfortunately, I haven't
  written the user documentation for either yet).
 
  One is called OfflineMetaRepair. This assumes that HBase is offline; it reads
  the data in HDFS to create a new ROOT and a new META. If your data is in
  good shape, this should work for you. Depending on which version of Hadoop
  you are using, you may need to apply HBASE-5488.
 
  On the latest branches of HBase (0.90/0.92/0.94/trunk) the hbck tool has
  been greatly enhanced and may be able to help out as well once an initial
  META table is built and your HBase is able to get online. This currently
  requires the patch HBASE-5781 to be applied to be useful.
 
  Jon.
 
 
  On Sat, Apr 14, 2012 at 1:35 PM, Yabo Xu arber.resea...@gmail.com
  wrote:
 
   Hi all:
  
   Just had a desperate night... We had a small production HBase cluster
   (8 nodes), and due to the accidental crash of a few nodes, ROOT and META
   are corrupted, while the rest of the tables are mostly there. Is there any
   way to restore ROOT and META?
  
   Any hints would be appreciated very much! Waiting online...
  
   Best,
   Arber
  
 
 
 
  --
  // Jonathan Hsieh (shay)
  // Software Engineer, Cloudera
  // j...@cloudera.com
 
 
 
 




-- 
// Jonathan Hsieh (shay)
// Software Engineer, Cloudera
// j...@cloudera.com


Re: Help: ROOT and META!!

2012-04-16 Thread Yabo Xu
Yes, it is. Thanks.

Best,
Arber



On Tue, Apr 17, 2012 at 12:05 AM, Jonathan Hsieh j...@cloudera.com wrote:

 Arber,

  Good to hear! Just to confirm, is the bug/patch the same as HBASE-5488?

 Jon.

 On Sun, Apr 15, 2012 at 4:36 AM, Yabo Xu arber.resea...@gmail.com wrote:

  Hi Jon:
 
   Please ignore my last email. We found it was a bug, fixed it with a patch and
   rebuilt, and it works now. Data are back! Thanks.
 
  Best,
  Arber
 
 
 
  On Sun, Apr 15, 2012 at 12:47 PM, Yabo Xu arber.resea...@gmail.com
  wrote:
 
   Dear Jon:
  
    We just ran OfflineMetaRepair, but got the following exceptions. Checked
    online... it seems this is a bug. Any suggestions on how to check out the
    most up-to-date version of OfflineMetaRepair to work with our version of
    HBase? Thanks in advance.
  
   12/04/15 12:28:35 INFO util.HBaseFsck: Loading HBase regioninfo from
   HDFS...
   12/04/15 12:28:39 ERROR util.HBaseFsck: Bailed out due to:
   java.lang.IllegalArgumentException: Wrong FS: hdfs://
  
 
 n4.example.com:12345/hbase/summba.yeezhao.content/03cde9116662fade27545d86ea71a372/.regioninfo
  ,
   expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
   at
  
 
 org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
at
  
 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
   at
  
 
 org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at
  
 
  org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
   at
  
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
   at
  org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256)
at
  
 org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284)
   at
 org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402)
at
  
 
 org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRepair.java:90)
  
    We checked on HDFS, and the files shown in the exception are available. Any
    pointers?
  
   Best,
   Arber
  
  
   On Sun, Apr 15, 2012 at 11:48 AM, Yabo Xu arber.resea...@gmail.com
  wrote:
  
    Thanks, St.Ack and Jon. To answer St.Ack's question, we are using HBase
    0.90.6, and the data corruption happened when some data nodes were lost due
    to a power issue. We've tried hbck and it reports that ROOT is not found,
    and fsck reports that two blocks of ROOT and META are in CORRUPT status.
   
    Jon: We just checked OfflineMetaRepair; it seems to be the right tool,
    and we are trying it now. Just want to confirm: is it compatible with
    0.90.6?
  
   Best,
   Arber
  
  
   On Sun, Apr 15, 2012 at 8:55 AM, Jonathan Hsieh j...@cloudera.com
  wrote:
  
    There are two tools that may be able to help you (unfortunately, I haven't
    written the user documentation for either yet).
   
    One is called OfflineMetaRepair. This assumes that HBase is offline; it reads
    the data in HDFS to create a new ROOT and a new META. If your data is in
    good shape, this should work for you. Depending on which version of Hadoop
    you are using, you may need to apply HBASE-5488.
   
    On the latest branches of HBase (0.90/0.92/0.94/trunk) the hbck tool has
    been greatly enhanced and may be able to help out as well once an initial
    META table is built and your HBase is able to get online. This currently
    requires the patch HBASE-5781 to be applied to be useful.
  
   Jon.
  
  
   On Sat, Apr 14, 2012 at 1:35 PM, Yabo Xu arber.resea...@gmail.com
   wrote:
  
Hi all:
   
    Just had a desperate night... We had a small production HBase cluster
    (8 nodes), and due to the accidental crash of a few nodes, ROOT and META
    are corrupted, while the rest of the tables are mostly there. Is there any
    way to restore ROOT and META?
   
    Any hints would be appreciated very much! Waiting online...
   
Best,
Arber
   
  
  
  
   --
   // Jonathan Hsieh (shay)
   // Software Engineer, Cloudera
   // j...@cloudera.com
  
  
  
  
 



 --
 // Jonathan Hsieh (shay)
 // Software Engineer, Cloudera
 // j...@cloudera.com



Re: Zookeeper available but no active master location found

2012-04-16 Thread Henri Pipe
Still having the same problem:

Here is the master log

2012-04-16 15:26:32,717 INFO
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
ZooKeeper available but no active master location found
2012-04-16 15:26:32,718 INFO
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
getMaster attempt 9 of 10 failed; no more retrying.
org.apache.hadoop.hbase.MasterNotRunningException
at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:564)
at org.apache.hadoop.hbase.client.HBaseAdmin.init(HBaseAdmin.java:95)
at
org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:55)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at
org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:829)
at
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

and here is what it says from zookeeper shell

[zk: localhost:2181(CONNECTED) 0] ls /hbase
[splitlog, unassigned, root-region-server, rs, table, master, shutdown]
[zk: localhost:2181(CONNECTED) 1] get /hbase/master
ip-10-251-27-130.ec2.internal:6
cZxid = 0xd0025
ctime = Mon Apr 16 15:25:18 EDT 2012
mZxid = 0xd0025
mtime = Mon Apr 16 15:25:18 EDT 2012
pZxid = 0xd0025
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x136bc97498d0003
dataLength = 35
numChildren = 0

and here is my /etc/hosts

[root@ip-10-251-27-130 bin]# cat /etc/hosts
127.0.0.1       localhost        localhost.localdomain
10.250.9.220    ip-10-250-9-220  zoo1
10.251.110.50   ip-10-251-110-50 zoo2
10.250.54.148   ip-10-250-54-148 datanode
10.251.27.130   ip-10-251-27-130 namenode ip-10-251-27-130.ec2.internal

I run zookeepers on namenode, zoo1 and zoo2

Thanks

Henri Pipe


On Fri, Apr 13, 2012 at 1:01 PM, Stack st...@duboce.net wrote:

 What do you see in the master log?
 St.Ack

 On Fri, Apr 13, 2012 at 11:00 AM, Henri Pipe henri.p...@gmail.com wrote:
  I had tried zkCli (ls /hbase and get /hbase/master), but it returns the
  correct value.
 
  [zk: localhost:2181(CONNECTED) 2] get /hbase/master
  ip-10-251-27-130:6
  cZxid = 0xa0032
  ctime = Thu Apr 12 20:03:23 EDT 2012
  mZxid = 0xa0032
  mtime = Thu Apr 12 20:03:23 EDT 2012
  pZxid = 0xa0032
 
  Also, I do have the namenode listed in my config
 
  Here is my hbase-site.xml file:
 
  <configuration>
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://namenode:9000/hbase</value>
    </property>
 
  Henri Pipe
 
 
  On Fri, Apr 13, 2012 at 1:58 AM, N Keywal nkey...@gmail.com wrote:
 
  Hi,
 
  Literally, it means that ZooKeeper is there but the hbase client can't
 find
  the hbase master address in it.
  By default, the node used is /hbase/master, and it contains the hostname
  and port of the master.
 
  You can check its content in ZK by doing a get /hbase/master in
  bin/zkCli.sh (see
 
 
 http://zookeeper.apache.org/doc/r3.4.3/zookeeperStarted.html#sc_ConnectingToZooKeeper
  ).
 
  There should be a root cause for this, so it's worth looking for other error
  messages in the logs (the master's especially).
 
  N.
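
To make that concrete, here is a hedged sketch of reading the same znode with the plain ZooKeeper Java client, which is essentially what "get /hbase/master" does in zkCli.sh; the connect string is taken from the hosts mentioned in this thread and should be adjusted to your own quorum:

import org.apache.zookeeper.ZooKeeper;

public class ReadMasterZNode {
  public static void main(String[] args) throws Exception {
    // Connect to the same quorum the HBase client would use.
    ZooKeeper zk = new ZooKeeper("zoo1:2181,zoo2:2181,namenode:2181", 30000, null);

    // The HBase client looks up the active master's host:port under this znode
    // (the parent path is configurable via zookeeper.znode.parent, default /hbase).
    byte[] data = zk.getData("/hbase/master", false, null);
    System.out.println("Active master location: " + new String(data));

    zk.close();
  }
}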
 
  On Fri, Apr 13, 2012 at 1:23 AM, Henri Pipe henri.p...@gmail.com
 wrote:
 
   client.HConnectionManager$HConnectionImplementation: ZooKeeper
 available
   but no active master location found
  
   Having a problem with master startup that I have not seen before.
  
   running the following packages:
  
   hadoop-hbase-0.90.4+49.137-1
   hadoop-0.20-secondarynamenode-0.20.2+923.197-1
   

Is htable.delete(List<Delete>) transactional?

2012-04-16 Thread Haijia Zhou
Very simple question, as the subject says:
Is htable.delete(List<Delete>) transactional?
Say I am deleting 1000 rows and in the middle of the deletion some error
occurs; will the whole deletion operation get rolled back, or will it
end up as a partial deletion?

Thanks


Re: Is htable.delete(List<Delete>) transactional?

2012-04-16 Thread Haijia Zhou
I see, thanks a lot!

On Mon, Apr 16, 2012 at 7:41 PM, Ian Varley ivar...@salesforce.com wrote:

 More complex answer: generally, nothing that involves more than a single
 row in HBase is transactional. :)

 It's possible that HBase might get some limited form of multi-row
 transactions in the future (see HBASE-5229,
 https://issues.apache.org/jira/browse/HBASE-5229, for more on that) but
 even then, things would only be transactional within a single region
 server, which means it's not really a general solution for the case you
 mention below (short of some external guarantee that all of your deletes
 are on the same RS).

 That said: mutations are generally idempotent in HBase (except for
 increments). So if you get an exception, it's usually OK to just retry the
 whole thing.
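
 A hedged sketch of that retry pattern for a batch of deletes, leaning on the fact that deletes are idempotent; the table handle, retry count and backoff are assumptions for illustration, not anything prescribed by the HBase API:

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;

public class RetryingBatchDelete {
  // Re-submits the batch on failure; deletes are idempotent, so re-applying
  // any that already went through is harmless. Note the operation is NOT
  // atomic: a failure can still leave some rows deleted and others not.
  // maxAttempts must be >= 1.
  public static void deleteWithRetries(HTable table, List<Delete> deletes,
      int maxAttempts) throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        table.delete(deletes);
        return;
      } catch (IOException e) {
        last = e;
        Thread.sleep(1000L * attempt); // simple linear backoff between attempts
      }
    }
    throw last;
  }
}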

 Ian

 On Apr 16, 2012, at 6:33 PM, Jean-Daniel Cryans wrote:

 Simple answer: it's not transactional.

 J-D

 On Mon, Apr 16, 2012 at 4:28 PM, Haijia Zhou leons...@gmail.com wrote:
 Very simple question, as the subject says:
 Is htable.delete(List<Delete>) transactional?
 Say I am deleting 1000 rows and in the middle of the deletion some error
 occurs; will the whole deletion operation get rolled back, or will it
 end up as a partial deletion?

 Thanks