[jira] [Created] (HBASE-17500) Implement getTable/createTable/deleteTable/truncateTable methods

2017-01-19 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-17500:
--

 Summary: Implement getTable/createTable/deleteTable/truncateTable methods
 Key: HBASE-17500
 URL: https://issues.apache.org/jira/browse/HBASE-17500
 Project: HBase
  Issue Type: Sub-task
Reporter: Guanghao Zhang
Assignee: Guanghao Zhang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17499) Bound the total heap memory used for the rolling average of RegionLoads

2017-01-19 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17499:
--

 Summary: Bound the total heap memory used for the rolling average 
of RegionLoads
 Key: HBASE-17499
 URL: https://issues.apache.org/jira/browse/HBASE-17499
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu


Currently "hbase.master.balancer.stochastic.numRegionLoadsToRemember" controls 
the number of RegionLoads which are kept by StochasticLoadBalancer for each 
region.

The parameter doesn't take into account the number of regions in the cluster, 
so the amount of heap consumed by RegionLoads can grow disproportionately large 
on clusters with a large number of regions.

This issue is to see if we should bound the total heap memory used for the 
rolling average of RegionLoads.
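
To make the concern concrete, here is a minimal standalone sketch (this is not 
StochasticLoadBalancer code; the 256-byte-per-entry estimate and all names are 
assumptions for illustration) of deriving a per-region cap from a total-heap 
budget:

{code}
/**
 * Sketch only: bound the total heap used for remembered RegionLoads by
 * trimming the per-region history length against a configurable budget.
 */
public final class RegionLoadBudget {
  // Assumed rough footprint of one remembered RegionLoad, in bytes.
  private static final long BYTES_PER_REGION_LOAD = 256L;

  static int loadsPerRegion(int configuredLoads, int regionCount, long heapBudgetBytes) {
    if (regionCount <= 0) {
      return configuredLoads;
    }
    long totalLoadsAllowed = heapBudgetBytes / BYTES_PER_REGION_LOAD;
    int cap = (int) Math.max(1L, totalLoadsAllowed / regionCount);
    return Math.min(configuredLoads, cap);
  }

  public static void main(String[] args) {
    // 10k regions fit 15 loads per region under a 64 MB budget...
    System.out.println(loadsPerRegion(15, 10_000, 64L << 20));  // 15
    // ...but 200k regions would be trimmed to a single load per region.
    System.out.println(loadsPerRegion(15, 200_000, 64L << 20)); // 1
  }
}
{code}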




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17498) Implement listTables methods

2017-01-19 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-17498:
--

 Summary: Implement listTables methods
 Key: HBASE-17498
 URL: https://issues.apache.org/jira/browse/HBASE-17498
 Project: HBase
  Issue Type: Sub-task
Reporter: Guanghao Zhang
Assignee: Guanghao Zhang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17496) RSGroup shell commands: get_server_rsgroup doesn't work and commands display an incorrect result size

2017-01-19 Thread Guangxu Cheng (JIRA)
Guangxu Cheng created HBASE-17496:
-

 Summary: RSGroup shell commands: get_server_rsgroup doesn't work 
and commands display an incorrect result size
 Key: HBASE-17496
 URL: https://issues.apache.org/jira/browse/HBASE-17496
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0
Reporter: Guangxu Cheng
Assignee: Guangxu Cheng


The scenario is as follows:
{code}
hbase(main):001:0> get_server_rsgroup 'hbase-01:16030'

ERROR: undefined method `getRSGroupOfServer' for 
#

Here is some help for this command:
Get the group name the given region server is a member of.

  hbase> get_server_rsgroup 'server1:port1'

Took 0.0160 seconds
{code}
{code}
hbase(main):002:0> list_rsgroups
GROUPS
default
1484874115 row(s)
Took 0.3830 seconds
{code}
{code}
hbase(main):003:0> get_table_rsgroup 't1'
default
1484874133 row(s)
Took 0.0100 seconds
{code}
{code}
hbase(main):004:0> get_rsgroup 'default'
GROUP INFORMATION
Servers:
hbase-01:16030
Tables:
hbase:meta
t1
hbase:namespace
hbase:rsgroup
1484874150 row(s)
Took 0.0140 seconds
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15705) Add on meta cache.

2017-01-19 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-15705.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HBASE-14850

Pushed to branch. 

> Add on meta cache.
> --
>
> Key: HBASE-15705
> URL: https://issues.apache.org/jira/browse/HBASE-15705
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Fix For: HBASE-14850
>
> Attachments: hbase-15705_v2.patch, native-client-meta-cache-v2.patch
>
>
> We need to cache this stuff, and it needs to be fast.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17495) TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails due to assertion error

2017-01-19 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17495:
--

 Summary: TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning 
intermittently fails due to assertion error
 Key: HBASE-17495
 URL: https://issues.apache.org/jira/browse/HBASE-17495
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu


Looping through the test (based on commit 
76dc957f64fa38ce88694054db7dbf590f368ae7), I saw the following test failure:
{code}
testFlushCacheWhileScanning(org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush)
  Time elapsed: 0.53 sec  <<< FAILURE!
java.lang.AssertionError: toggle=false i=940 ts=1484852861597 expected:<94> but 
was:<92>
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:834)
  at org.junit.Assert.assertEquals(Assert.java:645)
  at 
org.apache.hadoop.hbase.regionserver.TestHRegion.testFlushCacheWhileScanning(TestHRegion.java:3533)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
{code}
See test output for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17494) Guard against cloning family of all cells if no data need be replicated

2017-01-19 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-17494:
-

 Summary: Guard against cloning family of all cells if no data need 
be replicated
 Key: HBASE-17494
 URL: https://issues.apache.org/jira/browse/HBASE-17494
 Project: HBase
  Issue Type: Improvement
Reporter: ChiaPing Tsai
Priority: Trivial


Replication is enabled by default, so we clone the family of all cells even if 
there is no replication at all.
{noformat}
family = CellUtil.cloneFamily(cell);
// Unexpected, has a tendency to happen in unit tests
assert htd.getFamily(family) != null;

if (!scopes.containsKey(family)) {
  int scope = htd.getFamily(family).getScope();
  if (scope != REPLICATION_SCOPE_LOCAL) {
    scopes.put(family, scope);
  }
}
{noformat}

HBASE-15205 resolved this issue, but that change was committed to master only.
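
For reference, here is a sketch of the kind of guard intended (an illustration, 
not the committed HBASE-15205 change): match the cell's family against the 
table descriptor without copying, and defer the clone until a scope actually 
has to be recorded.

{code}
// Sketch only; cell, htd and scopes come from the snippet above, with
// scopes assumed to be keyed by a byte[]-aware comparator.
for (HColumnDescriptor hcd : htd.getFamilies()) {
  if (!CellUtil.matchingFamily(cell, hcd.getName())) {
    continue;
  }
  int scope = hcd.getScope();
  if (scope != REPLICATION_SCOPE_LOCAL && !scopes.containsKey(hcd.getName())) {
    // Pay for the family copy only when the scope must be remembered.
    scopes.put(CellUtil.cloneFamily(cell), scope);
  }
  break;
}
{code}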



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17493) TestAsyncTableGetMultiThreadedWithEagerCompaction intermittently fails due to NotServingRegionException

2017-01-19 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17493:
--

 Summary: TestAsyncTableGetMultiThreadedWithEagerCompaction 
intermittently fails due to NotServingRegionException
 Key: HBASE-17493
 URL: https://issues.apache.org/jira/browse/HBASE-17493
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


From 
https://builds.apache.org/job/PreCommit-HBASE-Build/5322/artifact/patchprocess/patch-unit-hbase-server.txt :
{code}
test(org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreadedWithEagerCompaction)
  Time elapsed: 90.51 sec  <<< ERROR!
org.apache.hadoop.hbase.NotServingRegionException: 
org.apache.hadoop.hbase.NotServingRegionException: Region 
async,222,1484797745093.f9b23b061b6cc56bc801f6d962fc5313. is not online on 
089e51c5fdc1,37640,1484797689531
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3161)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1239)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.splitRegion(RSRpcServices.java:2044)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:25093)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1140)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)

at sun.reflect.GeneratedConstructorAccessor34.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:95)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:85)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:357)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:334)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.split(ProtobufUtil.java:1948)
at org.apache.hadoop.hbase.client.HBaseAdmin.split(HBaseAdmin.java:1685)
at org.apache.hadoop.hbase.client.HBaseAdmin.split(HBaseAdmin.java:1646)
at 
org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded.test(TestAsyncTableGetMultiThreaded.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 

Re: SocketTimeoutException on regionservers

2017-01-19 Thread Yu Li
Have you checked the RS GC log and observed any long pauses?

Please also check the load of your RS/DN machines when this
SocketTimeoutException happens; there may be pauses caused by a system
stall rather than JVM GC.

Hope this helps.

Best Regards,
Yu

On 17 January 2017 at 13:29, Stack  wrote:

> Timeout after waiting ten seconds to read from HDFS is no fun. Tell us more
> about your HDFS setup. Do you collect system metrics on disks? Are the
> machines healthy all around? How frequently does this occur?
>
> Thanks for the question,
> S
>
> On Thu, Jan 12, 2017 at 10:18 AM, Tulasi Paradarami <
> tulasi.krishn...@gmail.com> wrote:
>
> > Hi,
> >
> > I noticed that RegionServers are intermittently raising the following
> > exceptions, which manifest as request timeouts on the client side. HDFS
> > is in a healthy state and there are no corrupted blocks (per "hdfs fsck"
> > results). Datanodes were not out of service when these errors occurred,
> > and GC on datanodes is usually around 0.3 sec.
> >
> > Also, when these exceptions occur, HDFS metric "Send Data Packet Blocked
> > On Network Average Time" tends to go up.
> >
> > Here are the configured values for some of the relevant parameters:
> > dfs.client.socket-timeout: 10s
> > dfs.datanode.socket.write.timeout: 10s
> > dfs.namenode.avoid.read.stale.datanode: true
> > dfs.namenode.avoid.write.stale.datanode: true
> > dfs.datanode.max.xcievers: 8192
> >
> > Any pointers towards what could be causing these exceptions are
> > appreciated. Thanks.
> >
> > CDH 5.7.2
> > HBase 1.2.0
> >
> > ---> Regionserver logs
> >
> > 2017-01-11 19:19:04,940 WARN
> >  [PriorityRpcServer.handler=3,queue=1,port=60020]
> hdfs.BlockReaderFactory:
> > I/O error constructing remote block reader.
> > java.net.SocketTimeoutException: 1 millis timeout while waiting for
> > channel to be ready for read. ch :
> > java.nio.channels.SocketChannel[connected local=/datanode3:27094
> > remote=/datanode2:50010]
> > at
> > org.apache.hadoop.net.SocketIOWithTimeout.doIO(
> > SocketIOWithTimeout.java:164)
> > ...
> >
> > 2017-01-11 19:19:04,995 WARN
> >  [PriorityRpcServer.handler=11,queue=1,port=60020] hdfs.DFSClient:
> > Connection failure: Failed to connect to /datanode2:50010 for file
> > /hbase/data/default//ec9ca
> > java.net.SocketTimeoutException: 1 millis timeout while waiting for
> > channel to be ready for read. ch :
> > java.nio.channels.SocketChannel[connected local=/datanode3:27107
> > remote=/datanode2:50010]
> > at
> > org.apache.hadoop.net.SocketIOWithTimeout.doIO(
> > SocketIOWithTimeout.java:164)
> > at
> > org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> > at
> > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.
> > readChannelFully(PacketReceiver.java:258)
> > at
> > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(
> > PacketReceiver.java:209)
> > at
> > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(
> > PacketReceiver.java:171)
> > at
> > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.
> > receiveNextPacket(PacketReceiver.java:102)
> > at
> > org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(
> > RemoteBlockReader2.java:207)
> > at
> > org.apache.hadoop.hdfs.RemoteBlockReader2.read(
> > RemoteBlockReader2.java:156)
> > at
> > org.apache.hadoop.hdfs.BlockReaderUtil.readAll(BlockReaderUtil.java:32)
> > at
> > org.apache.hadoop.hdfs.RemoteBlockReader2.readAll(
> > RemoteBlockReader2.java:386)
> > at
> > org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(
> > DFSInputStream.java:1193)
> > at
> > org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(
> > DFSInputStream.java:1112)
> > at
> > org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1473)
> > at
> > org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1432)
> > at
> > org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:89)
> > at
> > org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(
> > HFileBlock.java:752)
> > at
> > org.apache.hadoop.hbase.io.hfile.HFileBlock$
> AbstractFSReader.readAtOffset(
> > HFileBlock.java:1448)
> > at
> > org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.
> > readBlockDataInternal(HFileBlock.java:1648)
> > at
> > org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.
> > readBlockData(HFileBlock.java:1532)
> > at
> > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(
> > HFileReaderV2.java:445)
> > at
> > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.
> > loadDataBlockWithScanInfo(HFileBlockIndex.java:261)
> > at
> > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(
> > HFileReaderV2.java:642)
> > at
> > 

Successful: HBase Generate Website

2017-01-19 Thread Apache Jenkins Server
Build status: Successful

If successful, the website and docs have been generated. To update the live 
site, follow the instructions below. If the build failed, skip to the bottom of 
this email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site. If you prefer to keep the hbase-site repo around 
permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git

  cd hbase-site
  wget -O- 
https://builds.apache.org/job/hbase_generate_website/464/artifact/website.patch.zip
 | funzip > cb9ce2ceafb5467522b1b380956446e40b8250d5.patch
  git fetch
  git checkout -b asf-site-cb9ce2ceafb5467522b1b380956446e40b8250d5 
origin/asf-site
  git am --whitespace=fix cb9ce2ceafb5467522b1b380956446e40b8250d5.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-cb9ce2ceafb5467522b1b380956446e40b8250d5 branch.

There are lots of spurious changes, such as timestamps and CSS styles in 
tables, so a generic git diff is not very useful. To see a list of files that 
have been added, deleted, renamed, changed type, or are otherwise interesting, 
use the following command:

  git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

  git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using these 
commands:

  git commit --allow-empty -m "Empty commit" # to work around a current ASF 
INFRA bug
  git push origin asf-site-cb9ce2ceafb5467522b1b380956446e40b8250d5:asf-site
  git checkout asf-site
  git branch -D asf-site-cb9ce2ceafb5467522b1b380956446e40b8250d5

Changes take a couple of minutes to be propagated. You can verify whether they 
have been propagated by looking at the Last Published date at the bottom of 
http://hbase.apache.org/. It should match the date in the index.html on the 
asf-site branch in Git.

As a courtesy, reply-all to this email to let other committers know you pushed 
the site.



If the build failed, see https://builds.apache.org/job/hbase_generate_website/464/console

[jira] [Created] (HBASE-17492) Fix the compacting memstore part in hbase shell ruby script

2017-01-19 Thread Anastasia Braginsky (JIRA)
Anastasia Braginsky created HBASE-17492:
---

 Summary: Fix the compacting memstore part in hbase shell ruby 
script 
 Key: HBASE-17492
 URL: https://issues.apache.org/jira/browse/HBASE-17492
 Project: HBase
  Issue Type: Sub-task
Reporter: Anastasia Braginsky


Make the MemoryCompaction enum an external class rather than an internal class 
of HColumnDescriptor. The enum is used in the ruby shell script, which doesn't 
accept the internal class and fails with an error.
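
A minimal sketch of the refactor (the class and constant names here are 
assumptions, not necessarily the final ones):

{code}
// Before (sketch): nested in HColumnDescriptor, which the shell's JRuby
// code fails to resolve.
public class HColumnDescriptor {
  public enum MemoryCompaction { NONE, BASIC, EAGER }
}

// After (sketch): a top-level enum in its own file, referenced from the
// ruby script as a plain Java class.
public enum MemoryCompaction { NONE, BASIC, EAGER }
{code}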



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17491) Remove all setters from HTable interface and introduce a TableBuilder to build Table instance

2017-01-19 Thread Yu Li (JIRA)
Yu Li created HBASE-17491:
-

 Summary: Remove all setters from HTable interface and introduce a 
TableBuilder to build Table instance
 Key: HBASE-17491
 URL: https://issues.apache.org/jira/browse/HBASE-17491
 Project: HBase
  Issue Type: Sub-task
Reporter: Yu Li
Assignee: Yu Li


As titled, we will remove all setters from HTable on the master branch and 
deprecate them on branch-1 to make HTable thread-safe. A new {{TableBuilder}} 
interface will be introduced to build Table instances.
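
A rough sketch of the builder shape being proposed (the method names are 
assumptions on my part, not the final API):

{code}
// Sketch only: per-instance settings move off Table onto a builder, so a
// built Table can stay immutable and therefore thread-safe.
public interface TableBuilder {
  TableBuilder setOperationTimeout(int timeoutMs);
  TableBuilder setRpcTimeout(int timeoutMs);
  Table build();
}

// Hypothetical usage, assuming a getTableBuilder method on Connection:
// Table table = connection.getTableBuilder(tableName, pool)
//     .setOperationTimeout(30000)
//     .build();
{code}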



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-8065) bulkload can load the hfile into hbase table, but this mechanism can't remove prior data

2017-01-19 Thread Yuan Kang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Kang resolved HBASE-8065.
--
Resolution: Duplicate

> bulkload can load the hfile into hbase table, but this mechanism can't remove 
> prior data
> ---
>
> Key: HBASE-8065
> URL: https://issues.apache.org/jira/browse/HBASE-8065
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC, mapreduce, regionserver
>Affects Versions: 0.94.0
> Environment: hadoop-1.0.2、hbase-0.94.0
>Reporter: Yuan Kang
>Assignee: Yuan Kang
>Priority: Critical
> Attachments: LoadIncrementalHFiles-bulkload-can-clean-olddata.patch
>
>
> This patch adds one more parameter to bulkload, 'need to refresh'. When this 
> parameter is true, bulkload can clean the old data in the hbase table and 
> then load the new data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17490) hbck can't check zk /hbase/table-lock inconsistency

2017-01-19 Thread Yuan Kang (JIRA)
Yuan Kang created HBASE-17490:
-

 Summary: hbck can't check zk /hbase/table-lock inconsistency
 Key: HBASE-17490
 URL: https://issues.apache.org/jira/browse/HBASE-17490
 Project: HBase
  Issue Type: Bug
  Components: Admin
Affects Versions: 0.98.21
 Environment: apache 0.98.21
Reporter: Yuan Kang


When I found a table with orphaned znodes, I deleted the /hbase/table/tablename 
znode in zk directly, instead of using hbck -fixOrphanedTableZnodes.

When I created the table again, I got a 'table exists' error. But when I ran 
hbck again, it told me there was no problem.

I then checked hbase:meta, zk, and the master, and found a 
/hbase/table-lock/tablename znode for a table that had been dropped. I guess it 
may be the reason.
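
A minimal standalone sketch of the kind of check hbck could add (the znode 
paths and plain ZooKeeper client usage are assumptions for illustration; this 
is not hbck code):

{code}
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

/** Sketch only: flag table-lock znodes whose table znode no longer exists. */
public final class TableLockZnodeCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder quorum address; adjust for the real cluster.
    ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, event -> { });
    try {
      List<String> locks = zk.getChildren("/hbase/table-lock", false);
      List<String> tables = zk.getChildren("/hbase/table", false);
      for (String name : locks) {
        if (!tables.contains(name)) {
          System.out.println("Orphaned lock znode: /hbase/table-lock/" + name);
        }
      }
    } finally {
      zk.close();
    }
  }
}
{code}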



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)