Hello Users,

 

I came across an interesting observation that I'd like to share.

I'm running a cluster with 2 DataNodes, 1 NameNode / SecondaryNameNode, 1
HMaster and 2 RegionServers.

The HDFS replication factor is 2.
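
In case it helps to reproduce, the replication factor and the block locations
can be checked with fsck (the path below is simply the HBase root directory in
my setup):

./hadoop fsck /hbase -files -blocks -locations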

 

I created a table, say temp, via the HBase shell and put one entry (row) into
it. I then deleted the DFS blocks specific to that particular table from both
of my DataNodes.
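
The steps were roughly the following; the row key, qualifier and value are
placeholders rather than my exact entry, and the DataNode data directory
depends on the dfs.data.dir setting:

hbase> create 'temp', 'colfam'
hbase> put 'temp', 'row1', 'colfam:q1', 'value1'

# on each DataNode, locate the block files under dfs.data.dir and delete the
# ones that belong to the table (the path is a placeholder for my data dir)
find /data/dfs/data -name 'blk_*'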

I confirmed this by doing an ls on the table directory under the HBase root
directory, for example: ./hadoop fs -ls /hbase/temp/123456/colfam, and there
was no file (data) under this directory.

I then performed scan 'temp' and it returned the row. Fine, I assumed the data
was still cached in the MemStore, which is why I could still read it. No
problem so far.
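
For completeness, this is all I ran from the shell at that point (the row key
is again just a placeholder for my actual entry):

hbase> scan 'temp'
hbase> get 'temp', 'row1'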

 

So I restarted the HBase cluster to clear the cache, and from the shell I
performed the scan 'temp' operation again. Strangely, I could still fetch the
data I had deleted, and a data file was now present under the table directory.
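
I restarted it with the standard scripts (assuming a setup like mine where
everything is started from $HBASE_HOME) and then scanned again:

$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh
hbase> scan 'temp'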

 

I could see the same data, for example by doing

./hadoop fs -cat /hbase/temp/123456/colfam/4588349323987497

 

NOTE: I didn't touch the .META. table, nor did I dare to alter the entries in
it :-)

 

I also couldn't see the HLog being replayed in the logs.
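
For what it's worth, I checked simply by grepping the logs; the log directory
and file name pattern below are just how my installation is laid out, yours
may differ:

grep -i hlog $HBASE_HOME/logs/hbase-*-regionserver-*.log
grep -i hlog $HBASE_HOME/logs/hbase-*-master-*.log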

 

Kindly brief me on what is happening here.

 

-Mohit  

 


 
