Hi Mohit,

On Fri, Dec 24, 2010 at 6:15 AM, Mohit <[email protected]> wrote:

>
>
> I created a table, say temp, via the HBase shell and put one entry (row)
> into it. I then deleted the DFS blocks specific to that particular table
> from both of my DataNodes.
>
> I confirmed this by doing an ls on the table directory under the root HBase
> directory, for example: ./hadoop fs -ls /hbase/temp/123456/colfam, and there
> were no files (data) under this directory.
>

Deleting HDFS blocks directly on the DNs wouldn't have affected the output of
fs -ls, since the listing is answered from the NameNode's namespace metadata.
The fact that you see no data at this point means you must have deleted some
other files from the DNs (maybe the logs?).
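A toy model of that split may help. This is plain Python with invented class and method names (not the HDFS API): the listing comes from the NameNode's namespace, while the block bytes live on the DataNodes, so deleting block files on a DN leaves the listing unchanged.

```python
# Toy model of HDFS's split between namespace metadata (NameNode) and
# block storage (DataNodes). All names here are illustrative only.

class ToyNameNode:
    def __init__(self):
        # path -> list of block ids; this is what 'fs -ls' consults
        self.namespace = {}

    def create(self, path, block_ids):
        self.namespace[path] = list(block_ids)

    def ls(self, prefix):
        return sorted(p for p in self.namespace if p.startswith(prefix))


class ToyDataNode:
    def __init__(self):
        self.blocks = {}  # block id -> bytes on local disk

    def delete_block(self, block_id):
        self.blocks.pop(block_id, None)


nn = ToyNameNode()
dn = ToyDataNode()
dn.blocks["blk_1"] = b"row data"
nn.create("/hbase/temp/123456/colfam/f1", ["blk_1"])

# Deleting the block bytes on the DataNode...
dn.delete_block("blk_1")
# ...does not touch the NameNode's namespace, so the file still lists:
print(nn.ls("/hbase/temp"))
```

In other words, an empty `fs -ls` means the file entries themselves were removed, not just their blocks.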


>
> I then performed scan 'temp' and it returned the result. OK, I thought the
> data was cached in the memstore, so I was still able to get it. No problem
> so far.
>

Yep, it's probably in the memstore.

>
>
> So I restarted the HBase cluster to clear the cache, and from the shell I
> performed the scan 'temp' operation. Strangely, I was able to fetch the
> data I had deleted, and the data file was also present under the table
> directory.
>

When you restarted HBase, it probably flushed the memstore to a storefile on shutdown, which is why the file reappeared under the table directory.
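A minimal sketch of that write path, in plain Python with invented names (not the HBase API): puts land in an in-memory memstore, and a flush, which also happens during a clean shutdown, writes those edits out as a new storefile.

```python
# Toy sketch of an HBase-style memstore flush. Class and method names are
# illustrative only; real HBase does this inside the region server.

class ToyRegion:
    def __init__(self):
        self.memstore = {}    # recent edits, held in memory only
        self.storefiles = []  # each flush produces an immutable file (a dict here)

    def put(self, row, value):
        self.memstore[row] = value

    def scan(self):
        # Results come from the storefiles overlaid with the memstore
        merged = {}
        for sf in self.storefiles:
            merged.update(sf)
        merged.update(self.memstore)
        return merged

    def flush(self):
        # Triggered on demand, on memstore pressure, or on a clean shutdown
        if self.memstore:
            self.storefiles.append(dict(self.memstore))
            self.memstore = {}


region = ToyRegion()
region.put("row1", "value1")

# The row is visible even though nothing is on "disk" yet:
assert region.scan() == {"row1": "value1"} and region.storefiles == []

# Shutdown flushes the memstore, recreating the data as a storefile:
region.flush()
assert region.memstore == {} and region.storefiles == [{"row1": "value1"}]
```

So deleting the old files and then restarting doesn't lose the edit: the copy still in the memstore is simply written back out as a fresh storefile.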


>
>
>
> I could see the same data, for example by doing
>
> ./hadoop fs -cat /hbase/temp/123456/colfam/4588349323987497
>
>
This is the flushed file from above.

-Todd
-- 
Todd Lipcon
Software Engineer, Cloudera
