Try running "hbase hbck -fix".
It should do the job.
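For what it's worth, a minimal sketch of that repair sequence, assuming the `hbase` script is on your PATH and the cluster is up (options beyond `-fix` vary by HBase version):

```shell
# First run a read-only consistency check to see what hbck reports
# (e.g. HDFS region directories with no region assigned in META):
hbase hbck

# If inconsistencies are listed, attempt the automatic repair:
hbase hbck -fix

# Re-run the read-only check afterwards to confirm the fix took:
hbase hbck
```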
Thank you!
Sincerely,
Leonid Fedotov
On Apr 12, 2013, at 9:56 AM, Brennon Church wrote:
hbck does show the hdfs files there without associated regions. I
probably could have recovered had I noticed just after this happened,
but given that we've been running like this for over a week, and that
there is the potential for collisions between the missing and new data,
I'm probably just …
Brennon:
Can you try hbck to see if the problem is repaired?
Thanks
On Fri, Apr 12, 2013 at 9:27 AM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
Oh, sorry to hear that. But I think the data should still be there in the
system, just not accessible to you. We should be able to bring it back.
One set of logs that would be of interest is that of the RS and master when
the split happened.
And the main thing would be that when you restarted your cluster …
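To pull those logs together, something like the following might help. This is only a sketch: the log directory and file-name pattern are assumptions, since actual locations depend on how HBase was installed.

```shell
# Hypothetical log location; adjust to your installation.
LOG_DIR=/var/log/hbase

# The regionservers log the split itself; the master logs the resulting
# META updates. Search both around the time the split happened:
grep -i 'split' "$LOG_DIR"/hbase-*-regionserver-*.log
grep -i 'split' "$LOG_DIR"/hbase-*-master-*.log
```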
Hello,
We lost the data when the parent regions got reopened. My guess, and
it's only that, is that the regions were essentially empty when they
started up again in these cases. We definitely lost data from the tables.
I've looked through the hdfs and hbase logs and can't find any obvious …
Brennon:
Have you run hbck to diagnose the problem?
Since the issue might have involved hdfs, browsing DataNode log(s) may
provide some clue as well.
What hadoop version are you using ?
Cheers
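A quick sketch of those two checks (the DataNode log path is an assumption; adjust it to wherever your Hadoop logs live):

```shell
# Report the Hadoop version in use:
hadoop version

# Scan the DataNode log for recent errors or exceptions that might
# explain missing region files (hypothetical log path):
grep -iE 'error|exception' /var/log/hadoop/*datanode*.log | tail -n 50
```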
On Thu, Apr 11, 2013 at 10:58 PM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
When you say that the parent regions got reopened, does that mean that you
did not lose any data (i.e. the data just could not be read)? The reason I am
asking is: if, after the parent got split into daughters, the data was written
to the daughters, and the daughter-related files could not be opened, you
could have …
Hello,
I had an interesting problem come up recently. We have a few thousand
regions across 8 datanode/regionservers. I made a change, increasing
the heap size for hadoop from 128M to 2048M, which ended up bringing the
cluster to a complete halt after about 1 hour. I reverted back to 128M
a…
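For reference, a heap change like the one described is typically made in hadoop-env.sh. This is only a config sketch; the exact file location and whether the value is in MB depend on the Hadoop version in use.

```shell
# conf/hadoop-env.sh (sketch)
# Maximum heap, in MB, for Hadoop daemons started by the scripts.
# The change described in the thread was 128 -> 2048 (later reverted):
export HADOOP_HEAPSIZE=2048
```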