Would the restore be faster with a newer version of HBase?
At 2015-09-26 01:21:41, "Alexandre Normand" wrote:
>We tried decommissioning the slow node and that still didn't help. We then
>increased the timeouts to 90 minutes and we had a successful restore after
>32
We tried decommissioning the slow node and that still didn't help. We then
increased the timeouts to 90 minutes and we had a successful restore after
32 minutes.
Better a slow restore than a failed restore.
Cheers!
On Thu, Sep 24, 2015 at 8:34 PM, Alexandre Normand <
bq. Excluding datanode RS-1:50010
Was RS-1 the only datanode to be excluded in that timeframe?
Have you run fsck to see if HDFS is healthy?
Cheers
On Thu, Sep 24, 2015 at 7:47 PM, Alexandre Normand <
alexandre.norm...@opower.com> wrote:
> Hi Ted,
> We'll be upgrading to cdh5 in the coming
Hi Ted,
We'll be upgrading to CDH5 in the coming months, but we're unfortunately
stuck on 0.94.6 at the moment.
The RS logs were empty around the time of the failed snapshot restore
operation, but the following errors were in the master log. The node
'RS-1' is the only node indicated in the logs.
Hey,
We're trying to restore a snapshot of a relatively big table (20TB) using
HBase 0.94.6-cdh4.5.0 and we're getting timeouts doing so. We increased the
timeout configurations (hbase.snapshot.master.timeoutMillis,
hbase.snapshot.region.timeout, hbase.snapshot.master.timeout.millis) to 10
minutes
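
For anyone hitting the same issue, a minimal hbase-site.xml sketch of the three properties named above. The value shown corresponds to the 90-minute timeout that eventually produced a successful restore in this thread (5,400,000 ms); which spelling of the master timeout property is honored depends on the HBase version, so setting all three is a defensive assumption, not a documented requirement:

```xml
<!-- Sketch only: property names as cited in this thread; 5400000 ms = 90 min -->
<property>
  <name>hbase.snapshot.master.timeoutMillis</name>
  <value>5400000</value>
</property>
<property>
  <name>hbase.snapshot.region.timeout</name>
  <value>5400000</value>
</property>
<property>
  <name>hbase.snapshot.master.timeout.millis</name>
  <value>5400000</value>
</property>
```

A rolling restart of the master (and region servers, for the region timeout) would be needed for these to take effect on 0.94.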