I am in a very similar situation.
I guess you can try one of the options.
Option one: avoid online insert by preparing data off-line. Do something like
http://hbase.apache.org/0.94/book/ops_mgt.html#importtsv
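For illustration, a minimal sketch of that offline-prepare path (the table name, column mapping, jar name, and HDFS paths below are placeholders, not from the thread):

```shell
# Parse TSV input into HFiles offline instead of going through the online
# write path. -Dimporttsv.bulk.output makes importtsv emit HFiles rather
# than doing Puts against the live table.
hadoop jar hbase-server.jar importtsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1,cf:col2 \
  -Dimporttsv.bulk.output=hdfs:///tmp/storefile-output \
  mytable hdfs:///tmp/input-dir
```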
Option two: if the first option doesn’t work for you, it may be better to
reduce
Hi Solin,
The timeout messages are usually a consequence of other issues with the
connectivity between the Namenode and the QJM. Assuming the Regionservers are
configured properly for HDFS HA, pointing to an HDFS nameservice instead of a
direct namenode address, they should also be resilient to a
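For reference, this is roughly what "pointing to a nameservice" looks like in configuration; a sketch only, and the nameservice name, namenode IDs, and hostnames are hypothetical:

```xml
<!-- hdfs-site.xml: define a logical nameservice with two namenodes -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<!-- hbase-site.xml: HBase refers to the nameservice, not a single namenode -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://mycluster/hbase</value>
</property>
```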
w.r.t. option #1, also consider
http://hbase.apache.org/book.html#arch.bulk.load
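The bulk-load step that follows HFile preparation can be sketched as below (paths and table name are placeholders):

```shell
# Atomically move the prepared HFiles into the running table.
# LoadIncrementalHFiles is the "completebulkload" tool from the
# HBase bulk-load documentation linked above.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  hdfs:///tmp/storefile-output mytable
```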
FYI
On Tue, Dec 15, 2015 at 12:17 PM, Frank Luo wrote:
> I am in a very similar situation.
>
> I guess you can try one of the options.
>
> Option one: avoid online insert by preparing data
Colin:
You may want to take a look at HDFS-8298 where the posted stack trace looks
similar to what you described.
Cheers
On Mon, Dec 14, 2015 at 5:17 PM, Colin Kincaid Williams
wrote:
> We had a namenode go down due to a timeout with the HDFS HA QJM journal:
>
>
>
> 2015-12-09
Thanks for your advice.
For option three, I think major compaction on a large region will affect the
performance of the region server, so the downtime would be downtime for all
the tables on that RS, am I right?
On 12/16/15, 5:12 AM, "Ted Yu" wrote:
>w.r.t. option #1,
bq. the downtime would be downtime for all the tables on that RS
All the tables should be major compacted, but not necessarily around the
same time.
Major compaction schedule can be adjusted according to off peak hours for
the underlying table(s).
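One common way to adjust the schedule (a sketch; the table name is a placeholder) is to disable the periodic trigger and compact explicitly during off-peak hours:

```shell
# In hbase-site.xml, set hbase.hregion.majorcompaction=0 to disable
# time-based major compaction, then trigger it per table, e.g. from a
# cron job that runs off peak:
echo "major_compact 'mytable'" | hbase shell
```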
Cheers
On Tue, Dec 15, 2015 at 6:57 PM, 林豪