I'm checking now, but I think there was some kind of network problem during
that time.
I was also asking for future reference =-)
On Mon, Sep 7, 2015 at 4:42 PM, Ted Yu wrote:
For the region with the corrupt hfile, regardless of how many times it is attempted
to be opened, failure to open would result, right?
On Sep 7, 2015, at 1:36 PM, Hbase Janitor wrote:
Is there any way to make a region in transition try again if the initial try
failed?
On Mon, Sep 7, 2015 at 3:22 PM, Hbase Janitor wrote:
Looks like I have a corrupt hfile, probably related to the replication
problems I'm having now.
Thanks for the help!
On Mon, Sep 7, 2015 at 3:16 PM, Ted Yu wrote:
After the timeout, the master would re-assign those regions.
Can you check the region server log to see why the region open failed?
Was there any HFile which couldn't be opened?
Pastebin a log snippet if needed.
Cheers
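(For reference, one way to check whether a store file can still be read is to open it directly with the HFile reader API. This is only a sketch against the HBase 1.0 client jars; the path argument is a placeholder, not something from this thread.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;

public class HFileCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path(args[0]); // placeholder: the store file named in the region server log
    // Opening the reader and loading the file info fails fast on a corrupt file.
    HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf), conf);
    try {
      reader.loadFileInfo();
      System.out.println("Readable, entries: " + reader.getEntries());
    } finally {
      reader.close();
    }
  }
}

The bundled HFile tool (hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f <path>) can do a similar check from the shell.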
On Mon, Sep 7, 2015 at 11:24 AM, Hbase Janitor wrote:
Hi,
I have another problem: on my hbase 1.0 cluster I have a lot of regions
that seem to be stuck in transition. I saw an exception attempting to
connect to zookeeper around the time they were hung.
Is there a way to force them to retry?
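(A rough sketch of one way to ask the master to assign the stuck regions again from the Java client, assuming the HBase 1.0 Admin API; the table name is a placeholder. The hbase shell's assign command does the same thing one region at a time.)

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ReassignRegions {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // "my_table" is a placeholder; re-issue an assign for each region of the affected table.
      for (HRegionInfo region : admin.getTableRegions(TableName.valueOf("my_table"))) {
        admin.assign(region.getRegionName());
      }
    }
  }
}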
I wanted to add that the message doesn't just happen once or twice; it's flooding
the logs.
We had to stop replication and stop the region server to stop it.
No failed jobs; this is happening when the cluster processes replication
events.
On Mon, Sep 7, 2015 at 2:00 PM, Ted Yu wrote:
WrongRegionException is retriable, meaning the client would retry upon
receiving the exception.
Did you observe any failed job(s)?
Cheers
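(For context, how many times the client retries and how long it pauses between attempts are client-side settings; a minimal sketch below, and the values are illustrative, not recommendations.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClientRetrySettings {
  public static Connection connect() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Illustrative values only; tune per cluster.
    conf.setInt("hbase.client.retries.number", 10); // attempts per operation
    conf.setInt("hbase.client.pause", 100);         // base pause in ms, backed off between attempts
    return ConnectionFactory.createConnection(conf);
  }
}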
On Mon, Sep 7, 2015 at 10:54 AM, Hbase Janitor wrote:
Ted, thank you for your response.
Yes, there was. Looks like it moved yesterday morning.
On Mon, Sep 7, 2015 at 1:47 PM, Ted Yu wrote:
Was there region movement prior to 11:24:00 (on the region server
where WrongRegionException was observed)?
Cheers
On Mon, Sep 7, 2015 at 9:59 AM, Hbase Janitor wrote:
Hi,
We've recently upgraded to hbase 1.0 and we are seeing a strange error in
the region logs:
2015-09-07 11:24:00,960 WARN org.apache.hadoop.hbase.regionserver.HRegion:
Failed getting lock in batch put, row=HMV14395619228
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row
My feeling is that the lower bound for table regions should be:
my_table_region_count > REGION_SERVER_count * 3.
Each region server should get at least one region of the table, so your read/write
load would be evenly distributed across all region servers in any case.
*Assumption is that your data is no
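(A worked instance of that rule of thumb, taking the 5-region-server cluster mentioned later in this thread; the numbers are only illustrative.)

public class RegionCountRuleOfThumb {
  public static void main(String[] args) {
    int regionServers = 5;                   // e.g. a 5-node cluster
    int minTableRegions = regionServers * 3; // my_table_region_count > REGION_SERVER_count * 3
    System.out.println("Aim for more than " + minTableRegions + " regions in the table"); // 15
  }
}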
For the 96-region table, the region size is too small.
In production, I have seen region sizes as high as 50GB.
FYI
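(If the aim is fewer, larger regions, the split threshold can be raised per table; a minimal sketch against the HBase 1.0 admin API, with a placeholder table name and an illustrative 20GB threshold. The cluster-wide equivalent is hbase.hregion.max.filesize, and depending on settings the table may need to be disabled before being modified.)

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RaiseRegionSize {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("my_table"); // placeholder name
      HTableDescriptor desc = admin.getTableDescriptor(table);
      desc.setMaxFileSize(20L * 1024 * 1024 * 1024); // split regions only past ~20GB (illustrative)
      admin.modifyTable(table, desc);
    }
  }
}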
I am using Titan, which uses hbase as its storage engine. The hbase
version is 1.0.0-cdh5.4.4.
It's a full table scan over a large table. Is there any configuration
I can change to tackle this problem?
The exception stack is:
Exception in thread "main" java.lang.RuntimeException:
org.apache.hadoop.
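(The stack trace above is cut off, so this is only a guess at the usual suspect for long full-table scans, scanner lease/RPC timeouts; a sketch using the plain HBase 1.0 client API, with illustrative values and a placeholder table name. When the scan is issued through Titan, these settings would have to be passed via Titan's storage configuration.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class LongScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Illustrative values: allow each scanner call more time before the lease/RPC times out.
    conf.setInt("hbase.client.scanner.timeout.period", 600000);
    conf.setInt("hbase.rpc.timeout", 600000);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) { // placeholder name
      Scan scan = new Scan();
      scan.setCaching(100); // fewer rows per RPC keeps each round trip short
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result row : scanner) {
          // process the row here
        }
      }
    }
  }
}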
Hi,
I would like to know about the pros and cons of small region sizes.
Currently I have a cluster with 5 nodes, which serves 5 tables, but there are ~80
regions per node, while the actual data (total size of all hstores) is ~50GB.
Isn't it an overhead, since there is a table which is ~30MB and has 96 regions?