Hi,
We are trying to restore HBase from a backup saved to AWS S3, but the "hbase
shell" command is stuck at this step:
> DEBUG org.apache.zookeeper.ClientCnxn - Reading reply
> sessionid:0x148af374d1f0031, packet:: clientPath:null serverPath:null
> finished:false header:: 8,4 replyHeader:: 8,262,-101
Hi All,
Having the problem below; I don't know how to troubleshoot it further, but I
can provide any required information. Running CDH 4.3.0.0 (HBase 0.94.6) on
RHEL 6.2.
HBase shows an increasing number of IPC threads in BLOCKED state.
Hundreds of these, more and more appearing over hours, performance
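A hedged sketch of how one might capture the evidence for a problem like this (the process-matching pattern and file names are assumptions; adjust for your deployment). Several thread dumps taken a few seconds apart show which monitor the BLOCKED IPC handler threads are contending on:

```shell
# Find the RegionServer PID (pattern is an assumption; verify with `jps`).
RS_PID=$(pgrep -f HRegionServer)

# Take a few thread dumps spaced apart to see whether the same lock
# is held across samples.
for i in 1 2 3; do
  jstack "$RS_PID" > "rs-threaddump-$i.txt"
  sleep 10
done

# Look at the blocked handlers and the monitor they are waiting on.
grep -A 5 'BLOCKED' rs-threaddump-1.txt
```

Comparing the dumps usually identifies a single lock holder worth investigating further.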
Thank you Ted.
-Nishan
On Thu, Sep 25, 2014 at 11:56 AM, Ted Yu wrote:
> There should not be impact to hbase write performance for two column
> families.
>
> Cheers
>
> On Thu, Sep 25, 2014 at 10:53 AM, Nishanth S
> wrote:
>
> > Thank you Ted. No, I do not plan to use bulk loading since the data is
> > incremental in nature.
There should not be impact to hbase write performance for two column
families.
Cheers
On Thu, Sep 25, 2014 at 10:53 AM, Nishanth S
wrote:
> Thank you Ted. No, I do not plan to use bulk loading since the data is
> incremental in nature.
>
> On Thu, Sep 25, 2014 at 11:36 AM, Ted Yu wrote:
Deleting the contents of /apps/hbase/data/.tmp fixed the problem
On Sep 25, 2014, at 1:48 PM, Ted Yu wrote:
> bq. is it safe to delete that stuff?
> Yes. You have the exported snapshot as source of truth.
>
> On Thu, Sep 25, 2014 at 10:43 AM, Brian Jeltema <
> brian.jelt...@digitalenvoy.net> wrote:
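For reference, the cleanup described above can be done from the HDFS command line. A minimal sketch, assuming the /apps/hbase root and table path reported in this thread; as noted above, only delete once the exported snapshot is confirmed as the source of truth, and run hbck afterwards:

```shell
# Inspect the leftover restore data under the HBase temporary directory
# (path taken from this thread; confirm hbase.rootdir for your cluster).
hadoop fs -ls /apps/hbase/data/.tmp/data/default

# Remove the stale table directory, then re-check consistency.
hadoop fs -rm -r /apps/hbase/data/.tmp/data/default/Foo
hbase hbck
```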
Thank you Ted. No, I do not plan to use bulk loading since the data is
incremental in nature.
On Thu, Sep 25, 2014 at 11:36 AM, Ted Yu wrote:
> For #1, do you plan to use bulk load ?
>
> For #3, take a look at HBASE-5416 which introduced essential column family.
> In your query, you can designate the smaller column family as essential
> column family where smaller columns are queried.
bq. is it safe to delete that stuff?
Yes. You have the exported snapshot as source of truth.
On Thu, Sep 25, 2014 at 10:43 AM, Brian Jeltema <
brian.jelt...@digitalenvoy.net> wrote:
>
> > Does hbck report any inconsistency ?
>
> Not for the table in question. There are inconsistencies in an unrelated
> table.
> Does hbck report any inconsistency ?
Not for the table in question. There are inconsistencies in an unrelated table.
I do see related content in:
/apps/hbase/data/.tmp/data/default/Foo
is it safe to delete that stuff?
For #1, do you plan to use bulk load ?
For #3, take a look at HBASE-5416 which introduced essential column family.
In your query, you can designate the smaller column family as essential
column family where smaller columns are queried.
Cheers
On Thu, Sep 25, 2014 at 9:57 AM, Nishanth S wrote:
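As a hedged illustration of the HBASE-5416 behaviour described above (table, family, and qualifier names are made up): when a scan's filter touches only one column family, that family is treated as essential, and the other, larger family is loaded lazily, only for rows that pass the filter.

```shell
# hbase shell: filter on the small family; data in the large family is
# fetched only for matching rows (HBASE-5416). Names are hypothetical.
scan 'mytable', {FILTER =>
  "SingleColumnValueFilter('small', 'status', =, 'binary:active')"}
```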
Does hbck report any inconsistency ?
Cheers
On Thu, Sep 25, 2014 at 9:52 AM, Brian Jeltema <
brian.jelt...@digitalenvoy.net> wrote:
> Can’t drop it. HBase doesn’t think the table exists.
>
> On Sep 25, 2014, at 12:50 PM, Ted Yu wrote:
>
> > You can drop the table (run hbck afterwards if necessary).
Hi everyone,
This question may have been asked many times, but I would really appreciate
it if someone could help me figure out how to go about this.
Currently my HBase table consists of about 10 columns per row, which in
total has an average size of 5K. Most of the size is held by one
particular column
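To illustrate the two-column-family layout discussed in this thread (table and family names are hypothetical): the large column can live in its own family, so that queries touching only the small columns avoid reading it, and per Ted's reply above there should be no write-performance impact from the second family.

```shell
# hbase shell: one family for the many small columns,
# one for the single large column.
create 'mytable', 'small', 'large'
```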
Can’t drop it. HBase doesn’t think the table exists.
On Sep 25, 2014, at 12:50 PM, Ted Yu wrote:
> You can drop the table (run hbck afterwards if necessary).
> Then restore again.
>
> If it hangs again, please capture stack trace.
>
> Cheers
>
> On Thu, Sep 25, 2014 at 9:32 AM, Brian Jeltema
You can drop the table (run hbck afterwards if necessary).
Then restore again.
If it hangs again, please capture stack trace.
Cheers
On Thu, Sep 25, 2014 at 9:32 AM, Brian Jeltema <
brian.jelt...@digitalenvoy.net> wrote:
> The table did not exist on the target cluster when I tried the first
> restore_clone.
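A minimal sketch of the drop-and-retry sequence suggested above. The snapshot name is an assumption, and re-creating the table from an exported snapshot may use clone_snapshot rather than restore_snapshot, depending on how the restore is being driven:

```shell
# In the hbase shell: remove the half-restored table.
disable 'Foo'
drop 'Foo'

# From the OS shell: verify consistency after the drop.
hbase hbck

# Back in the hbase shell: re-create the table from the exported
# snapshot (snapshot name is hypothetical).
clone_snapshot 'Foo-snapshot', 'Foo'
```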
The table did not exist on the target cluster when I tried the first
restore_clone.
Is there some way I can delete all traces of the table and start over?
On Sep 25, 2014, at 12:25 PM, Ted Yu wrote:
> It is from the following in CloneSnapshotHandler.java :
>
> Preconditions.checkArgument(!metaChanges.hasRegionsToRestore(),
>     "A clone should not have regions to restore");
It is from the following in CloneSnapshotHandler.java :
Preconditions.checkArgument(!metaChanges.hasRegionsToRestore(),
"A clone should not have regions to restore");
Was there region split prior to snapshot restore action ?
Cheers
On Thu, Sep 25, 2014 at 9:19 AM, Brian Jeltema
I exported a snapshot to another cluster, same version of all software. A
restore_snapshot on the target
system hung and eventually timed out, I think due to file ownership issues. I
restored hbase ownership
to everything in /apps/hbase and tried the restore_snapshot again. It’s still
hanging, b
HBASE-12095 was logged by Ashish.
I provided patch on that issue.
FYI
On Thu, Sep 25, 2014 at 1:45 AM, Ted Yu wrote:
> Can you try the patch below ?
>
> http://pastebin.com/SnYZQf7Y
>
> Cheers
>
> On Wed, Sep 24, 2014 at 10:36 PM, ashish singhi
> wrote:
>
>> Hi All.
>>
>> I am using 0.98.6 HBase.
Can you try the patch below ?
http://pastebin.com/SnYZQf7Y
Cheers
On Wed, Sep 24, 2014 at 10:36 PM, ashish singhi
wrote:
> Hi All.
>
> I am using 0.98.6 HBase.
>
> I observed that when I have the following value set in my hbase-site.xml
> file:
>
> <property>
>   <name>hbase.regionserver.wal.encryption</name>
>   <value>false</value>
> </property>