Jerry,
Can you elaborate on what you mean by "export table to hdfs"? I
initially tried running the export on the src cluster (-copy-to
hdfs://dest/hbase ), but it complains while trying to write the data to the
dest cluster (due to the HDFS protocol version mismatch). Then I tried running
export on
ok, so you have an empty reference file under .snapshot/SnapshotName/...
What you can do is find all the reference files and replace them with the
version in /hbase/.archive.
You can find the reference files because they are in the form name.regionName
(they are the only files with a dot in the middle).
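Something along these lines should turn them up (a rough sketch only: SnapshotName,
table and file names are placeholders, and the 0.94-era layout used in this thread
is assumed):

# 1) list everything under the snapshot and keep the files with a dot in the middle
hdfs dfs -ls -R /hbase/.snapshot/SnapshotName | awk '{print $NF}' | grep -E '/[^/]+\.[^/]+$'

# 2) for each empty reference file, copy the matching hfile from the archive over it
#    (the archive path below is illustrative, not exact)
hdfs dfs -cp /hbase/.archive/TableName/regionName/cf/hfileName \
             /hbase/.snapshot/SnapshotName/TableName/regionName/cf/hfileName.regionName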
The Export and Import table tools in HBase. They are slower than snapshots,
though. Also, you probably need to come up with something to get around
your HDFS protocol mismatch for this as well.
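For reference, the usual invocations look roughly like this (table name, paths and
hosts are placeholders; going through hftp/webhdfs for the copy is one common way to
sidestep the HDFS RPC version mismatch):

# on the source cluster: dump the table to SequenceFiles
hbase org.apache.hadoop.hbase.mapreduce.Export 'mytable' /tmp/mytable-export

# copy the export to the destination cluster, reading over hftp to avoid the RPC mismatch
hadoop distcp hftp://source-namenode:50070/tmp/mytable-export hdfs:///tmp/mytable-export

# on the destination cluster: create the table first, then load the data back in
hbase org.apache.hadoop.hbase.mapreduce.Import 'mytable' /tmp/mytable-export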
I attach the code that I'm executing. I don't have access to the generator
to HBase.
In the last benchmark, a simple scan takes about 4 times less time than this
version.
With that version it is only possible to do complete scans.
I have been trying a complete scan of an HTable with 100,000 rows and it
I’ve been successfully moving snapshots from 0.94 to 0.98 using webhdfs. On the 0.94
cluster:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snappy
-copy-to webhdfs://host-on-98-cluster/apps/hbase/data -mappers 12
and then manually fixing the file system layout.
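For what it's worth, the layout fix-up I mean is roughly this (directory names are
from memory for 0.94 vs 0.98 and the default namespace is assumed; verify against
what actually lands on disk before moving anything):

# snapshot metadata: .snapshot (0.94 naming) -> .hbase-snapshot (0.96/0.98 naming)
hdfs dfs -mv /apps/hbase/data/.snapshot /apps/hbase/data/.hbase-snapshot

# archived hfiles: .archive/<table> (0.94) -> archive/data/default/<table> (0.96/0.98)
hdfs dfs -mkdir -p /apps/hbase/data/archive/data/default
hdfs dfs -mv /apps/hbase/data/.archive/mytable /apps/hbase/data/archive/data/default/mytable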
On Sep 16,
Hi guys,
I'm reading these days about migrating from 0.94 to 0.98 and it seems not
so easy.
Unfortunately, I still have to upgrade from 0.92.1-cdh4.1.2 to 0.98 on
CDH5. Which is the best way to migrate my data in this specific case?
Best,
Flavio
See this thread:
http://search-hadoop.com/m/DHED43opBM1/Hbase+0.92+to+0.98+subj=Re+0+92+gt+0+96+in+one+go+
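For the 0.94-to-0.96/0.98 hop specifically, the on-disk migration is driven by the
upgrade tool; roughly (a sketch only, run with the cluster shut down, and
double-checked against your distribution's docs):

hbase upgrade -check     # report whether the data can be migrated to the new layout
hbase upgrade -execute   # perform the actual migration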
On Sep 16, 2014, at 6:11 AM, Flavio Pompermaier pomperma...@okkam.it wrote:
Hi guys,
I'm reading these days about migrating from 0.94 to 0.98 and it seems not
so easy.
Unfortunately,
One row can contain multiple KeyValues.
Did you observe performance degradation on your cluster?
Cheers
On Sep 16, 2014, at 2:38 AM, Blade Liu hafzc...@gmail.com wrote:
Hi folks,
I feel a little confused about KV mapping. Suppose there are two inserts,
(t, row1, cf1, column1)-v1, (t,
Replying to this thread is getting bounced as spam. Here's the reply I sent
yesterday.
On Mon, Sep 15, 2014 at 7:52 PM, Nick Dimiduk ndimi...@gmail.com wrote:
The explicit JAVA_HOME requirement is new via HBASE-11534.
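(In case anyone else hits it, a minimal sketch of the fix; the JDK path below is
just an example, point it at whatever your system uses:)

# in conf/hbase-env.sh, or exported in the shell before starting HBase
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk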
On Mon, Sep 15, 2014 at 3:16 AM, 牛兆捷 nzjem...@gmail.com wrote:
It works
Hi Flavio,
Since you're using a vendor-supplied distribution of HBase, here are some
docs from that vendor:
* http://tiny.cloudera.com/cdh4_to_cdh5_upgrade
* http://tiny.cloudera.com/cdh4_to_cdh5_hbase_upgrade
If something goes wrong with those instructions, I'd recommend using
vendor-specific
Hi,
I can't find mention of this issue on the Jira. Is it known? I think that
if a split of the HFiles is required, LoadIncrementalHFiles should create
the new HFiles with the correct permissions to be bulk-loaded. Currently it
just hangs because the permissions are wrong.
Here is how I
are you using the SecureBulkLoadEndpoint? that should take care of
permissions
http://hbase.apache.org/book/hbase.secure.bulkload.html
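Roughly, the setup looks like this (a sketch from memory; check the book link above
for the authoritative property names):

# in hbase-site.xml on the region servers:
#   hbase.bulkload.staging.dir = /tmp/hbase-staging   (a staging dir HBase can write to)
#   hbase.coprocessor.region.classes should include
#     org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint

# then the bulk load itself is the usual invocation (HFile dir and table are placeholders)
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /path/to/hfiles mytable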
Matteo
On Tue, Sep 16, 2014 at 2:26 PM, Daisy Zhou da...@wibidata.com wrote:
Hi,
I can't find mention of this issue on the Jira. Is it known? I think that
Just a word about some meetups happening this month and next:
On 9/25 at Continuuity in Palo Alto [1], the lads from Continuuity will be
making some nice announcements (some more open-sourcing I believe) with
some sweet demos to boot. Sang Li of FlipBoard will talk about HBase @
Flipboard and
Hi,
We are using the HBase batch API, and with 0.98.1 we get the following exception
when using batch() with Increment:
org.apache.hadoop.hbase.exceptions.OperationConflictException: The operation
with nonce {5266048044724982303, 5395957753774586342} on row
[rowkey13-20140331] may have
aha. Thanks~
2014-09-17 1:57 GMT+08:00 Nick Dimiduk ndimi...@gmail.com:
Replying to this thread is getting bounced as spam. Here's the reply I
sent yesterday.
On Mon, Sep 15, 2014 at 7:52 PM, Nick Dimiduk ndimi...@gmail.com wrote:
The explicit JAVA_HOME requirement is new via