You need to enable LZO compression on the target table (the table you are
importing into), but I assume you did that.
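If the target table was created without compression, it can be switched on from the HBase shell before importing; the table and column-family names below are placeholders, not names from this thread:

```
# Run inside `hbase shell`; 'mytable' and 'cf' are placeholder names.
# Older HBase versions require disabling the table before altering it.
disable 'mytable'
alter 'mytable', {NAME => 'cf', COMPRESSION => 'LZO'}
enable 'mytable'
```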
- Original Message -
From: Lord Khan Han
To: user@hbase.apache.org
Cc:
Sent: Saturday, December 10, 2011 10:09 AM
Subject: Re: Hbase export / import Why doubling the Table
Hi,
I'm scanning a relatively large table stored in HBase using pig.
I've got a column family named event_data with 3 columns (tab_event_string,
date and Id).
The table is indexed by a key which contains an event code and a timestamp.
Nothing special about this table except for the fact that it is rel
Mikhail,
Thanks for the response. So, to summarize: compaction requests in the
compaction queue of the failed regionserver are lost and are not picked up when
the regions are reassigned to another regionserver. So if we lose a
regionserver during a major compaction, the regions that had not yet
Hi Andy,
Compaction queues are not persisted between regionserver restarts, and the
results of an incomplete compaction are discarded. Compactions write into
an HFile in a temporary location and only move it to the region/CF
directory in case of success (at least, this is how it works in trunk).
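The temp-then-move pattern described above can be sketched in a few lines of shell; the directory names are illustrative, not HBase's actual on-disk layout:

```shell
# Minimal sketch of write-to-temp-then-move (illustrative paths only).
workdir=$(mktemp -d)
mkdir -p "$workdir/.tmp" "$workdir/region/cf"

# 1. The compacted file is written into a temporary location first.
echo "compacted data" > "$workdir/.tmp/hfile.new"

# 2. Only on success is it moved into the region/CF directory; a crash
#    before this point leaves region/cf untouched, so an incomplete
#    compaction is simply discarded.
mv "$workdir/.tmp/hfile.new" "$workdir/region/cf/hfile"

ls "$workdir/region/cf"
rm -rf "$workdir"
```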
I was wondering if someone could tell me what happens if a regionserver
fails during major compaction. Are the entries that were in the compaction
queue for the failed regionserver migrated along with the regions to another
server or are those requests for major compaction effectively lost
When we export from an HBase table that has LZO compression enabled, is the
exported file decompressed, or does it keep the LZO-compressed columns as-is?
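For reference, the stock Export tool reads cells through the normal client API, so the table's on-disk LZO compression does not carry over by itself; if the exported SequenceFiles should be compressed, that has to be requested through the MapReduce job settings. A hedged sketch, with a placeholder table name and output path:

```
# Placeholder table name and HDFS path; compression of the exported
# SequenceFiles is controlled by the MapReduce output settings.
hbase org.apache.hadoop.hbase.mapreduce.Export \
  -D mapred.output.compress=true \
  mytable /export/mytable
```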
On Sat, Dec 10, 2011 at 6:40 PM, Lord Khan Han wrote:
It is a success for both LZO and Snappy. The content is an HTML document (a
web document).
hbase org.apache.hadoop.hbase.util.CompressionTest
hdfs://localhost:8020/user/root/testfile.lzo lzo
11/12/10 18:37:04 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
11/12/10 18:37:04 INFO lzo.LzoCodec: Su
I tried to run the program from Eclipse, but while it was running I could not
see any job on the JobTracker/TaskTracker web UI pages. I observed that Eclipse
is executing the job with LocalJobRunner, so the job is not submitted to the
whole cluster but executes on the name node machine alone.
S
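When a job is started from Eclipse without the cluster configuration on the classpath, Hadoop falls back to LocalJobRunner. One common workaround (assuming a standard Hadoop install; the jar and class names below are placeholders) is to package the job as a jar and submit it from a node where the cluster's `*-site.xml` files are on the classpath:

```
# Placeholder jar/class names; run where the cluster configuration is
# visible so the job is submitted to the JobTracker, not LocalJobRunner.
hadoop jar myjob.jar com.example.MyJob /input /output
```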
Could you use the CompressionTest to verify that the library path is set up
properly?
$ hbase org.apache.hadoop.hbase.util.CompressionTest
hdfs://:8020//test.lzo lzo
Does it report OK? Same for Snappy? The reason I am asking is that when it does
not find the native libs it uses no compression a
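To rule out a missing native library, the library path can be set explicitly before rerunning the test; the directory below is a placeholder for wherever the LZO/Snappy native libraries are actually installed, and the HDFS host is likewise a placeholder:

```
# Placeholder native-lib directory and namenode host.
export HBASE_LIBRARY_PATH=/usr/lib/hadoop/lib/native/Linux-amd64-64
hbase org.apache.hadoop.hbase.util.CompressionTest \
  hdfs://namenode:8020/tmp/test.lzo lzo
```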
Just to add to the following, if it helps in understanding the problem
better...
The build works fine with the default 'LocalFileSystem'. Further, the scans
seem to work fine when the region is scanned in a coprocessor. It throws
the exception shown in the following email only when scanned at clie