Hi Mark,
Based on this exception:
1. 1862 [pool-2-thread-1] WARN
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Possibly
transient ZooKeeper, quorum=mark-7:2181,
exception=org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /
I downloaded hbase 0.96 and ran it in standalone mode by following the
instructions here: http://hbase.apache.org/book/quickstart.html
But I get the following error; could anyone help with this?
2014-01-14 20:51:25,243 WARN [main] zookeeper.ZKUtil: clean znode for
master, quorum=localhost:2181, base
This is probably our most important bit of doc so I'd like to fix it.
Please list the steps you followed. Did you have a previous instance of
HBase up at any time? Did you restart the server? Was there a zk process
running at the time of the above start?
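One quick way to check for a stale process (assuming a JDK with jps on
your PATH):
  $ jps
  # a leftover HMaster or HQuorumPeer from a previous run would show up
  # here and could still be holding port 2181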
Thank you,
St.Ack
On Tue, Jan 14, 2014 at
Please take a look at HBASE-8089, which is an umbrella JIRA.
Some of its subtasks are in 0.96
bq. claiming that short keys (as well as short column names) are relevant
bq. Is that also true in 0.94.x?
That is true in 0.94.x
Cheers
On Tue, Jan 14, 2014 at 6:56 AM, Henning Blohm wrote:
> Hi,
>
>
Hi,
Does anyone have any experience rebuilding an HBase table to reduce the
number of regions? I am currently dealing with a situation where the number
of regions per RS has gone up quite significantly (500 per RS), thereby
causing some performance issues. This is how I am thinking of bringing it
Hi,
for an application still running on HBase 0.90.4 (but moving to 0.94.6)
we are thinking about using more efficient composite row keys compared to
what we use today (fixed-length strings with a "/" separator).
I ran into http://hbase.apache.org/book/rowkey.design.html claiming that
short keys
If you can afford downtime for your table, there are ways to do it. You can:
- Merge regions (requires the table to be disabled, at least in some older
versions and probably in newer ones too)
- Go brute force by doing an export, truncate, import (this is a little more
manageable when you have a large
I have never tried this before but I think the following should work:
1. Alter your table:
hbase> alter 't1', METHOD => 'table_att', MAX_FILESIZE => '50'
(place your own number here)
2. Merge regions:
http://hbase.apache.org/book/ops.regionmgt.html
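For reference, the offline merge tool described on that page is invoked
like this (a sketch; HBase must be fully shut down first, per the note
further down this thread):
  $ bin/hbase org.apache.hadoop.hbase.util.Merge <tablename> <region1> <region2>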
On Tue, Jan 14, 2014 at 7:21 AM, U
Upender:
For 15.2.2 Merge, please note the following condition:
LOG.info("Verifying that HBase is not running...");
try {
  HBaseAdmin.checkHBaseAvailable(getConf());
  LOG.fatal("HBase cluster must be off-line.");
Cheers
On Tue, Jan 14, 2014 at 10:40 AM, Vladimir Rodionov
wrote
Nice, Ted. Is there any reason why we can't do it online?
On Tue, Jan 14, 2014 at 10:47 AM, Ted Yu wrote:
> Upender:
> For 15.2.2 Merge, please note the following condition:
>
> LOG.info("Verifying that HBase is not running...");
> try {
> HBaseAdmin.checkHBaseAvailable(getConf())
HBASE-7403 provided online merge capability.
The usage is:
HBaseAdmin#mergeRegions
Note: online merge is in 0.96 and above releases.
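A minimal sketch of calling it from Java against 0.96 (the two encoded
region names below are placeholders; take the real ones from the master
web UI or the meta table):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HBaseAdmin;
  import org.apache.hadoop.hbase.util.Bytes;

  Configuration conf = HBaseConfiguration.create();
  HBaseAdmin admin = new HBaseAdmin(conf);
  try {
    // encoded region names are the hash suffix of the full region name
    byte[] regionA = Bytes.toBytes("d512acb5750ca2b2ea06b09c9a72f2bd");
    byte[] regionB = Bytes.toBytes("5be4b4b9e4b81e85f9c2f2a46a625b94");
    // forcible = false: only adjacent regions may be merged
    admin.mergeRegions(regionA, regionB, false);
  } finally {
    admin.close();
  }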
On Tue, Jan 14, 2014 at 11:00 AM, Vladimir Rodionov
wrote:
> Nice, Ted. Is there any reason why we can't do it online?
>
>
> On Tue, Jan 14, 2014 at 10:47 AM, Te
Hi Henning,
My favorite implementation of efficient composite row keys is Phoenix. We
support composite row keys whose byte representation sorts according to the
natural sort order of the values (inspired by Lily). You can use our type
system independently of querying/inserting data with Phoenix, the
I am trying to understand the interaction of sequenceId and timestamps for
KVs, and what was the real issue behind
https://issues.apache.org/jira/browse/HBASE-6590 which says that bulkload
can be used to update only historical data and not current data.
Taking an example:
Let's say I have a K
Please take a look at the following method in KeyValueHeap#KVScannerComparator:
public int compare(KeyValueScanner left, KeyValueScanner right) {
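  // (sketch of the body, paraphrased from the 0.94 source)
  // compare the scanners' current KeyValues first
  int comparison = compare(left.peek(), right.peek());
  if (comparison != 0) {
    return comparison;
  }
  // equal keys: the scanner with the higher sequence id (the newer
  // store file, or the memstore) sorts first, so newer entries win
  long leftSequenceID = left.getSequenceID();
  long rightSequenceID = right.getSequenceID();
  if (leftSequenceID > rightSequenceID) {
    return -1;
  } else if (leftSequenceID < rightSequenceID) {
    return 1;
  }
  return 0;
}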
Cheers
On Tue, Jan 14, 2014 at 3:26 PM, Ishan Chhabra wrote:
> I am trying to understand the interaction of sequenceId and timestamps for
> KVs
Thanks for pointing out the code. My understanding is correct.
Thanks!
On Tue, Jan 14, 2014 at 3:40 PM, Ted Yu wrote:
> Please take a look at the following method in
> KeyValueHeap#KVScannerComparator
> :
>
> public int compare(KeyValueScanner left, KeyValueScanner right) {
>
> Cheers
>
>
Hey there,
re: "efficient, correctly ordered, byte[] serialized composite row keys?"
I was the guy behind 7221 and that patch had the first part and the last
part, but not the middle part (correctly ordered), because the patch
relied on the HBase built-in implementations which have the aforemen
I went back from 01-13 all the way
to hbase-lilaifeng-master-CentOS.log.2014-01-07
I saw:
2014-01-07 17:46:05,700 DEBUG
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
Cached location for .META.,,1.1028785192 is Slave2:60020
Do you have the region server log for Slave2?
We use hbase-0.94.10 and encountered a corrupt HFile.
2014-01-11 23:24:16,547 DEBUG org.apache.hadoop.hbase.util.FSUtils:
Creating
file=hdfs://dump002002.cm6:9000/hbase-0.90/cbu2/735414b148ed70e79f4c0406963bb0c9/.tmp/8a4869aafeae43ee8294bf7b65b92e63
with permission=rwxrwxrwx
2014-01-11 23:24:16,550 I
Which hadoop release are you using?
How many HFiles were corrupted?
Does LZO work properly, or has it never worked?
Thanks
On Tue, Jan 14, 2014 at 7:36 PM, 宾莉金 wrote:
> We use hbase-0.94.10 and encountered a corrupt HFile.
>
>
> 2014-01-11 23:24:16,547 DEBUG org.apache.hadoop.hbase.util.FSUtils:
Which version of Hadoop?
If you get a data-center-wide power outage you can lose data.
In Hadoop 1.1.1 or later you can force a sync on block close, and thus you
at least won't lose any old data (i.e. HFiles that were recently rewritten
due to compactions).
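The hdfs-site.xml setting in question (from HDFS-1539; check that your
build carries it) is:
  <property>
    <name>dfs.datanode.synconclose</name>
    <value>true</value>
  </property>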
I have blogged about that here:
http://had
Hi Stack,
here are the steps I used:
1. Download hbase 0.96 and untar it
2. Edit hbase-site.xml as follows:
<property>
  <name>hbase.rootdir</name>
  <value>/var/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/var/zookeeper</value>
</property>
3. Start hbase (bin/start-hbase.sh)
Environments:
M
We use cdh4.3.0's hadoop, and many tables use LZO; we just found one
corrupt hfile.
2014/1/15 Ted Yu
> Which hadoop release are you using?
>
> How many HFiles were corrupted?
>
> Does LZO work properly, or has it never worked?
>
> Thanks
>
>
> On Tue, Jan 14, 2014 at 7:36 PM, 宾莉金 wrote:
>
>
hi, ted:
We use cdh4.3.0's hadoop, and many other tables use LZO; we just found one
corrupt hfile.
hi, lars:
We use cdh4.3.0's hadoop and didn't have a power outage; the cluster
performs well. The ERROR continued for several days until we found it today.
2014/1/15 lars hofhansl
> Which ver
Hi Folks
We have a table with fixed pattern row key design, the format for the row
key is YEAR_COUNTRY_randomNumber, for example:
20140101_EN_1
20140101_EN_2
20140101_EN_3
20140101_US_1
20140101_US_2
20140101_US_3
...
Is there a way I can quickly get the data for "20140101_EN_*" by using Scan
wi
Please take a look at
http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
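A minimal sketch of its use against the 0.94 API (this assumes all row keys
have the same, fixed length, which FuzzyRowFilter requires; a mask byte of
0 means the byte at that position must match, 1 means it may be anything):

  import java.util.Arrays;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.hadoop.hbase.util.Pair;

  // match any date but a fixed country, i.e. "????????_EN_?"
  byte[] rowPattern = Bytes.toBytes("00000000_EN_0");
  byte[] mask = {1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1};
  Scan scan = new Scan();
  scan.setFilter(new FuzzyRowFilter(
      Arrays.asList(new Pair<byte[], byte[]>(rowPattern, mask))));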
Cheers
On Jan 14, 2014, at 10:16 PM, Ramon Wang wrote:
> Hi Folks
>
> We have a table with fixed pattern row key design, the format for the row
> key is YEAR_COUNTRY_randomNumber, for
Hi Ted
Thanks for the quick reply.
With this FuzzyRowFilter, do I still need to pass in startRow and stopRow
like below when constructing a Scan object?
> Scan(byte [] startRow, byte [] stopRow)
Will the FuzzyRowFilter give us performance like a direct get by row
when we pass something li
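For a query like "20140101_EN_*", where the fixed part is a leading prefix,
you may not need the fuzzy filter at all; bounding the scan is enough (a
sketch, imports as in the earlier snippet):

  // startRow = the prefix, stopRow = the prefix with its last byte + 1
  byte[] startRow = Bytes.toBytes("20140101_EN_");
  byte[] stopRow = Arrays.copyOf(startRow, startRow.length);
  stopRow[stopRow.length - 1]++;
  Scan scan = new Scan(startRow, stopRow);

And even with FuzzyRowFilter it is worth setting startRow/stopRow when you
know a bound, since the filter only skips within whatever range the scan
covers.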