Hi Michael,
I was having a similar problem and following this thread for any
suggestions. I tried everything suggested and more.
I was trying to run Hadoop/HBase in pseudo-distributed mode on my Mac. I
initially started with Hadoop 0.21.0 and HBase 0.89. I had exactly
the same error that you describe.
Hi, Stack:
Thanks for the explanation. I looked at the code, and it seems that the
old region should get compacted
and data older than the TTL will get removed. I will do a test with a table
with a 10-minute TTL, insert several
regions, wait for 1 day, and see if the old records do indeed get removed.
What is your ifconfig output looking like?
On Wed, Sep 15, 2010 at 10:07 PM, Michael Scott wrote:
> Thanks for the continued advice. I am still confused by the different
> behaviors of hadoop and hbase. As I said before, I can't get hbase to work
> on any of the ports that hadoop works on, so
Thanks for the continued advice. I am still confused by the different
behaviors of Hadoop and HBase. As I said before, I can't get HBase to work
on any of the ports that Hadoop works on, so I guess Hadoop and HBase are
using different interfaces. Why is this, and can't I ask HBase to use the
interface that Hadoop uses?
On Wed, Sep 15, 2010 at 5:50 PM, Jinsong Hu wrote:
> One thing I am not clear about major compaction is that for the regions with
> a single map file,
> will hbase actually load it and remove the records older than TTL ?
Major compactions will run even with only one file, IFF that file is not
already the product of a major compaction.
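The compaction rule described here can be sketched as a toy decision function. This is an illustrative model only, not the actual HBase implementation; the field names are made up:

```python
# Toy model of the major-compaction decision for one region's store
# (illustrative; field names "major_compacted" and "created" are invented).

def needs_major_compaction(store_files, now, interval):
    """A store with multiple files always qualifies. A store with a
    single file is re-major-compacted only if that file is NOT already
    the product of a recent major compaction."""
    if not store_files:
        return False
    if len(store_files) > 1:
        return True
    f = store_files[0]
    # Single file: skip if it was itself produced by a major compaction
    # recently enough (within the major-compaction interval).
    return not (f["major_compacted"] and now - f["created"] < interval)

recent = [{"major_compacted": True, "created": 90}]
print(needs_major_compaction(recent, now=100, interval=24))  # False: skipped
```

This is why a cold region holding a single, already-major-compacted file is never rewritten, which is exactly the TTL problem discussed in this thread.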
> so what criterion does HBase use to sort the returned result? By row key?
Yes, by row key.
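Rows come back in lexicographic order of their raw byte-encoded keys, which is what makes repeated scans deterministic and makes paging possible. A minimal model in plain Python (not the HBase client API):

```python
# Model of HBase scan ordering: rows are kept sorted by the raw bytes
# of the row key, so every scan returns the same, deterministic order.
table = {
    b"row-10": "a", b"row-2": "b", b"row-03": "c",
}

def scan(table, start_row=b"", limit=None):
    keys = sorted(table)                       # lexicographic byte order
    keys = [k for k in keys if k >= start_row]
    return keys[:limit] if limit else keys

page1 = scan(table, limit=2)
# Note b"row-10" sorts before b"row-2" byte-wise, unlike numeric order.
print(page1)                                   # [b'row-03', b'row-10']
# Paging: resume just past the last key of the previous page.
page2 = scan(table, start_row=page1[-1] + b"\x00")
print(page2)                                   # [b'row-2']
```

The byte-wise ordering is also why zero-padding numeric key components matters if you want numeric and scan order to agree.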
- Andy
--- On Wed, 9/15/10, Jeff Zhang wrote:
> From: Jeff Zhang
> Subject: Does HBase guarantee return the same result when I invoke scan
> operation ?
> To: hbase-u...@hadoop.apache.org
> Date: We
Hi all,
I'd like to implement paging for my returned results, so I'd like to
know whether HBase guarantees returning the same result when I invoke a
scan operation. I know Lucene can guarantee that, because Lucene
returns results ordered by relevance. So what criterion does HBase use to
sort the returned result?
I artificially set the TTL to 10 minutes so that I can get the results quicker
and don't have to wait for one day. The TTL was set to 600
seconds (equal to 10 minutes) when I did the testing.
In a real application, the TTL will be set to several months to years.
One thing I am not clear about major compaction is that, for regions with a
single map file, will HBase actually load it and remove the records older
than the TTL?
> I did a test with 2 key structures: 1. time:random,
> and 2. random:time.
> The TTL is set to 10 minutes. The time is the current system
> time. The random part is a random string 2-10
> characters long.
This use case doesn't make much sense the way HBase currently works. You can
set the
Yeah, indeed the TTL feature is not broken. It works as "advertised" if you
understand how HBase internals work.
But we can accommodate the expectations communicated in this thread; they
sound reasonable.
- Andy
--- On Wed, 9/15/10, Ryan Rawson wrote:
> From: Ryan Rawson
> Subject: Re:
Hey,
If you bind to localhost you won't actually be reachable by anyone!
The question is why is your OS disallowing binds to a specific
interface/port combo?
HBase does not really run in a blended/multihomed environment...
meaning if you have multiple interfaces, you have to choose one that
we wo
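The bind failures discussed in this thread can be reproduced with a plain socket, independently of HBase. A small sketch of the OS behaviour (Python just for illustration; the unroutable address 203.0.113.1 is a TEST-NET example, not anything from this thread):

```python
import errno
import socket

# Binding to an address the machine actually owns succeeds; binding to
# one it does not own fails with EADDRNOTAVAIL, which is the kind of
# error a server like HBase surfaces at startup.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
host, port = s.getsockname()
print("bound to", host, port)
s.close()

t = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    t.bind(("203.0.113.1", 0))  # TEST-NET address, not a local interface
except OSError as e:
    print("bind failed:", errno.errorcode[e.errno])  # usually EADDRNOTAVAIL
finally:
    t.close()
```

If the master's configured hostname resolves to an address that no local interface owns, the bind fails in exactly this way.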
The scope of change needed to compile HBaseFsck.java under 0.20.x is bigger
than it used to be.
Here are the errors I got - the last 3 depend on other HBase files.
compile-core:
[javac] Compiling 2 source files to
/Users/tyu/hbase-0.20.5/build/classes
[javac]
/Users/tyu/hbase-0.20.5/src/java/org/apache
Here is the file from my earlier post which compiles in 0.20.5.
On Wed, Sep 15, 2010 at 12:50 PM, Ted Yu wrote:
> The scope of change needed to compile HBaseFsck.java under 0.20.x is bigger
> than it used to be.
> Here are the errors I got - the last 3 depend on other HBase files.
>
> compile-core:
> [ja
If you show us the errors, that would help me understand your situation
better.
HBaseFsck.java has changed a lot since I last tried to compile it.
On Wed, Sep 15, 2010 at 11:06 AM, Sharma, Avani wrote:
> Ted,
>
> I am trying to compile the file and am getting the same errors that you
> mentioned
Hi, Ryan:
I did a test with 2 key structures: 1. time:random, and 2. random:time.
The TTL is set to 10 minutes. The time is the current system time. The random
part is a random string 2-10 characters long.
I wrote a test program to continuously pump data into the HBase table, with
the time
I feel the need to pipe in here, since people are accusing HBase of
having a broken 'TTL' feature, when neither the description in this email
thread nor my own knowledge really describes a broken feature.
Non-optimal maybe, but not broken.
First off, the TTL feature works on the timestamp, th
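The mechanism at issue — TTL enforced only when store files are rewritten by a compaction, not by any background sweep — can be modelled in a few lines (an illustrative sketch, not HBase code):

```python
# Toy model of TTL enforcement at compaction time: expired cells are
# dropped only when a store file is rewritten. Files that are never
# rewritten are never examined, so expired data can linger on disk.

def compact(cells, ttl, now):
    """Rewrite a store file, keeping only cells whose timestamp is
    still within the TTL window."""
    return [c for c in cells if now - c["ts"] <= ttl]

store = [{"row": "a", "ts": 100}, {"row": "b", "ts": 900}]
print(compact(store, ttl=600, now=1000))   # only the ts=900 cell survives
# A region that never gets compacted keeps both cells indefinitely.
```

So the feature works "as advertised" per file rewrite; the surprise is only about when rewrites happen.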
I opened a ticket, https://issues.apache.org/jira/browse/HBASE-2999, to track
the issue. Dropping the old store, and updating the adjacent region's key
range when all
stores for a region are gone, is probably the cheapest solution, both in terms
of coding and in terms of resource usage in the cluster. Do we k
This sounds reasonable.
We are tracking min/max timestamps in storefiles too, so it's possible that we
could expire some files of a region as well, even if the region was not
completely expired.
Jinsong, mind filing a jira?
JG
> -----Original Message-----
> From: Jinsong Hu [mailto:jinsong...
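The per-file min/max timestamp idea — dropping a whole store file without rewriting it when even its newest cell is expired — could look roughly like this (a sketch of the idea only, with invented field names, not actual HBase code):

```python
# Sketch: if even the newest cell in a file (max_ts) is past the TTL,
# the whole file can be deleted without reading a single cell.

def expirable_files(store_files, ttl, now):
    return [f for f in store_files if now - f["max_ts"] > ttl]

files = [
    {"name": "sf1", "min_ts": 0,   "max_ts": 300},
    {"name": "sf2", "min_ts": 350, "max_ts": 950},
]
print([f["name"] for f in expirable_files(files, ttl=600, now=1000)])  # ['sf1']
```

This is much cheaper than a compaction, since it is pure metadata plus a file delete.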
Ted,
I am trying to compile the file and am getting the same errors that you
mentioned, and more:
[javac] symbol : method
metaScan(org.apache.hadoop.conf.Configuration,org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor)
[javac] location: class org.apache.hadoop.hbase.client.MetaScanner
Yes, the current compaction-based TTL works as advertised if the key
randomly distributes the incoming data
among all regions. However, if the key is designed in chronological order,
the TTL doesn't really work, as no compaction
will happen for data already written. So we can't say that the current TTL
implementation always works.
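The difference between the two key layouts comes down to how writes spread across regions. A toy model with hypothetical split points (plain Python; the splits and alphabet are invented for illustration):

```python
import random
from collections import Counter

random.seed(0)

def region_for(key, splits):
    # Index of the region whose key range contains this key.
    return sum(key >= s for s in splits)

splits = ["g", "n", "t"]   # hypothetical split points -> 4 regions

# time:random keys: all recent writes share the same leading prefix,
# so they pile into one region and older regions go cold (no compaction).
time_keys = [f"{t:08d}:{random.choice('abcxyz')}" for t in range(1000, 1100)]

# random:time keys: writes spread across all regions, so every region
# keeps flushing and compacting, and TTL expiry keeps happening.
rand_keys = [f"{random.choice('abcdefghijklmnopqrstuvwxyz')}:{t:08d}"
             for t in range(1000, 1100)]

print(Counter(region_for(k, splits) for k in time_keys))  # one hot region
print(Counter(region_for(k, splits) for k in rand_keys))  # spread out
```

With the time-prefixed layout, the regions holding old data receive no new writes, so nothing ever triggers the compaction that would expire them.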
Hi again,
I think the HBase master server is not starting because it is attempting to
open port 6 on its public IP address, rather than using localhost. I
cannot seem to figure out how to force it (well, configure it) to attempt to
bind to localhost:6 instead. As far as I can see, this
On Wed, Sep 15, 2010 at 9:54 AM, Jinsong Hu wrote:
> I have tested the TTL for hbase and found that it relies on compaction to
> remove old data . However, if a region has data that is older
> than TTL, and there is no trigger to compact it, then the data will remain
> there forever, wasting disk
I have tested the TTL for HBase and found that it relies on compaction to
remove old data. However, if a region has data that is older
than the TTL, and there is no trigger to compact it, then the data will remain
there forever, wasting disk space and memory.
It appears at this state, to really re
Hi Jilil,
I am new to HBase. I used filters on three different columns of a single
table, as shown below. The three filters are AND-ed together (all must pass).
It works well for me. Not sure whether it would help you or not.
FilterList filterList = new FilterList();
if (minClic
Hi
I am using Thrift to get data from a remote HBase, and using the
client.get(tableName.getBytes(), rowKey.getBytes(),
"ColFamily:colName".getBytes()) method to get the data for particular
columns. I am using a composite row key in the format of *
DataTime_CategoryName_ProductName*. Now my question is th
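With a composite key like DataTime_CategoryName_ProductName, range queries on the leading component fall straight out of the sort order; trailing components can only be matched by prefix or filtered. A small model of a prefix scan (plain Python with made-up example rows, not the Thrift API):

```python
# Model of a prefix scan over composite row keys: because rows are
# stored in sorted order, a scan bounded by prefix .. prefix+'\xff'
# returns exactly the rows sharing that leading component.
rows = sorted([
    "20100915_books_hamlet",
    "20100915_music_abbey",
    "20100916_books_iliad",
])

def prefix_scan(rows, prefix):
    return [r for r in rows if r.startswith(prefix)]

print(prefix_scan(rows, "20100915_"))
# ['20100915_books_hamlet', '20100915_music_abbey']
```

This is why putting the component you most often range-query first in the composite key matters.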
Hello all,
I am trying to use HBase for a project. The database will hold
billions of rows.
I am struggling to understand how filters work. In particular, I want
to be able to scan for rows that contain all of these columns, for
example:
numbers:three & numbers:five & numbers:seven
I tried t
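The "row must contain all of these columns" requirement is what a FilterList combining column filters with MUST_PASS_ALL expresses. The semantics can be sketched like this (an illustrative model, not the HBase API):

```python
# Model of FilterList.Operator.MUST_PASS_ALL semantics: a row is kept
# only if every required column is present in it.

def must_pass_all(row, required_cols):
    return all(c in row["columns"] for c in required_cols)

rows = [
    {"key": "r1", "columns": {"numbers:three", "numbers:five", "numbers:seven"}},
    {"key": "r2", "columns": {"numbers:three", "numbers:five"}},
]
wanted = {"numbers:three", "numbers:five", "numbers:seven"}
print([r["key"] for r in rows if must_pass_all(r, wanted)])   # ['r1']
```

The FilterList snippet earlier in this thread builds exactly this kind of AND combination out of per-column filters.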