Yes, Stack. Thank you very much!
On Tue, Jan 25, 2011 at 1:35 AM, Stack wrote:
> Sorry, do you mean build in the below?
>
> For how to checkout hadoop, see this page:
> http://hadoop.apache.org/common/version_control.html
>
> To get the append branch, you'd use the URL
> http://svn.apache.org/r
Not in my experience.
I use [dig -x 10.1.2.3] to test whether reverse resolution is working for host IP
10.1.2.3. Test on the box itself and on another box. Then [nslookup
$hostname] works to check forward resolution (ping works as well but
gives less information about why it gives the answer it g
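The same two checks can be sketched in Java, which exercises the resolver the JVM (and HBase) actually sees. `DnsCheck` and its method names are invented for illustration:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch only: forward and reverse lookups through the system resolver,
// mirroring the [nslookup $hostname] and [dig -x <ip>] checks above.
public class DnsCheck {

    // Forward resolution: hostname -> IP address. Returns null on failure.
    public static String forward(String hostname) {
        try {
            return InetAddress.getByName(hostname).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    // Reverse resolution: IP address -> hostname. If no PTR record exists,
    // getCanonicalHostName() falls back to returning the IP literal itself.
    public static String reverse(String ip) {
        try {
            InetAddress addr = InetAddress.getByName(ip);
            return InetAddress.getByAddress(addr.getAddress()).getCanonicalHostName();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println("forward(localhost) = " + forward("localhost"));
        System.out.println("reverse(127.0.0.1) = " + reverse("127.0.0.1"));
    }
}
```

If the reverse call prints the bare IP back, the PTR record is missing, which is the symptom to look for.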
I will test it again using the filters shipped with hbase itself, and
give a report later.
Thanks
On 01/25/2011 02:30 AM, Stack wrote:
Want to try with one of the filters we ship with to see if it has the
same issue? If so, please file an issue. That's a pretty serious bug.
Thanks,
St.Ack
2011/
Hi, does anyone know of any implementation of GeoIndexing on HBase as of
yet?
If not I was thinking of writing one using CoProcessors to increment the
substrings of a GeoHash to help with "number of neighbors" and being able to
filter out points that do not have immediate clusters defined by some
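The prefix-counter idea could be sketched like this (plain Java, no coprocessor; `GeoHashPrefixes` and the sample geohashes are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: increment a counter for every prefix of a geohash, the way a
// coprocessor might maintain per-prefix counters to answer
// "number of neighbours at each precision" for a point.
public class GeoHashPrefixes {

    // Adds one to the counter of each prefix of the given geohash.
    public static Map<String, Long> incrementPrefixes(Map<String, Long> counters,
                                                      String geohash) {
        for (int len = 1; len <= geohash.length(); len++) {
            counters.merge(geohash.substring(0, len), 1L, Long::sum);
        }
        return counters;
    }

    public static void main(String[] args) {
        Map<String, Long> counters = new HashMap<>();
        incrementPrefixes(counters, "dr5ru");
        incrementPrefixes(counters, "dr5rv");
        // Both points share the 4-character cell "dr5r", so its count is 2.
        System.out.println(counters.get("dr5r"));  // 2
    }
}
```

In an HBase coprocessor the `merge` call would become an `Increment` against a counters table, one column per prefix length.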
Hi,
Yup after some digging I got to HFileOutputFormat and was relieved to
know that it does support compression. Was able to add code to set
compression based on the column family's compression setting.
Will create a ticket and submit the patch after some more testing and
going over the coding g
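A hedged sketch of what reading the compression off the family descriptor might look like (0.90-era API assumed; this is not the actual patch):

```java
// Sketch only: derive the HFile compression for each column family from the
// table descriptor, instead of the single global hfile.compression key.
HTableDescriptor tableDesc = table.getTableDescriptor();
for (HColumnDescriptor family : tableDesc.getFamilies()) {
    String algo = family.getCompression().getName();  // e.g. "gz", "lzo", "none"
    // pass `algo` to the per-family HFile writer here
}
```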
On Mon, Jan 24, 2011 at 7:26 PM, Dani Rayan wrote:
> ResultScanner refscanner = table.getScanner(Bytes.toBytes("ColA")); //
> Looks expensive.
> The getScanner operation looks expensive. Am I m(i,e)ssing something?
This shouldn't be expensive. What happens under the hood is that the
client ma
What are you trying to do, Dani? There is sample code here if that's of any
good to you:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/package-summary.html#package_description
St.Ack
On Mon, Jan 24, 2011 at 7:26 PM, Dani Rayan wrote:
> Hi,
>
> Currently I am calling table.getScanner
Hi,
Currently I am calling table.getScanner each time to reset the cursor to the
initial row.
My code is something like this:
while (true)
{
/*
* I need a cursor to first row each time.
* Also, I tried storing the ResultScanner into a temp obj to avoid calling
table.getScanner. It didn't work
*
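For what it's worth, a hedged sketch of the reset-by-reopening pattern against the 0.89/0.90 client (`conf`, the table name, and the loop condition are placeholders):

```java
// Sketch only: re-open a scanner each time the cursor must return to the
// first row, and close it when the pass is done. Opening a scanner is cheap
// on the client side; the server work happens as rows are fetched.
HTable table = new HTable(conf, "myTable");          // hypothetical table name
while (moreWorkToDo()) {                             // hypothetical condition
    Scan scan = new Scan();                          // starts at the first row
    scan.addFamily(Bytes.toBytes("ColA"));
    ResultScanner scanner = table.getScanner(scan);
    try {
        for (Result r : scanner) {
            process(r);                              // hypothetical per-row work
        }
    } finally {
        scanner.close();                             // release server-side state
    }
}
```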
Never mind my second question; the host command output seems to match the
14 additional region servers I have. 14 of them could not be resolved.
Would it make a difference if I provided the IPs directly in the regionservers
file?
On Mon, Jan 24, 2011 at 6:51 PM, charan kumar wrote:
> Hi Ted,
>
>
Hi Ted,
After resetting the ZK data, I now have 6 more region servers. :-(
It has to be mentioned, though, that after cleaning up, HBase took forever to
start.
If you don't mind, can you let me know the exact steps to verify that
reverse DNS is set up fine?
Thanks,
Charan
On Mon, Jan 24
Thanks, that fixed my problem. I knew it had to be something I did or didn't
do. I had just copied the code from one of my working apps but its tables
didn't have any columns with empty values.
Thanks again.
-Pete
-Original Message-
From: Ryan Rawson [mailto:ryano...@gmail.com]
Sent:
if you look at the code, in the top of SingleColumnValueFilter, the
javadoc says:
"To prevent the entire row from being emitted if the column is not
found on a row, use setFilterIfMissing. Otherwise, if the column is
found, the entire row will be emitted only if the value passes. If the
value fail
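A hedged sketch of the behaviour that javadoc describes (family, qualifier, and value are invented; 0.89/0.90-era API assumed):

```java
// Sketch only: emit a row only when the "info:bot" column exists AND equals
// the expected value. Without setFilterIfMissing(true), rows that lack the
// column pass the filter and are returned anyway.
SingleColumnValueFilter filter = new SingleColumnValueFilter(
    Bytes.toBytes("info"),               // hypothetical family
    Bytes.toBytes("bot"),                // hypothetical qualifier
    CompareFilter.CompareOp.EQUAL,
    Bytes.toBytes("expected"));          // hypothetical value
filter.setFilterIfMissing(true);         // drop rows where the column is absent
Scan scan = new Scan();
scan.setFilter(filter);
```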
In a scan I setup a filter as follows...
final FilterList filterList = new FilterList();
final String botFilterString = getFilter(BOT_VALUE);
if (botFilterString != null)
{
m_logger.info("Bot Value Filter: " + botFilterString);
final SingleColumn
CompareFilter is just an abstract base of a series of other filters
that compare specific components; what exactly are you having a
problem with?
On Mon, Jan 24, 2011 at 3:53 PM, Peter Haidinyak wrote:
> That's the way I thought it should work. When I setup a filter with data that
> I know sho
That's the way I thought it should work. When I setup a filter with data that I
know shouldn't return any data I still get rows back.
-Original Message-
From: Ryan Rawson [mailto:ryano...@gmail.com]
Sent: Monday, January 24, 2011 3:32 PM
To: user@hbase.apache.org
Subject: Re: Scan Questi
no, if you provide a list of columns in your Scan query and a
particular row does not actually contain _that specific column_, then
the filter does not see anything and nothing for that row is returned.
-ryan
On Mon, Jan 24, 2011 at 3:20 PM, Peter Haidinyak wrote:
> If I have a table where some
If I have a table where some of the columns might not have values in each row,
and I do a scan with a CompareFilter.CompareOp.EQUAL type filter on one of
those columns, will the scan bring back rows where there is no value in the
column I am comparing on?
Thanks
-Pete
lzop problem...I guess 1 month is enough to forget how hard it is to get a
new install up and running.
Thanks.
On Mon, Jan 24, 2011 at 5:40 PM, Wayne wrote:
> We have gotten past duplicate region listings, and now when creating a new
> table we get a NoServerForRegionException error in the shell.
We have gotten past duplicate region listings, and now when creating a new
table we get a NoServerForRegionException error in the shell. We see all 10
nodes up and running and we can see the new tables, but the UI does not
respond in showing anything for these new tables and they do not show up in
the
Thanks Xavier. I'll give that a shot.
Norbert
On Mon, Jan 24, 2011 at 1:33 PM, Xavier Stevens wrote:
> Not sure if there is a way to do that. You could get a really rough
> estimate if you did the job I described and subtracted the total bytes
> calculated for the records from the "hadoop fs -
Yes. There is some state in ZK.
If you have a dedicated ZK cluster, you can take down ZK and simply delete
the snapshot and log. Then when you bring ZK back, it is completely clean.
On Mon, Jan 24, 2011 at 12:38 PM, charan kumar wrote:
> How does master know that these servers are running,? A
It's a bug that it's needed, but yeah, what Ted says.
St.Ack
On Mon, Jan 24, 2011 at 12:59 PM, Ted Dunning wrote:
> Yes. It is a requirement.
>
> On Mon, Jan 24, 2011 at 12:27 PM, Wayne wrote:
>
>> Is reverse dns a requirement with .90? It was not with .89.xxx
>>
>> On Mon, Jan 24, 2011 at 3:17 PM
Yes. It is a requirement.
On Mon, Jan 24, 2011 at 12:27 PM, Wayne wrote:
> Is reverse dns a requirement with .90? It was not with .89.xxx
>
> On Mon, Jan 24, 2011 at 3:17 PM, Wayne wrote:
>
> > We tried to upgrade to .90 and got 2x the nodes listed and saw none of
> our
> > old regions showing
How does the master know that these servers are running? Are they in
ZooKeeper?
Could this be a case of entries in ZooKeeper getting stuck, for whatever
reason? Is there a way to clean up ZooKeeper entries?
Thanks,
Charan
On Mon, Jan 24, 2011 at 12:07 PM, charan kumar wrote:
> Hi Ted,
>Than
Is reverse DNS a requirement with .90? It was not with .89.xxx
On Mon, Jan 24, 2011 at 3:17 PM, Wayne wrote:
> We tried to upgrade to .90 and got 2x the nodes listed and saw none of our
> old regions showing up in the counts. We assumed the upgrade was not "easy"
> so we just re-formated the HDF
We tried to upgrade to .90 and got 2x the nodes listed and saw none of our
old regions showing up in the counts. We assumed the upgrade was not "easy",
so we just re-formatted HDFS thinking it would fix everything, and still
see the same problem. Any suggestions? The duplicate region servers liste
Hi Ted,
Thanks for your response.
I verified both forward and reverse DNS. Both work as expected.
I did "host " and host "" and ifconfig and
nslookup, and they all seem good to my eyes.
Let me know, if I need to look for something specific.
Thanks,
Charan
On Fri, Jan 21, 2011 at 10:32
On Mon, Jan 24, 2011 at 9:50 AM, Stack wrote:
> In HFileOutputFormat it says this near top:
>
>// Invented config. Add to hbase-*.xml if other than default
> compression.
>final String compression = conf.get("hfile.compression",
> Compression.Algorithm.NONE.getName());
>
> You might
Not sure if there is a way to do that. You could get a really rough
estimate if you did the job I described and subtracted the total bytes
calculated for the records from the "hadoop fs -dus /hbase/"
bytes. Then that would give an idea of the amount of overhead. I have
a feeling it is negligible
Good idea. But it seems like this approach would give me the size of just
the raw data itself, ignoring any kind of container (like HFiles) that are
used to store the data. What I'd like ideally is to get an idea of what the
fixed cost (in terms of bytes) is for each of my tables, and then understan
Norbert,
It would probably be best if you wrote a quick MapReduce job that
iterates over those records and outputs the sum of bytes for each one.
Then you could use that output and get some general descriptive
statistics based on it.
Cheers,
-Xavier
On 1/24/11 9:37 AM, Norbert Burger wrote:
In HFileOutputFormat it says this near top:
// Invented config. Add to hbase-*.xml if other than default compression.
final String compression = conf.get("hfile.compression",
Compression.Algorithm.NONE.getName());
You might try messing with this config?
St.Ack
On Sun, Jan 23, 201
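Assuming that invented key is still read, a minimal hbase-site.xml override might look like this ("gz" is the name Compression.Algorithm.GZ reports; LZO would need the codec installed):

```xml
<!-- Sketch only: override the default HFile compression. -->
<property>
  <name>hfile.compression</name>
  <value>gz</value>
</property>
```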
Thank you.
St.Ack
On Sun, Jan 23, 2011 at 4:37 PM, Tost wrote:
> Thanks, Stack.
> I edited http://wiki.apache.org/hadoop/Hbase after writing the hbase-jdo document.
>
> 2011/1/22 Stack
>
>> Thank you for the sweet addition.
>>
>> On Fri, Jan 21, 2011 at 12:09 AM, Tost wrote:
>> >> I want to contribute
Hi folks - is there a recommended way of estimating HBase HDFS usage for a
new environment?
We have a DEV HBase cluster in place, and from this, I'm trying to estimate
the specs of our not-yet-built PROD environment. One of the variables we're
considering is HBase usage of HDFS. What I've just t
Sorry, do you mean build in the below?
For how to checkout hadoop, see this page:
http://hadoop.apache.org/common/version_control.html
To get the append branch, you'd use the URL
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append/
when checking out instead of the one suppli
Want to try with one of the filters we ship with to see if it has
same issue? If so, please file an issue. Thats pretty serious bug.
Thanks,
St.Ack
2011/1/24 Yifeng Jiang :
> Hi,
>
> I have a MyFilter class that extends FilterBase, and a MyInputFormat that extends
> hbase.mapred.TableInputFormat, the dep
Yeah, they are so close. In some API redesigns they had the same
ancestor. If you make a patch that adds a Put constructor that takes
a Result, we'll commit it. It seems like Put should minimally take a
row and a list of KVs.
St.Ack
On Mon, Jan 24, 2011 at 12:04 AM, Vishal Kapoor
wrote:
> I think it
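Until such a constructor exists, a hedged sketch of copying a Result into a Put by hand (0.90-era getters assumed; `newRowKey` and `result` are placeholders):

```java
// Sketch only: rebuild a Put from a Result under a different row key,
// preserving family, qualifier, timestamp, and value of each cell.
Put put = new Put(newRowKey);                        // hypothetical byte[] key
for (KeyValue kv : result.raw()) {
    put.add(kv.getFamily(), kv.getQualifier(),
            kv.getTimestamp(), kv.getValue());
}
```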
Thanks.
-Pete
-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent: Monday, January 24, 2011 9:16 AM
To: user@hbase.apache.org
Subject: Re: IOException from HBase Client
On Mon, Jan 24, 2011 at 8:55 AM, Peter Haidinyak wrote:
> Question: If I
On Mon, Jan 24, 2011 at 8:55 AM, Peter Haidinyak wrote:
> Question: If I want to connect to HBase from a remote client what jars do I
> need on the client side?
>
hadoop, hbase, and zookeeper jars. Looking at your version, I think
you need the guava jar too (In 0.90.0, work was done to undo cli
Sorry, forgot that information
HBase Jar: hbase-0.89.20100924+28.jar
Hadoop Jar: hadoop-core-0.20.2-737.jar
Turns out when I took the hadoop directory out of the classpath it started to
work.
Question: If I want to connect to HBase from a remote client what jars do I
need on the client side?
> This is fixed in CDH3 via HDFS-630:
> https://issues.apache.org/jira/browse/HDFS-630
My bad that's HDFS-611: https://issues.apache.org/jira/browse/HDFS-611
Best regards,
- Andy
Problems worthy of attack prove their worth by hitting back.
- Piet Hein (via Tom White)
--- On Mon, 1/24/1
Martin,
The trouble was due to a defect in how HDFS partitioned deletion work
among the datanodes. Especially under high write load, HBase can post a
lot of deletes due to compactions. Running the balancer just makes it worse --
additional replications into the face of uneven dele
Hello,
in one old thread regarding Hadoop/HBase 0.19.x, Andrew Purtell wrote
that running the DFS balancer while HBase is running is not recommended. I
didn't find any remarks about this in the Hadoop or HBase documentation.
http://mail-archives.apache.org/mod_mbox/hbase-user/200905.mbox/%3c812604.4
Hi,
I have a MyFilter class that extends FilterBase, and a MyInputFormat that
extends hbase.mapred.TableInputFormat (the deprecated mapred APIs).
It seems that the filter is not invoked when there is only a small
amount of data in the table.
This is the code in my InputFormat's configure method.
String startDate
I think it boils down to: how can I make a Put out of a Result object,
barring the rowid...
thanks,
Vishal
On Mon, Jan 24, 2011 at 2:37 AM, Vishal Kapoor
wrote:
> I have table tableCombined : family 'live', family 'a', family 'b', family
> 'c'
>
> and also have almost static tables below
> tableA