Thanks, I think it's really strange too.
Is there some kind of cache for Filters? A restart may solve the
problem, but not every time.
I'll recheck my code later to see if I did anything wrong. If a simple test
case still shows this problem, I will post another mail to ask for help.
Thank you, Stack and
So is the master alive?
On Fri, Feb 10, 2012 at 12:21 PM, Tomas Tillery wrote:
> I stand corrected - The region servers were still connected because they
> had remained connected. Restarting them resulted in them also receiving a
> Connection refused response. Error log follows.
>
> 12/02/10 20:1
On Fri, Feb 10, 2012 at 12:09 PM, Doug Meil
wrote:
>
> Hi folks-
>
> The HBase Book/RefGuide has been updated
Sweet.
St.Ack
I think it would at least be inconsistent with how we handle
Filter.filterKeyValue(KeyValue), which is done in ScanQueryMatcher after
deleted cells are skipped.
From looking at the RegionScannerImpl.nextInternal code it seems
filterRowKey(byte[]) should not see deleted row keys either, as it sh
I stand corrected - The region servers were still connected because they
had remained connected. Restarting them resulted in them also receiving a
Connection refused response. Error log follows.
12/02/10 20:12:24 INFO zookeeper.ClientCnxn: Opening socket connection to
server master/192.168.1.1:218
Thanks for doing such a great, consistent job providing explanations and
documentation about HBase, Doug!
On Feb 10, 2012, at 2:09 PM, Doug Meil wrote:
Hi folks-
The HBase Book/RefGuide has been updated
http://hbase.apache.org/book.html
In particular, there is now a description of the compact
Hi folks-
The HBase Book/RefGuide has been updated
http://hbase.apache.org/book.html
In particular, there is now a description of the compaction selection
algorithm….
http://hbase.apache.org/book.html#compaction
… many thanks to Nicholas for providing insight into this.
Also, a section for copro
A lot of your design depends on your read/write rate & the amount of
duplication in your inserts. For example, if your read rate is really low
and your write rate is really high with a low dedupe, you could try:
Row = USER_ID
Column Qualifier = PRODUCT_ID
MAX_VERSIONS = 1
Setting the max version
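The dedupe-via-versions idea above could be declared in the HBase shell roughly like this (table name 'purchases' and family name 'p' are placeholders, not from the thread):

```
create 'purchases', {NAME => 'p', VERSIONS => 1}
# With VERSIONS => 1, a second put to the same row (USER_ID) and
# qualifier (PRODUCT_ID) replaces the first after compaction, so
# duplicate inserts dedupe themselves without a read-check.
put 'purchases', 'user42', 'p:product7', 'seen'
put 'purchases', 'user42', 'p:product7', 'seen'
```

The trade-off is that you lose any history of repeated inserts, which is exactly what you want when the dedupe rate matters more than the audit trail.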
Also, there is a description of what is in META and ROOT in here...
http://hbase.apache.org/book.html#arch.catalog
... and it also describes the startup sequencing.
On 2/10/12 10:46 AM, "Harsh J" wrote:
>The client does communicate with the master to perform .META. changing
>interactions
On Fri, Feb 10, 2012 at 3:21 AM, Tim Robertson
wrote:
> We are using PE scan to try and "standardize" as much as possible.
>
Fair enough.
> Since CDH3u3 is ongoing as I type, I'm not sure on the regions (<50
> regions on 3 RS with the PE TestTable).
>
Why are you not sure? It's just taking a lo
On Fri, Feb 10, 2012 at 1:13 AM, 魏超 wrote:
> I'm using *HBase-0.92.0rc4*, and there is a problem that really confuses me.
> I defined a CustomFilter to filter some rows; I overrode the method "*
> filterRowkey*" to print out which rows the filter has met.
>
> And there is one row deleted through the JAV
On Fri, Feb 10, 2012 at 4:49 AM, bsnively wrote:
>
> I am trying to test out a POC using HBase -- and am trying to add a bloom
> filter to a table that already exists.
>
> The way I'm trying to add it seems to keep complaining in the hbase shell --
> and I can't find any detailed steps of what I'm d
The client does communicate with the master to perform .META. changing
interactions (create/delete tables, etc.). And for the rest, like
locating regions off regionservers and reading off them, the master
isn't touched (afaik).
The master's work is also more about running/providing cluster
managem
Thanks for your reply. If the -ROOT- and .META. tables are managed by
two RegionServers separately, what is the functionality of the Master node?
Does it only assign regions in the cluster? So the client only
needs to contact these two special RegionServers, which contain
the -ROOT- and .META.
I am trying to test out a POC using HBase -- and am trying to add a bloom
filter to a table that already exists.
The way I'm trying to add it seems to keep complaining in the hbase shell --
and I can't find any detailed steps of what I'm doing wrong.
I was trying to do alter 'eventTable', {BLOOMFI
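For reference, the usual shape of that command in 0.92 is below; the column family name 'cf' is a placeholder for whichever family the table actually has, and since online schema change was off by default in that release, the table is disabled first. A common cause of the shell complaining is omitting the NAME key inside the braces.

```
disable 'eventTable'
alter 'eventTable', {NAME => 'cf', BLOOMFILTER => 'ROW'}
enable 'eventTable'
```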
Hi,
On Wed, Feb 8, 2012 at 8:22 PM, Ted Yu wrote:
> Looking at src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java:
> public int scannerOpenWithPrefix(ByteBuffer tableName,
> ByteBuffer startAndPrefix,
> List co
Hello,
On Wed, Feb 8, 2012 at 3:25 PM, Wojciech Langiewicz
wrote:
> Hi,
> AFAIK this is not possible, unless you are using HBase 0.92 with
> coprocessors ( https://blogs.apache.org/hbase/entry/coprocessor_introduction
> ), but even then I really doubt this feature will be included in Thrift API
>
The HMaster does not host regions, and -ROOT- is a region; it is
hosted by one of the assigned RegionServers, and its location is
registered under ZooKeeper. The -ROOT- region then holds the location
of the .META. (which, again, is another region, and is hosted by
RegionServers in just the same
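The chain described above can be pictured with a toy sketch (plain Maps standing in for the actual regions; hostnames and keys are made up, and this is not the real client code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative three-level lookup: ZooKeeper -> -ROOT- -> .META. -> user region.
public class LookupSketch {
    static final String ROOT_HOST = "rs1:60020";             // from ZooKeeper
    static final Map<String, String> ROOT = new HashMap<>(); // -ROOT- region content
    static final Map<String, String> META = new HashMap<>(); // .META. region content
    static {
        ROOT.put(".META.", "rs2:60020");          // where .META. is hosted
        META.put("myTable,row123", "rs3:60020");  // where the user region is hosted
    }

    // Resolve which RegionServer hosts a row; the HMaster is never consulted.
    static String resolve(String tableAndRow) {
        String metaHost = ROOT.get(".META."); // step 2: read -ROOT- (on ROOT_HOST)
        return META.get(tableAndRow);         // step 3: read .META. (on metaHost)
    }

    public static void main(String[] args) {
        System.out.println(resolve("myTable,row123"));
    }
}
```

The real client also caches these locations, so the full walk only happens on the first lookup or after a region moves.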
Thanks!
I know this. I just want to know which node stores this information
when the client first contacts the HBase cluster: the HMaster, a
RegionServer, or a special node that runs ZooKeeper.
And the other question is whether ZooKeeper runs on the same nodes as
HBase in the cluster or it runs
> Is HIVE involved? Or is it just raw scan compared to TFIF?
No Hive
> Is this a MR scan or just a shell serial scan (or is it still PE?)?
We are using PE scan to try and "standardize" as much as possible.
> You want to get this scan speed up only? You are not interested in figuring
> how
>
To my knowledge, it is a three-level, tree-like structure.
--
Sent from a mobile device
-- Original --
From: "yonghu"
Date: Fri, Feb 10, 2012 07:12 PM
To: "user";
Subject: Which server store the root and .meta. information?
Hello,
I read some articles which ment
Hello,
I read some articles which mention that before the client connects to the
master node, it will first connect to the ZooKeeper node and find the
location of the root node. So my question is: is the node which
stores the root information different from the master node, or are they
the same node?
T
Just use
*Bytes.toBytes(System.currentTimeMillis())* or something like *Bytes.toBytes(new
Date(someTime).getTime())* as rowkeys (you can even append some other data
before or after the time).
When you search, just use
*scan.setStartRow(Bytes.toBytes(new Date("specified start-time").getTime()));
*
*scan.setE
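A self-contained sketch of that key shape (plain java.nio instead of HBase's Bytes utility, which also writes longs big-endian): with an 8-byte big-endian timestamp as the rowkey, byte order equals time order, so a time range becomes a start/stop row pair.

```java
import java.nio.ByteBuffer;

// Timestamp-as-rowkey sketch: rows sort chronologically because HBase
// orders rowkeys by unsigned lexicographic byte comparison.
public class TimeKeySketch {
    // Equivalent of Bytes.toBytes(long): 8 bytes, big-endian.
    static byte[] toRowKey(long millis) {
        return ByteBuffer.allocate(8).putLong(millis).array();
    }

    // Unsigned lexicographic compare, the order HBase uses for rowkeys.
    static int compareRowKeys(byte[] a, byte[] b) {
        for (int i = 0; i < 8; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return 0;
    }
}
```

One caveat worth knowing: monotonically increasing rowkeys concentrate all writes on a single region at a time, so a write-heavy load may want a prefix (salt or metric id) in front of the timestamp.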
Maybe you can build an index table, like
rowkey:[USER_ID/ProductID] = { rk => main-table's rowkey }
When viewing a product, check the index, find the rk, and use the rk to get
the row from the main table. To delete, delete that row, then fix up the
index table's rk.
Of course, using a coprocessor to handle this may make it simpler...
201
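The index-table pattern above can be sketched in memory (plain Maps standing in for the two HBase tables; the USER_ID/PRODUCT_ID key shape is taken from the suggestion, everything else is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory sketch of a secondary-index table kept alongside a main table.
public class IndexSketch {
    static final Map<String, String> main = new HashMap<>();  // main table: rowkey -> data
    static final Map<String, String> index = new HashMap<>(); // index: "user/product" -> main rowkey

    static void put(String user, String product, String mainKey, String data) {
        main.put(mainKey, data);
        index.put(user + "/" + product, mainKey); // the "rk" cell
    }

    static String view(String user, String product) {
        String rk = index.get(user + "/" + product); // check index, find rk
        return rk == null ? null : main.get(rk);     // fetch from main table
    }

    static void delete(String user, String product) {
        String rk = index.remove(user + "/" + product);
        if (rk != null) main.remove(rk);             // delete row, fix index
    }
}
```

In real HBase these would be two Puts/Deletes against two tables, and since that pair is not atomic, a coprocessor (or careful write ordering) is what keeps the index consistent, which is the point of the coprocessor suggestion.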
I'm using *HBase-0.92.0rc4*, and there is a problem that really confuses me.
I defined a CustomFilter to filter some rows; I overrode the method "*
filterRowkey*" to print out which rows the filter has met.
And there is one row deleted through the JAVA client:
*htable.delete(new Delete(theRowkey))*
but
Let me rephrase my question a bit: what is a common row key pattern for
time-series data? The OpenTSDB schema is a common reference, but it's limited
to per-metric time-series data.
Regards,
Alex
2012/2/9 Alex Vasilenko
> Ted,
>
> Scan#setTimeRange filters columns, not rows, I think it's not optimal