Hi Michel,
Thanks for your reply. I believe your idea works both in theory and in
practice. But the problem I'm worried about lies not in
memory usage, but in network performance. If I query all the
indexed rows from the index tables, pull all
of them to the client, and push them to the temp
Congrats! Seems promising
Mikael.S
On Wed, May 16, 2012 at 8:07 AM, lars hofhansl lhofha...@yahoo.com wrote:
The HBase Team is pleased to announce the release of HBase 0.94.0.
Download it from your favorite Apache mirror [1].
HBase 0.94.0 is wire compatible with 0.92.x releases.
- 0.92
Hello Bryan Oliver,
I am using suggestions from both of you to do the bulk upload. The problem
I am running into is that the job that uses 'HFileOutputFormat.
configureIncrementalLoad' is taking very long to complete. One thing I
noticed is that it's using only 1 Reducer.
When I looked at the
Hello,
When I import data into HBase, I sometimes encounter the following error.
Please help me take a look.
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up
proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to 1/
192.168.0.118:60020 after attempts=1
at
Thanks for the response Ron,
If I'm reading that right, the maximum number of concurrent RPC requests that
an HBase region server can serve is 10 by default, and adjustable via
this property?
This would have some interesting implications for our MapReduce jobs.
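The property being discussed is presumably `hbase.regionserver.handler.count`, whose default was 10 in HBase releases of this era. A hedged sketch of how one might raise it in hbase-site.xml (the value 30 is purely illustrative; tune it for your workload):

```xml
<!-- hbase-site.xml: number of RPC handler threads per region server.
     Default was 10 in this era; 30 here is an illustrative value. -->
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>
</property>
```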
On Tue, May 15, 2012 at 8:02 PM,
In the HBase API, there are classes defined at each level of the structure. For
example: HRegionInterface, HRegionInfo (HRegion), Store, StoreFile. I am
not sure why there doesn't seem to be a clear way to traverse this
hierarchical structure.
I can think of some downsides of using HDFS to get such
Yes, it is fixed in CDH4. It will be in the coming release.
Thanks,
Jimmy
On Tue, May 15, 2012 at 5:34 PM, Ted Yu yuzhih...@gmail.com wrote:
Hopefully this gets fixed in
https://repository.cloudera.com/artifactory/public/org/apache/hbase/hbase/0.92.0-cdh4b2-SNAPSHOT/
A developer from
Ok...
I think you need to step away from your solution and look at the problem
from a different perspective.
From my limited understanding of coprocessors, this doesn't fit well with what
you want to do.
I don't believe you want to run an M/R query within a coprocessor.
In short,
On May 16, 2012, at 1:12 AM, fding hbase wrote:
But sadly, HBase IPC doesn't allow a coprocessor chaining mechanism...
Someone mentioned on
http://grokbase.com/t/hbase/user/116hrhhf8m/coprocessor-failure-question-and-examples
:
If a RegionObserver issues RPC to another table from any of the
I think we need to look at the base problem that is trying to be solved.
I mean the discussion on the RPC mechanism, but the problem the OP is
trying to solve is how to use multiple indexes in a 'query'.
Note: I put ' ' around query because it's an M/R job or a single thread where the
The way we've done this in one of our tools is to get the list of
regions from .META., then filter them by the tables we want. We then
figure out the path all the way up to the column family:
colfamilyPath = /table/region/family/
Then we do fs.getFileStatus(colfamilyPath) and get the individual list
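A minimal sketch of the path assembly described above. The root dir, table, region, and family names here are placeholder values; on a real cluster the encoded region name would come from scanning .META., and the resulting path would be handed to the HDFS client (e.g. `fs.listStatus`) to enumerate store files.

```java
// Hypothetical sketch: assemble the column-family directory path for the
// 0.92-era HDFS layout: <hbase root>/<table>/<encoded region>/<family>/
public class ColFamilyPath {

    static String colFamilyPath(String hbaseRootDir, String table,
                                String encodedRegionName, String family) {
        return hbaseRootDir + "/" + table + "/"
                + encodedRegionName + "/" + family + "/";
    }

    public static void main(String[] args) {
        // With this path in hand, one would list the directory via the
        // Hadoop FileSystem API to get the individual store files.
        System.out.println(colFamilyPath("/hbase", "mytable", "1a2b3c", "cf"));
    }
}
```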
With respect to #2, it's more than that. StoreFiles get written on
MemStore flushes but are also re-written whenever compactions happen. So
regardless of where you get this information, you have to expect that the
answer is going to change.
On 5/16/12 12:22 PM, Chen Song
Many people will probably try to use coprocessors as a way of implementing
app logic on top of HBase without the headaches of writing a daemon.
Sometimes client-side approaches are inadvisable; for example, there may be
several client languages/runtimes and the app logic should not be
David,
It's not a question of a daemon; it's a question of the problem you are trying to
solve.
Using this as an example: you are not always going to select data from a given
table using the same query. So you will not always want to use the index
on column A and then the index on
On Wed, May 16, 2012 at 2:40 PM, Dave Revell d...@urbanairship.com wrote:
Many people will probably try to use coprocessors as a way of implementing
app logic on top of HBase without the headaches of writing a daemon.
Sometimes client-side approaches are inadvisable; for example, there may be
Great job!
-----Original Message-----
From: lars hofhansl [mailto:lhofha...@yahoo.com]
Sent: May 16, 2012, 13:08
To: hbase-user
Subject: ANN: HBase 0.94.0 is available for download
The HBase Team is pleased to announce the release of HBase 0.94.0.
Download it from your favorite Apache mirror [1].
HBase 0.94.0 is
On Thu, May 17, 2012 at 6:28 AM, Andrew Purtell apurt...@apache.org wrote:
Are HBase coprocessors explicitly wrong for this use case if the app logic
needs to access multiple regions in a single call?
Not coprocessors in general. The client side support for Endpoints
(Exec, etc.) gives
On Wed, May 16, 2012 at 6:43 PM, fding hbase fding.hb...@gmail.com wrote:
Not coprocessors in general. The client side support for Endpoints
(Exec, etc.) gives the developer the fiction of addressing the cluster
as a range of rows, and will parallelize per-region Endpoint
invocations, and
Two ways:
1. To stop all DNs, use hadoop-daemons.sh (notice the plural). Running
hadoop-daemons.sh stop datanode will stop all datanodes listed in the
slaves file.
2. To stop an individual DN, ssh the command to the specific box:
{{ssh dnbox01 hadoop-daemon.sh stop datanode}}.
On Thu, May 17, 2012
Leon, have a look at HBaseWD to solve key-distribution
problems: https://github.com/sematext/HBaseWD#readme
Here is a post about it that includes some figures, some performance graphs,
and code:
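For context, the core idea behind HBaseWD is row-key "salting": prepend a small bucket prefix derived from the original key, so that monotonically increasing keys spread across several regions instead of hammering one. A hedged, self-contained sketch of that idea (not HBaseWD's actual API; the bucket count of 8 is an arbitrary assumption):

```java
// Illustrative sketch of row-key salting, the technique HBaseWD implements.
// A bucket prefix computed from the key spreads sequential keys over
// NUM_BUCKETS distinct key ranges (and hence regions).
public class SaltedKeyDemo {

    static final int NUM_BUCKETS = 8; // assumption: 8 buckets for illustration

    static String salt(String originalKey) {
        int bucket = Math.abs(originalKey.hashCode() % NUM_BUCKETS);
        return bucket + "-" + originalKey; // e.g. "3-event-0001"
    }

    public static void main(String[] args) {
        // Sequential keys no longer sort into one contiguous range.
        for (String k : new String[] {"event-0001", "event-0002", "event-0003"}) {
            System.out.println(salt(k));
        }
    }
}
```

Reads then fan out one scan per bucket prefix and merge the results, which is the trade-off salting makes: writes spread evenly, scans multiply by the bucket count.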