hbase client threads, probably.
Enis
On Mon, Mar 13, 2017 at 2:57 PM, anil gupta wrote:
I think you need to set that property before you make HBaseConfiguration
object. Have you tried that?
On Mon, Mar 13, 2017 at 10:24 AM, Henning Blohm
wrote:
Unfortunately it doesn't seem to make a difference.
Are you using Java client ?
See the following in HTable :
public static ThreadPoolExecutor getDefaultExecutor(Configuration conf) {
  int maxThreads = conf.getInt("hbase.htable.threads.max", Integer.MAX_VALUE);
FYI
On Mon, Mar 13, 2017 at 9:14 AM, Henning Blohm
wrote:
Hi,
I am running an HBase client on a very resource limited machine. In
particular numproc is limited so that I frequently get "Cannot create
native thread" OOMs. I noticed that, in particular in write situations,
the hconnection pool grows into the hundreds of threads - even when at
most wri…
Hi,
I am considering to run the same application for two different tenants
on the same HBase cluster. Instead of changing table names, I would
prefer to make use of the namespace feature. Ideally, I would be able to
only change client connectivity configuration (*-site.xml properties)
and lea…
While running with dfs.client.read.shortcircuit set to true I ran into
an OOM on a region server that subsequently died.
Probably this was due to too little configured direct memory.
However, after bringing the cluster up again, one region of a table got
stuck in transition. More specifically, the ma…
On a 10-node cluster running Hadoop 2.6 and HBase 1.0 we would like to
be able to restore the whole cluster from disk snapshot backups.
Given that snapshots for data nodes and name node may not be taken at
the 100% exact same point in time, is there a safe variant that at least
means the cl
How true!! ;-)
Thanks,
Henning
On 18.04.2016 19:53, Dima Spivak wrote:
Probably better off asking on the Hadoop user mailing list (
u...@hadoop.apache.org) than the HBase one… :)
-Dima
On Mon, Apr 18, 2016 at 2:57 AM, Henning Blohm
wrote:
Hi,
in our Hadoop 2.6.0 cluster, we need to pass some properties to all
Hadoop processes so they can be referenced using ${...} syntax in
configuration files. This works reasonably well using
HADOOP_NAMENODE_OPTS and the like.
For Map/Reduce jobs however, we need to specify not only
mapred.
…"parallel server-side" should show much better improvements.
Is there anything like that already available that I should look into?
Thanks,
Henning
--
Henning Blohm
*ZFabrik Software GmbH & Co. KG*
T: +49 6227 3984255
F: +49 6227 3984254
M: +49 1781891820
Lammstrasse 2 69190 Walldorf
Right, no big issue in reality. And so far it doesn't seem that deleting
those files has any negative impact on that (demo) installation.
Thanks,
Henning Blohm
*ZFabrik Software KG*
T: +49 6227 3984255
F: +49 6227 3984254
M: +49 1781891820
Lammstrasse 2 69190 Walldorf
…directory?
Look at 9.6.5.3.1 in http://hbase.apache.org/book/regionserver.arch.html
They might be corrupt log files that we cannot replay. So it might be safe
to remove, but you might have some data lost there...
JM
2014-08-20 10:29 GMT-04:00 Henning Blohm :
Nobody?
Well... I will try and see what happens...
Thanks,
Henning
On 08/11/2014 09:28 PM, Henning Blohm wrote:
Lately, on a single node test installation, I noticed that the Hadoop/Hbase
folder /hbase/.corrupt got quite big (probably due to failed log splitting
due to lack of disk space).
Is it safe to simply delete that folder?
And, what would one possibly do with those problematic WAL logs?
Thanks,
Henning
We operate a solution that stores large amounts of data in HBASE that needs
to be available for online access.
For efficient scanning, there are three pieces of data encoded in row keys
(in particular a time dimension) and for other reasons some columns hold
JSON encoded data.
Currently, analytic
Ted Yu wrote:
Please take a look at HBASE-8089 which is an umbrella JIRA.
Some of its subtasks are in 0.96
bq. claiming that short keys (as well as short column names) are relevant
bq. Is that also true in 0.94.x?
That is true in 0.94.x
Cheers
On Tue, Jan 14, 2014 at 6:56 AM, Henning Blohm wrote:
Hi,
for an application still running on HBase 0.90.4 (but moving to 0.94.6)
we are thinking about using more efficient composite row keys compared
to what we use today (fixed-length strings with "/" separator).
I ran into http://hbase.apache.org/book/rowkey.design.html claiming that
short keys (as well as short column names) are relevant. Is that also
true in 0.94.x?
Hi, we're in the process of moving to Apache, so will keep
you posted once the transition is complete.
Thanks,
James
On Fri, Jan 3, 2014 at 1:11 PM, Henning Blohm wrote:
Hi James,
this is a little embarrassing... I even browsed through the code and read
it as implementing a region level index.
/forcedotcom/phoenix/wiki/Secondary-Indexing
Thanks,
James
On Fri, Jan 3, 2014 at 12:46 PM, Henning Blohm wrote:
When scanning in order of an index and you use RLI, it seems, there is no
alternative but to involve all regions - and essentially this should happen
in parallel as otherwise you
it has global indexing only. Correct,
James?
The RLI impl from Huawei (HIndex) has some numbers wrt regions, but I
doubt whether they cover a large number of RSs. Do you have some data,
Rajesh Babu?
-Anoop-
On Fri, Jan 3, 2014 at 3:11 PM, Henning Blohm
wrote:
Jesse, James, Lars,
after l
of region servers assuming homogeneously
distributed data?
Thanks,
Henning
On 24.12.2013 12:18, Henning Blohm wrote:
All that sounds very promising. I will give it a try and let you know
how things worked out.
Thanks,
Henning
On 12/23/2013 08:10 PM, Jesse Yates wrote:
The work that James
also gives a good overview of the pluggability of his implementation:
http://files.meetup.com/1350427/PhoenixIndexing-SF-HUG_09-26-13.pptx
Thanks,
James
On Mon, Dec 23, 2013 at 3:47 AM, Henning Blohm wrote:
Lars, that is exactly why I am hesitant to use one of the core-level generic
approaches (apart from having difficulties identifying the still active
projects): I have doubts I can sufficiently explai…
…transactions.
-- Lars
________
From: Henning Blohm
To: user
Sent: Sunday, December 22, 2013 2:11 AM
Subject: secondary index feature
…secondary index to HBASE core.
FYI
On Dec 22, 2013, at 2:11 AM, Henning Blohm
wrote:
Lately we have added a secondary index feature to a persistence tier
over HBASE. Essentially we implemented what is described as
"Dual-Write
Secondary Index" in
http://hbase.apache.org/book/seconda
…HBASE level itself,
but as a toolbox / utility style library.
Is anybody on the list aware of anything useful already existing in that
space?
Thanks,
Henning Blohm
*ZFabrik Software KG*
T: +49 6227 3984255
F: +49 6227 3984254
M: +49 1781891820
Lammstrasse 2 69190 Walldorf
…if you want a new row for each TS in
your logical model, you should manage the time dimension yourself.
Otherwise, if you have identities (i.e. rows) with many versions, the
builtin TS might be better.
-- Lars
________
From: Henning Blohm
To: user
Sent: Saturday, August 10, 2013 6:26 AM
Subject: Using HBase timestamps as natural versioning
Hi,
we are managing some naturally time versioned data in HBase. That is,
there are change events that have a specific time set and when such
event is handled, data in HBase, pertaining to the exact same point in
time, is updated.
So far we are using HBase time stamps to model the time dimension…
On 01/19/2012 06:08 PM, Stack wrote:
On Thu, Jan 19, 2012 at 1:31 AM, Henning Blohm wrote:
removing the zookeeper data dir did the trick. The master stopped again
after complaining that it couldn't resolve the old node name (which I was
only able to grep under the dfs data - any idea why the master…
…documentation on what to do if
you need to rename nodes (in particular master nodes)?
Thanks again!
Henning
On 01/17/2012 09:02 PM, Stack wrote:
On Tue, Jan 17, 2012 at 7:19 AM, Henning Blohm wrote:
Hi,
After an upgrade of hadoop and hbase (to 0.90.4-cdh3u2) from 0.90 hbase
and 0.20-append hadoop on a single node test installation everything
worked fine initially.
Then there was some DNS changes and host name changes which resulted in
a lot "hostname cannot be resolved" problems in the
We had/have the same issue. Posted about it on the hadoop lists. So did
many others.
Yes, you can always try to investigate your job code some more and may
still find some unfriendly library not shutting down all non-daemon
threads. But that is a nonsensical approach. M/R knows when things are…
hbase-site.xml is also standard, except for the cluster config (i.e. the
zookeeper quorum config etc).
Just noticed that there is a gc log. I will look into that as well.
Currently retrying with 2G heap.
Thanks,
Henning
On 07/11/2011 06:24 PM, Stack wrote:
On Mon, Jul 11, 2011 at 1:04 AM,
in some
likely broken state after the OOM until it eventually dies anyway?
But, on the good side, 0.90.3 is notably faster at writing than 0.20.6.
Thanks,
*Henning Blohm*
*ZFabrik Software KG*
T: +49/62278399955
F: +49/62278399956
M: +49/1781891820
Bunsenstrasse 1
69190 Walldorf
+1 from here as well
Please let delete work as if it was just a special marker value of a
column (i.e. with a time stamp and all).
On Fri, 2011-01-07 at 19:24 -0800, M. C. Srivas wrote:
> +1
>
> Just a clarification : by delete-forward, do you mean that a delete of a
> non-existent key causes
…HTable.setWriteBufferSize() and
> HTable.setAutoFlush() for details (but please note that you then do
> need to call HTable.flushCommits() in your close() method of the
> mapper class). That will help a lot speeding up writing data.
>
> Lars
>
> On Fri, Nov 19, 2010 at 3:43 PM, Henning Blohm
I'd say 20-30 machines and up. Some would say "Use MySQL for
> this little data" but that is not fair given that we do not know what
> your targets are. Bottom line is, you will see issues (like slowness)
> with 3 nodes that 8 or 10 nodes will never show.
>
> HTH,
>
We have a Hadoop 0.20.2 + Hbase 0.20.6 setup with three data nodes
(12GB, 1.5TB each) and one master node (24GB, 1.5TB). We store a
relatively simple
table in HBase (1 column family, 5 columns, rowkey about 100 chars).
In order to better understand the load behavior, I wanted to put 5*10^8
rows
Henning,
>
> Try doing flush '.META.' and major_compact '.META.' in the hbase
> shell. It worked once for me. I hope it helps.
>
> Cheers,
> hari
>
> On Fri, Nov 12, 2010 at 6:43 PM, Henning Blohm
> wrote:
>
Hi again,
we have a 1 master, 3 data nodes Hadoop+HBase cluster for a PoC. I ran
into "Too many open files" errors on the region server during load
testing. No problem as such. But now, after shutting down and starting
up again, when trying to count how many rows actually made it,
> bin/hadoop…
This is actually a serious question. I need to derive change information
from versioned data
in HBase. So in particular I need to find out whether a specific column
value was deleted during
some time range.
The way delete works currently, as far as I can tell, the only way to
achieve a "delete"
changed the code accordingly and, as far as the tests verify it, it
works. Great!
> Then you can even setCaching to some high number for really fast
> scanning, although deleting will still be the bottleneck.
>
> J-D
Thanks!
Henning
>
> On Tue, Nov 2, 2010 at 3:14 AM, Henning
Hi,
I need to delete a range of rows from an HBase table. A time-to-live
setting as proposed in
http://www.mail-archive.com/hbase-u...@hadoop.apache.org/msg09492.html
will not do as there will be no clear point in time when that clean-up
will be required / advised.
The way it is implemented…
Thanks St. Ack
On Wed, 2010-10-20 at 20:22 -0700, Stack wrote:
> On Tue, Oct 12, 2010 at 11:37 AM, Stack wrote:
> > On Tue, Oct 12, 2010 at 2:27 PM, wrote:
> >> It's all running fine and without exceptions before and after killing and
> >> restarting.
> >>
> >> Every removed shutdown hook is
Configuration
objects plus it has addResource(InputStream).
I sent a note to the hadoop mailing list on this.
Thanks,
Henning
On Wed, 2010-10-13 at 18:06 -0400, Stack wrote:
> On Wed, Oct 13, 2010 at 10:09 AM, Henning Blohm
> wrote:
Hi,
I used
Configuration cfg = new Configuration();
cfg.addResource(in); // in being the InputStream
on org.apache.hadoop.conf.Configuration to create a hadoop configuration.
That configuration is used to construct HBaseConfiguration objects several
times later on:
HBaseConfiguration hbcfg = new HBaseConfiguration(cfg);
ad clients somehow.
Thanks,
henning
-- Original message --
Subject: Re: HBase client shutdown hook hangs
From: Stack
Date: 12.10.2010 20:08
On Tue, Oct 12, 2010 at 7:06 AM, Henning Blohm wrote:
> when running an application that interfaces with an HBase installation
> (v. 0.20.6) I notice
Hi,
when running an application that interfaces with an HBase installation
(v. 0.20.6) I noticed that the HBase shutdown hook hangs. This does not
happen every time, but frequently. This is the call stack when it hangs:
HCM.shutdownHook:
java.lang.Object.wait(Native Method)
java.lang.Object.w…