> through the logs of hbase master and the last known regionserver to host
> it. Can you share via pastebin?
>
> -Chien
>
> On Fri, May 6, 2016 at 12:18 PM, Serega Sheypak <serega.shey...@gmail.com>
> wrote:
>
>> Hi, I have HBase cluster running on HBase 1.0.
Hi, I have HBase cluster running on HBase 1.0.0-cdh5.4.4
I periodically get NotServingRegionException and I can't find the reason
for it. It happens randomly on different tables.
hbase hbck my_weird_table
reports:
ERROR: Region { meta =>
.
2016-02-29 19:19 GMT+01:00 Stack <st...@duboce.net>:
> On Mon, Feb 29, 2016 at 5:32 AM, Serega Sheypak <serega.shey...@gmail.com>
> wrote:
>
> > Hi, did anyone consider running HBase on top of http://www.alluxio.org/
> ?
> > Does it make sense?
> >
>
Hi, did anyone consider running HBase on top of http://www.alluxio.org/ ?
Does it make sense?
the
> BufferedMutator
> > backing buffer fill and auto flush?
> >
> > Be sure to call close when shutting down else whatever is in the
> > BufferedMutator backing buffer will be lost.
> >
> > St.Ack
> >
> > On Wed, Feb 10, 2016 at 12:33 AM, Serega Shey
It helped, thanks.
Now I have a single reusable BufferedMutator instance and I don't call
.close() after mutations; I call .flush(). Is that OK?
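For reference, a minimal sketch of the pattern being described, assuming an already-created Connection and an illustrative "users" table (HBase 1.0 client API):

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;

class UserWriter {
    // One shared BufferedMutator, created once and reused by all batches.
    private final BufferedMutator mutator;

    UserWriter(Connection connection) throws IOException {
        this.mutator = connection.getBufferedMutator(TableName.valueOf("users"));
    }

    // Per batch: mutate and flush, but do NOT close the mutator here.
    void saveUsers(List<Put> puts) throws IOException {
        mutator.mutate(puts);
        mutator.flush(); // push the backing buffer to the cluster now
    }

    // Once, at application shutdown; close() flushes whatever is left.
    void shutdown() throws IOException {
        mutator.close();
    }
}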
2016-02-09 23:09 GMT+01:00 Stack <st...@duboce.net>:
> On Tue, Feb 9, 2016 at 9:46 AM, Serega Sheypak <serega.shey...@gmail.com>
>
2016-02-09 7:10 GMT+01:00 Stack <st...@duboce.net>:
> On Mon, Feb 8, 2016 at 3:01 PM, Serega Sheypak <serega.shey...@gmail.com>
> wrote:
>
> > Hi, I'm confused with the new HBase 1.0 API. The API says that applications should
> > manage connections (Previousl
List putList = users.collect{toPut(it)}
mutator.mutate(putList)
mutator.close() // exception here
}
}
Exception is still thrown
2016-02-09 15:43 GMT+01:00 Stack <st...@duboce.net>:
> On Tue, Feb 9, 2016 at 6:37 AM, Serega Sheypak <sere
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:158)
at org.apache.hadoop.hbase.client.BufferedMutator$close$0.call(Unknown Source)
2016-02-09 18:46 GMT+01:00 Serega Sheypak <serega.shey...@gmail.com>:
> I've modified my code:
>
> void saveUsers(C
Hi, I'm confused with the new HBase 1.0 API. The API says that applications should
manage connections (previously HConnections) on their own. Nothing is
managed internally now.
Here is an example:
https://hbase.apache.org/xref/org/apache/hadoop/hbase/client/example/BufferedMutatorExample.html
It gives no
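For reference, a minimal sketch of the 1.0-style lifecycle that example implies (table and column names here are made up): the application creates one Connection, hands out short-lived Table instances, and closes the Connection itself at shutdown.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ConnectionDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The application owns the Connection: create once, share, close at shutdown.
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            // Table is lightweight and not thread-safe: one per unit of work, always closed.
            try (Table table = connection.getTable(TableName.valueOf("users"))) {
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
                table.put(put);
            }
        }
    }
}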
Serega,
> >
> > Have you tried using the Java API of HBase to create the table? IMO, invoking a
> > shell script from a Java program to create a table might not be the most
> > elegant way.
> > Have a look at
> >
> >
> https://hbase.apache.org/devapidocs/org/apach
Hi, is there any easy way/example/howto to run a 'create table' shell script
from Java?
Use case: I'm tired of writing table DDL both in shell scripts and in Java for
integration testing. I want to run shell-script table DDL from Java.
Thanks!
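For reference, a hedged sketch of the pure-Java alternative with the HBase 1.0 Admin API (table and family names are made up); this is the usual way to keep one DDL definition for integration tests:

import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

// Java equivalent of the shell DDL: create 'my_table', 'cf'
void createTable(Connection connection) throws IOException {
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("my_table"));
    desc.addFamily(new HColumnDescriptor("cf"));
    try (Admin admin = connection.getAdmin()) {
        if (!admin.tableExists(desc.getTableName())) {
            admin.createTable(desc);
        }
    }
}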
is active?
>
> Thanks!
>
> > On 23 Sep 2015, at 12:14, Serega Sheypak <serega.shey...@gmail.com>
> wrote:
> >
> >> 1. to know which of the HDFS namenode is active
> > add remote cluster HA configuration to your "local" hdfs client
> &
My feeling is that the lower bound for table region count should be:
my_table_region_count > REGION_SERVER_count * 3.
Each region server then gets at least one region of the table, so your read/write
load is distributed evenly across all region servers in any case.
Assumption is that your data is
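As a sketch of getting such a region count up front (a hedged example; the names and hex key range assume evenly distributed keys such as md5-prefixed rowkeys):

import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.Bytes;

// Pre-split the table into REGION_SERVER_count * 3 regions, assuming
// rowkeys are spread evenly over the hex key space.
void createPresplit(Admin admin, int regionServerCount) throws IOException {
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("my_table"));
    desc.addFamily(new HColumnDescriptor("cf"));
    int numRegions = regionServerCount * 3;
    admin.createTable(desc, Bytes.toBytes("00000000"), Bytes.toBytes("ffffffff"), numRegions);
}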
Hm... you can check the underlying table catalog on HDFS and see the time
properties there?
2015-08-14 11:54 GMT+02:00 ShaoFeng Shi shaofeng...@gmail.com:
Hello the community,
In my case, I want to clean up the HTables that are older than a certain number of
days; so we need to get the table's creation time, is
Hi. I used PerformanceEvaluation randomWrite for perf measurement.
Here are my metrics:
-- Timers --
.putTimer
count = 3944591
mean rate = 12389.71 calls/second
1-minute rate = 8853.79
OK, so I have some problems with put.
The randomWrite test shows 28K/sec and a 4 ms response at the 99th percentile.
Mine shows a 30 ms 99th percentile.
I'm doing just htable.put, where the Put object is less than 1 KB
2015-08-12 9:37 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
I agree.
99% = 112.09
.
-Vlad
On Tue, Aug 11, 2015 at 11:24 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, here it is:
https://gist.github.com/seregasheypak/00ef1a44e6293d13e56e
2015-08-12 4:25 GMT+02:00 Vladimir Rodionov vladrodio...@gmail.com:
Can you post a code snippet? A Pastebin link is fine.
-Vlad
On Tue, Aug 11, 2015 at 4:03 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Probably
writes. Try to presplit the table and avoid splitting after
that. Disable splitting completely:
hbase.regionserver.region.split.policy=org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy
-Vlad
On Tue, Aug 11, 2015 at 3:22 AM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, we
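A sketch of applying the advice above per table rather than cluster-wide (hedged; it assumes the HTableDescriptor.setRegionSplitPolicyClassName setter available on 0.94+ descriptors, and the names are made up):

import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy;

// Pre-split once at creation time, then pin the table so it never splits.
void createFixedLayoutTable(Admin admin, byte[][] splitPoints) throws IOException {
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("my_table"));
    desc.addFamily(new HColumnDescriptor("cf"));
    desc.setRegionSplitPolicyClassName(DisabledRegionSplitPolicy.class.getName());
    admin.createTable(desc, splitPoints);
}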
, 2015 at 2:08 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi Vladimir!
Here are graphs. Servlet (3 tomcats on 3 different hosts write to HBase)
http://www.bigdatapath.com/wp-content/uploads/2015/08/01_apps1.png
See how response times jump. I can't explain it. Write load is
really
does not look right.
-Vlad
On Tue, Aug 11, 2015 at 2:08 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi Vladimir!
Here are graphs. Servlet (3 tomcats on 3 different hosts write to HBase)
http://www.bigdatapath.com/wp-content/uploads/2015/08/01_apps1.png
See how response time
Hi, we are using version 1.0.0+cdh5.4.4+160
We have heavy write load, ~10K per second.
We have 10 nodes with 7 disks each. I read some perf notes; they state that
HBase can handle 1K writes per second per node without any problems.
I see some spikes on writers. Write operation timing jumps from
Probably the block was being replicated because of a DN failure, and HBase was
trying to access that replica and got stuck?
I can see that the DN answers that some blocks are missing.
Or maybe you ran the HDFS balancer?
The other thing is that you should always get read access to HDFS by
design; you are not
yourself?
JM
2015-07-19 17:47 GMT-04:00 Serega Sheypak serega.shey...@gmail.com:
Hi, I bumped my testing stuff to CDH 5.4.4 and got a failure while running
tests. Here is a log:
2015-07-19 23:40:18,607 INFO [SERGEYs-MBP:51977.activeMasterManager]
master.TableNamespaceManager
...@cloudera.com:
But do you see that thread printing anything in the logs?
--
Cloudera, Inc.
On Mon, Jul 20, 2015 at 12:07 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
This is a testing utility; it has a few bytes of data to load. Running on
oracle-jdk8
java.lang.Thread.run
OMG, looks like -Djava.net.preferIPv4Stack=true helped...
probably some weird macOS update...
2015-07-20 23:56 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
I downgraded the project with the HBase testing utility to these versions:
<hadoop.version>2.5.0-cdh5.2.0</hadoop.version>
<hadoop.mr.version>2.6.0-mr1-cdh5.4.4</hadoop.mr.version>
<hbase.version>1.0.0-cdh5.4.4</hbase.version>
it hangs...
2015-07-20 23:27 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
I see these lines:
2015-07-20 21:27:21,791 INFO [RegionOpenAndInitThread-hbase:namespace-1]
regionserver.HRegion
Hi, I bumped my testing stuff to CDH 5.4.4 and got a failure while running
tests. Here is a log:
2015-07-19 23:40:18,607 INFO [SERGEYs-MBP:51977.activeMasterManager]
master.TableNamespaceManager (TableNamespaceManager.java:start(85)) -
Namespace table not found. Creating...
2015-07-19 23:40:18,620
11:40 GMT+00:00 Serega Sheypak serega.shey...@gmail.com:
1. No killer features compared to HBase.
2. Terrible!!! Ambari/Cloudera Manager rulezzz. Netflix has its own tool for
Cassandra, but it doesn't support vnodes.
3. Rumors say it's fast when it works ;) The reason: it can
http://blog.parsely.com/post/1928/cass/
Here is a cool blog post. I've used HBase for years and once had a project
with Cassandra. An overcomplicated system with bugs declared as features.
Really, there is no reason to use Cassandra.
Describe your task and I can tell you how to solve it using HBase.
1. No killer features compared to HBase.
2. Terrible!!! Ambari/Cloudera Manager rulezzz. Netflix has its own tool for
Cassandra, but it doesn't support vnodes.
3. Rumors say it's fast when it works ;) The reason: it can silently drop data
you try to write.
4. Timeseries is a nightmare. The easiest
From: Serega Sheypak serega.shey...@gmail.com
To: user user@hbase.apache.org
Sent: Friday, May 22, 2015 3:45 AM
Subject: Re: Optimizing compactions on super-low-cost HW
We don't have money, these nodes are the cheapest. I totally agree that we
need 4-6 HDD
read performance penalty
Enable deferring sync'ing firs
Will try...
2015-05-21 23:04 GMT+03:00 Stack st...@duboce.net:
On Thu, May 21, 2015 at 1:04 AM, Serega Sheypak serega.shey...@gmail.com
wrote:
Do you have the system sharing
There are 2 HDD 7200 2TB each. There is 300GB OS partition
of fixing the hardware and
then you have to start tuning.
On May 21, 2015, at 4:04 PM, Stack st...@duboce.net wrote:
On Thu, May 21, 2015 at 1:04 AM, Serega Sheypak
serega.shey...@gmail.com
wrote:
Do you have the system sharing
There are 2 HDD 7200 2TB each. There is 300GB OS partition
. I'm
not doing banking transactions.
If I disable WAL, could it help?
2015-05-20 18:04 GMT+03:00 Stack st...@duboce.net:
On Mon, May 18, 2015 at 4:26 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, we are using extremely cheap HW:
2 HDD 7200
4*2 core (Hyperthreading)
32GB RAM
, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, any input here?
2015-05-19 2:26 GMT+03:00 Serega Sheypak serega.shey...@gmail.com:
Hi, we are using extremely cheap HW:
2 HDD 7200
4*2 core (Hyperthreading)
32GB RAM
We met serious IO performance issues.
We have more
Maybe you share an HTable instance across several threads?
Can you share the code:
1. HTable initialization
2. HTable.put(something)
2015-05-21 11:57 GMT+03:00 Jithender Boreddy jithen1...@gmail.com:
Hi,
I am inserting data from my java application into two HBase tables
back to back. And I am
Hi!
Please help ^)
2015-05-21 11:04 GMT+03:00 Serega Sheypak serega.shey...@gmail.com:
Do you have the system sharing
There are 2 HDD 7200 2TB each. There is 300GB OS partition on each drive
with mirroring enabled. I can't persuade devops that mirroring could cause
IO issues. What arguments
Hi, any input here?
2015-05-19 2:26 GMT+03:00 Serega Sheypak serega.shey...@gmail.com:
Hi, we are using extremely cheap HW:
2 HDD 7200
4*2 core (Hyperthreading)
32GB RAM
We met serious IO performance issues.
We have more or less even distribution of read/write requests. The same
Hi, we are using extremely cheap HW:
2 HDD 7200
4*2 core (Hyperthreading)
32GB RAM
We met serious IO performance issues.
We have more or less even distribution of read/write requests. The same for
datasize.
ServerName Request Per Second Read Request Count Write Request Count
Hi, in 0.94 we could use the autoFlush method on HTable.
Now HTable shouldn't be used; we are refactoring code to use Table.
Here is a note:
http://hbase.apache.org/book.html#perf.hbase.client.autoflush
When performing a lot of Puts, make sure that setAutoFlush is set to false
on your Table
functionality as
autoflush under the covers.
On Wed, May 13, 2015 at 9:41 AM, Ted Yu yuzhih...@gmail.com wrote:
Please take a look at https://issues.apache.org/jira/browse/HBASE-12728
Cheers
On Wed, May 13, 2015 at 6:25 AM, Serega Sheypak
serega.shey...@gmail.com
wrote:
Hi
, Serega Sheypak
serega.shey...@gmail.com
wrote:
Hi, in 0.94 we could use the autoFlush method on HTable.
Now HTable shouldn't be used; we are refactoring code to use Table.
Here is a note:
http://hbase.apache.org/book.html#perf.hbase.client.autoflush
When performing
AM, Serega Sheypak serega.shey...@gmail.com
wrote:
But HTable is deprecated in 0.98 ...?
2015-05-13 17:35 GMT+03:00 Solomon Duskis sdus...@gmail.com:
The docs you referenced are for 1.0. Table and BufferedMutator were
introduced in 1.0. In 0.98, you should continue using HTable
Hi, I'm going to count activity.
Is it a good idea to use HBase to count?
Could it cause hotspots?
What are the optimisations for HBase to increase counter throughput?
I've found an old letter here:
http://search-hadoop.com/m/xfABgvwuvQ1/counter+incrementsubj=Re+on+the+impact+of+incremental+counters
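The usual answer, as a hedged sketch: counters themselves are a single call (Table.incrementColumnValue), and hotspotting is avoided by salting one logical counter across N rows and summing them on read. The bucket count, family, and qualifier below are made up:

import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

class SaltedCounter {
    // Spread one logical counter over SALT_BUCKETS rows so increments land
    // on different regions instead of hammering a single hot row.
    static final int SALT_BUCKETS = 16;

    void increment(Table table, String counterName) throws IOException {
        int bucket = ThreadLocalRandom.current().nextInt(SALT_BUCKETS);
        byte[] row = Bytes.toBytes(bucket + ":" + counterName);
        table.incrementColumnValue(row, Bytes.toBytes("f"), Bytes.toBytes("c"), 1L);
    }
}

Reading the total then means summing the SALT_BUCKETS rows with a small scan or multi-get.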
node that is causing issues in the HDFS
pipeline.
cheers,
esteban.
--
Cloudera, Inc.
On Thu, Apr 23, 2015 at 1:37 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, is there any input here? What should we monitor?
2015-04-22 20:55 GMT+02:00 Serega Sheypak
Hi, is there any input here? What should we monitor?
2015-04-22 20:55 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
Here are the disk stats. Sadness appeared at 12.30 - 13.00
https://www.dropbox.com/s/lj4r8o10buv1n2o/Screenshot%202015-04-22%2020.48.18.png?dl=0
2015-04-22 20:41 GMT+02:00
Hi, we have a 10-node cluster running HBase 0.98 CDH 5.2.1.
Sometimes HBase gets stuck.
We have several apps constantly writing/reading data from it. Sometimes we
see that app response time dramatically increases. It means that an app
spends seconds to read/write from/to HBase; in 99% of cases it takes
Here are the disk stats. Sadness appeared at 12.30 - 13.00
https://www.dropbox.com/s/lj4r8o10buv1n2o/Screenshot%202015-04-22%2020.48.18.png?dl=0
2015-04-22 20:41 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
Here is an image
2015-04-22 20:40 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
Here are datanode logs from 5.9.41.237:50010;
regionserver logs were from 5.9.41.237:50010 also
EQUEST_SHORT_CIRCUIT_FDS, blockid: 1078130838, srvID:
659e6be2-8d98-458b
Hi,
what is the reason to back up HDFS? It's distributed, reliable,
fault-tolerant, etc.
NFS should be expensive in order to keep TBs of data.
What problem are you trying to solve?
2015-04-09 20:35 GMT+02:00 Afroz Ahmad ahmad@gmail.com:
We are planning to use the snapshot feature that
If I have an application that writes to an HBase cluster, can I count on
the cluster always being available to receive writes?
No, it's a CP, not an AP, system.
so everything gets in sync when the other nodes come up again
There is no hinted handoff; it's not Cassandra.
2015-04-07 14:48 GMT+02:00
<property>
  <name>hbase.master</name>
  <value>hdfs://cluster1:6</value>
</property>
what is it?
2015-04-07 16:34 GMT+02:00 sridhararao mutluri drm...@hotmail.com:
Hi,
This is my hbase-site.xml:
<configuration>
  <property>
    <name>hbase.master</name>
    <value>hdfs://cluster1:6</value>
though you lost a RS that was handling a specific region.
But because he talked about syncing nodes… I could be misreading his
initial question…
On Apr 7, 2015, at 9:02 AM, Serega Sheypak serega.shey...@gmail.com
wrote:
If I have an application that writes to a HBase cluster, can I count
replicated data. HDFS takes care of data replication.
2015-04-07 18:38 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
Marcelo, if you are comparing with Cassandra:
1. Don't think about data replication/redundancy. It's out of HBase's scope.
C* thinks about it; HBase doesn't. HBase uses
I forgot to set firstRow for the Scanner. Looks like HBase tried to scan the
whole table. The value from the PrefixFilter wasn't used. I supposed that the prefix
value could be pushed to the scanner as a starting point, but it isn't.
2015-04-06 18:45 GMT+02:00 Imants Cekusins ima...@gmail.com:
may this be related:
from exact place, it takes millis to get response.
2015-04-06 22:54 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
I forgot to set firstRow for the Scanner. Looks like HBase tried to scan the
whole table. The value from the PrefixFilter wasn't used. I supposed that the prefix
value could be pushed
Hi, I'm trying to use PrefixFilter for the RowKey.
My rowKey consists of 3 parts; actually, it's composite.
I provide the first part of the key to scan all rows starting with the prefix. There
should be fewer than 10 rowkeys for each prefix, since the prefix is an md5 hash.
I have itests for this part of code, it
Looks like I didn't set startRow for the scanner...
2015-04-06 17:04 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
Hi, I'm trying to use PrefixFilter for the RowKey.
My rowKey consists of 3 parts; actually, it's composite.
I provide the first part of the key to scan all rows starting from
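The fix the thread converges on, as a sketch: seed the scan with the prefix as its start row, so the PrefixFilter doesn't make HBase walk the table from the first region:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;

Scan scanForPrefix(byte[] prefix) {
    Scan scan = new Scan();
    scan.setStartRow(prefix);                 // jump straight to the prefix
    scan.setFilter(new PrefixFilter(prefix)); // stops once rows pass the prefix
    return scan;
}

Newer clients also have Scan.setRowPrefixFilter(prefix), which sets both the start and stop row in one call.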
Sean, great to see you here.
2015-03-26 21:06 GMT+03:00 anil gupta anilgupt...@gmail.com:
Congrats, Sean.
On Thu, Mar 26, 2015 at 10:27 AM, Nick Dimiduk ndimi...@gmail.com wrote:
Congratulations Sean! Nice work.
On Thu, Mar 26, 2015 at 10:26 AM, Andrew Purtell apurt...@apache.org
Hi, I have low-cost hardware: 2 HDD, 10 nodes with HBase 0.98 CDH 5.2.1.
I have several apps that read/write to HBase using the Java API.
Sometimes I see response times rise from a normal 30-40 ms to 1000-2000
ms or even more.
There are no MapReduce jobs running at that time. But there is a bulk load
Hm... So is the client or the server running OpenJDK?
On Thursday, March 19, 2015, Alok Singh wrote:
Looks like the IPv6 address is not being parsed correctly. Maybe related
to: https://bugs.openjdk.java.net/browse/JDK-6991580
Alok
On Wed, Mar 18, 2015 at 3:13 PM, Serega Sheypak
serega.shey
PM, Serega Sheypak
serega.shey...@gmail.com wrote:
Hi, I'm trying to use HBaseStorage to read data from HBase:
1. I persist something to HBase each day using the hbase-client Java API
2. using HBaseStorage via Oozie
Now I failed to read the persisted data using a Pig script via HUE or plain
pig.
I
with a very simple test and the problem is very
easy to reproduce with jdk8 or jdk7
cheers,
esteban.
--
Cloudera, Inc.
On Thu, Mar 19, 2015 at 4:56 AM, Serega Sheypak serega.shey...@gmail.com
wrote:
I can say 100% that we are not using OpenJDK or IPv6.
I see that people have felt the same pain before.
Hi, I'm trying to use HBaseStorage to read data from HBase:
1. I persist something to HBase each day using the hbase-client Java API
2. using HBaseStorage via Oozie
Now I failed to read the persisted data using a Pig script via HUE or plain Pig.
I don't have any problem reading data using the Java client API.
What
disk, bad or dying network adapter, &c. Basically HBase is giving
you a hint to go diagnose your cluster.
-n
On Fri, Mar 13, 2015 at 2:44 AM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, finally I met HBase performance problems :(
I see this message
Hi, finally I met HBase performance problems :(
I see this message:
org.apache.hadoop.hbase.regionserver.wal.FSHLog
Slow sync cost: 345 ms, current pipeline: []
in the log file;
sometimes [] contains the actual addresses of my datanodes.
What are the steps to understand why HBase is so slow?
I have 7 RS and
Did you check how many open connections each ZK server has?
My hypothesis is that you have a ZK connection leak, and the ZK server starts
to drop connections to prevent a DDoS attack once you hit the limit for open
connections.
2015-02-26 22:15 GMT+03:00 Nick Dimiduk ndimi...@gmail.com:
Can you tell
Create one HConnection for all threads and then share it.
Create an HTable in each thread using the HConnection.
Do stuff.
Close the HTable, but DO NOT close the HConnection.
It works 100%. I had pretty much the same problem, and the group helped me
resolve it the way I'm suggesting to you.
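A sketch of that recipe in code (0.98-era API; the class and table names are illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;

class UserDao {
    // Shared, thread-safe; closed only at application shutdown.
    private final HConnection connection;

    UserDao(Configuration conf) throws IOException {
        this.connection = HConnectionManager.createConnection(conf);
    }

    void save(Put put) throws IOException {
        // HTable is cheap: one per thread/request, always closed afterwards.
        HTableInterface table = connection.getTable("users");
        try {
            table.put(put);
        } finally {
            table.close(); // close the HTable, never the shared HConnection
        }
    }
}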
2015-02-27 13:23 GMT+03:00 Marcelo
Hi, we are running HBase on super-low-cost HW :)
Sometimes a random node goes down, and HBase needs time to move regions
from the failed RS.
What are the practices to:
1. minimize MTTR?
2. Is there any possibility to gracefully handle the situation when a region is
not accessible for r/w?
I can just drop
Hi, Enis Söztutar,
You wrote:
"You are right that the constructor new HTable(Configuration, ..) will
share the underlying connection if the same configuration object is used."
What does "the same" mean? Is equality checked using reference (Java ==)
or using the equals(Object other) method?
2015-02-18
You are welcome!
2015-02-17 12:07 GMT+03:00 Vineet Mishra clearmido...@gmail.com:
Thanks Serega!
Don't know how I could miss that. It's working well now!
On Tue, Feb 17, 2015 at 12:41 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
You need to open region server ports. Client
You need to open region server ports. Client directly sends put to
appropriate region server.
On Tuesday, February 17, 2015, Vineet Mishra wrote:
-- Forwarded message --
From: Vineet Mishra clearmido...@gmail.com
Date: Tue, Feb 17, 2015 at 12:32 PM
to support high throughput, so a pool of Connections may be
better?
Thanks,
Ming
-Original Message-
From: Serega Sheypak [mailto:serega.shey...@gmail.com]
Sent: Wednesday, February 04, 2015 1:02 AM
To: user
Subject: Re: managing HConnection
Hi, guys from group helped
New Relic shows 50K RPM.
Each request to the servlet == 1-3 puts/gets to HBase. I have a mixed workload.
Is that strange :) ?
2015-02-16 10:37 GMT+03:00 David chen c77...@163.com:
5 rpm? I am curious how the result is concluded?
I don't understand you.
There is a single instance of a servlet per application.
The Servlet.init method is called once. Here you can instantiate the HConnection and
solve ANY concurrency problems. HConnection is thread-safe. Just don't close
it, and reuse it.
Then just use the HConnection to get an HTable.
What problem
It can. 5 rpm, no problem.
On Monday, February 16, 2015, David chen wrote:
Sorry for the unclear representation.
My problem is whether a shared HConnection can bear too many
query requests.
in servlets.
2015-02-13 6:56 GMT+03:00 David chen c77...@163.com:
Hi Serega,
I am very interested in the reason why an application needs to create 5
HConnection instances instead of only one during servlet initialization.
At 2015-02-04 01:01:38, Serega Sheypak serega.shey...@gmail.com wrote
On Fri, Feb 13, 2015 at 8:57 AM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, really, I can share one HConnection for the whole application.
It's done by design. I have several servlets. Each servlet has 1-2
controllers working with HBase internally (put/get/etc.)
Right now
What's the problem with calling HConnectionManager.getConnection in the
Servlet.init method and passing it to your class responsible for HBase interaction?
2015-02-13 14:49 GMT+03:00 Sleiman Jneidi jneidi.slei...@gmail.com:
a single HConnection
On Fri, Feb 13, 2015 at 11:12 AM, Serega Sheypak
Hi, the guys from the group helped me a lot. I solved pretty much the same problem
(a CRUD web app):
1. Use a single instance of HConnection per application.
2. Instantiate it once.
3. Create an HTable instance for each CRUD operation and safely close it
(try-catch-finally). Use the same HConnection to create any
Does performance differ significantly if the row value size is small and
we don't have too many versions?
Assume that a pack of versions for a key is less than the recommended HFile
block (8KB to 1MB,
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/HFile.html),
which is the minimal read
Hi, as I mentioned before, devops put the wrong Java (OpenJDK 7) on Tomcat.
HBase runs on oracle-jdk-7.
I've asked them to set Oracle Java for Tomcat. The problem is gone.
2015-01-07 10:48 GMT+03:00 Serega Sheypak serega.shey...@gmail.com:
Hm, thanks, I'll check..
2015-01-06 23:31 GMT+03:00
wishes,
Wilm
On 14.01.2015 at 16:51, Serega Sheypak wrote:
Hi, I have an event-processing system which uses HBase + a pack of Tomcat
web apps as a front end.
The Tomcat web apps are identical and used for front-end load balancing.
The Tomcat apps write events to HBase.
What is a good pattern to show
no it would be 10/100/1
1 is the absolute limit. I understand that a simple thread-safe Java
collection can handle this.
2015-01-14 22:17 GMT+03:00 Serega Sheypak serega.shey...@gmail.com:
Ok, I got it, thanks.
2015-01-14 19:22 GMT+03:00 Wilm Schumacher wilm.schumac...@gmail.com
Hi, I have an event-processing system which uses HBase + a pack of Tomcat
web apps as a front end.
The Tomcat web apps are identical and used for front-end load balancing.
The Tomcat apps write events to HBase.
What is a good pattern to show the last 10/100/1000 events?
The events table schema is:
row_key=user_id
each
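One common pattern for this (a hedged sketch, not necessarily what the thread settled on): append a reversed timestamp to the user id, so the newest events sort first under each user's prefix and a scan can stop after N rows. It assumes fixed-length user ids; otherwise add a separator byte.

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.util.Bytes;

class EventKeys {
    // Rowkey = user_id bytes + (Long.MAX_VALUE - eventTs): newest event first.
    static byte[] eventRowKey(String userId, long eventTs) {
        return Bytes.add(Bytes.toBytes(userId), Bytes.toBytes(Long.MAX_VALUE - eventTs));
    }

    // Scan this user's prefix, newest first, capped at n (10/100/1000) rows.
    static Scan lastEvents(String userId, int n) {
        Scan scan = new Scan(Bytes.toBytes(userId));
        // Exclusive stop row just past the prefix; the reversed-ts suffix never starts with 0xFF.
        scan.setStopRow(Bytes.add(Bytes.toBytes(userId), new byte[] { (byte) 0xFF }));
        scan.setFilter(new PageFilter(n)); // per-region cap; fine while one user fits one region
        return scan;
    }
}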
+03:00 Shuai Lin linshuai2...@gmail.com:
Works fine for me. Maybe you can try force-refreshing the page?
On Wed, Jan 14, 2015 at 5:47 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, starting from Monday (12.01.2015) I'm getting a 404 on any page of
http://hbase.apache.org
what it could
Hi, starting from Monday (12.01.2015) I'm getting a 404 on any page of
http://hbase.apache.org
what could it be?
Sorry for the stupid question. I've tried several providers, no luck.
should have at least 12GB of free
RAM just for running things smoothly; otherwise you might want to try with
fewer VMs and more RAM each.
cheers,
esteban.
--
Cloudera, Inc.
On Sun, Jan 11, 2015 at 11:55 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, HBase was down
Hi, I have a PoC HBase cluster running on 3 VMs.
The deployment schema is:
NODE01 NN, SN, HMaster (HM), RegionServer (RS), Zookeeper server (ZK), DN
NODE02 RegionServer, DN
NODE03 RegionServer, DN
Suddenly ONLY HBase went offline, all services: HM, RS.
HDFS was working, no alerts were there.
The ZK server was
Regards,
Shuai
On Mon, Jan 12, 2015 at 3:47 AM, Serega Sheypak
serega.shey...@gmail.com
wrote:
Hi, I have a PoC HBase cluster running on 3 VMs.
The deployment schema is:
NODE01 NN, SN, HMaster (HM), RegionServer (RS), Zookeeper server (ZK),
DN
NODE02 RegionServer, DN
NODE03
yuzhih...@gmail.com:
In 022_zookeeper_metrics.png, server names are anonymized. Looks like only
one server got a high number of connections.
Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client ?
Cheers
On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak serega.shey...@gmail.com
wrote
, Serega Sheypak serega.shey...@gmail.com
wrote:
yes, one of them (random) gets more connections than the others.
9.3.1.1 is OK.
I have 1 HConnection per logical module per application, and each
ServletRequest gets its own HTable. The HTable is closed each time after the
ServletRequest is done
?
Googling, it seems like this issue comes up frequently enough. Try it
yourself. If you can't figure out something like putting a bound on the
executor, come back here and we'll try and help you out.
St.Ack
On Tue, Jan 6, 2015 at 12:10 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, yes
Hi, I'm still trying to deal with an Apache Tomcat web app and HBase
0.98.6.
The root problem is that the user thread count constantly grows. I get thousands
of live threads on the Tomcat instance. Then it dies, of course.
Please see the VisualVM thread count dynamics:
[image: inline image 1]
to be links.
But when I clicked on them, there was no response.
Can you give the links as URLs?
Cheers
On Jan 5, 2015, at 2:39 AM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, I'm still trying to deal with an Apache Tomcat web app and HBase
0.98.6.
The root problem
Hi, here is a repost with image links.
Hi, I'm still trying to deal with an Apache Tomcat web app and HBase
0.98.6.
The root problem is that the user thread count constantly grows. I get thousands
of live threads on the Tomcat instance. Then it dies, of course.
Please see the VisualVM thread count dynamics:
Hi, the problem is gone.
I did what you said :)
Thanks!
2014-12-13 22:38 GMT+03:00 Serega Sheypak serega.shey...@gmail.com:
Great, I'll refactor the code. and report back
2014-12-13 22:36 GMT+03:00 Stack st...@duboce.net:
On Sat, Dec 13, 2014 at 11:33 AM, Serega Sheypak
serega.shey
was: 200-300 ms per request
now: 80 ms per request
request = full trip from servlet to HBase and back to the response.
2014-12-15 22:40 GMT+03:00 lars hofhansl la...@apache.org:
Excellent! Should be quite a bit faster too.
-- Lars
From: Serega Sheypak serega.shey...@gmail.com
To: user user