Hello,
These days I am working hard on upgrading our HBase cluster, and I would like
some advice about HBase configuration.
(My department plans to build a new HBase cluster running 2.2 and
fade out the 1.2 cluster.)
Now I need to compare the HBase configuration between 1.2 and 2.2.4.
Additional information:
I am currently using hbase-1, but I am preparing to upgrade to hbase-2.
On 2020-03-17 at 11:51, Kang Minwoo
<mailto:minwoo.k...@outlook.com> wrote:
Thank you for your kind reply.
I think the solution you gave me is really good.
I didn't know that before, so I took
it seems but is available to you in hbase1).
S
2.
https://github.com/saintstack/hbase/blob/branch-1.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java#L51
On Sun, Mar 1, 2020 at 8:49 PM Kang Minwoo
<mailto:minwoo.k...@outlook.com> wrote:
HBase versi
, Feb 24, 2020 at 8:20 PM Kang Minwoo wrote:
> Hello Users.
>
> Is there any way to check the system stop is requested in
> performCompaction over time?
>
> When the region got a close request, the region should wait there is no
> compaction and flush.
> However, in performC
Hello Users.
Is there any way to check, over time, whether a system stop has been requested
inside performCompaction?
When a region gets a close request, the region should wait until there is no
compaction or flush in progress.
However, the performCompaction method checks only periodically, based on bytes written.
If the bytes written are too
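One way to frame the question: the stop check could be driven by elapsed time as well as by bytes written, so that a close request is noticed even when write throughput is low. A minimal sketch of that idea (this is not HBase code; `StopCheck`, its method, and its thresholds are invented for illustration):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: re-check a stop flag either after enough bytes have
// been written OR after enough time has passed, whichever comes first.
class StopCheck {
    static final long CHECK_BYTES = 10 * 1024 * 1024; // re-check every 10 MB
    static final long CHECK_MILLIS = 10_000;          // ...or every 10 seconds

    static boolean shouldStop(AtomicBoolean stopRequested,
                              long bytesSinceCheck, long millisSinceCheck) {
        if (bytesSinceCheck >= CHECK_BYTES || millisSinceCheck >= CHECK_MILLIS) {
            return stopRequested.get(); // time to look at the flag
        }
        return false; // not yet time to check
    }
}
```

The point is only that a time threshold alongside the byte threshold bounds how long a slow compaction can go without noticing the stop request.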
Hello, Users.
I use HBase version 1.2.9.
However, version 1.2.9 has reached EOL, so I am preparing a major HBase version upgrade.
In my case, every client's version is 1.2.9.
There are too many clients, so I cannot upgrade the clients' major version.
Therefore, old clients using 1.2.9 will connect to the new HBase
I looked at Apache Omid and Apache Tephra.
They seem to be dead.
Are those projects still being improved?
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: Friday, January 10, 2020 15:37
To: hbase-user
Subject: Re: How to avoid write hot spot, While using cross row
nding
delete requests.
Best regards,
Minwoo Kang
From: ramkrishna vasudevan
Sent: Thursday, January 30, 2020 14:07
To: Kang Minwoo
Cc: Hbase-User; Stack
Subject: Re: Extremely long flush times
Hi Minwoo Kang
Any updates here? Were you able to overcome the issue
is negative).
>
> Last but not least, what about trying Phoenix?
>
>
>
> --
>
> Best regards,
> R.C
>
>
>
>
> From: Kang Minwoo
> Sent: 10 January 2020 12:51
> To: user@hbase.a
the boolean to be set then it resets if not the scan just
goes on .
Regards
Ram
On Fri, Jan 10, 2020 at 10:01 AM Kang Minwoo
wrote:
> Thank you for reply.
>
> All Regions or just the one?
> => just one
>
> Do thread dumps lock thread reading against hdfs every time you take
Hello, users.
I use the MultiRowMutationEndpoint coprocessor for cross-row transactions.
It has the constraint that the rows must be located in the same region.
I removed the random hash bytes from the row key.
After that, I suffered from a write hot-spot.
But cross-row transactions are a core feature in my applicatio
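Since MultiRowMutationEndpoint needs all rows of a transaction in one region, one common compromise is to derive the salt from a stable part of the key (for example, an entity id) instead of from random bytes: all rows of the same entity still share a prefix, and therefore a region, while different entities spread across regions. A hypothetical sketch (`SaltedKey`, the bucket count, and the key layout are invented for illustration):

```java
// Hypothetical row-key salting: the salt is a deterministic function of the
// entity id, so every row of one entity (one cross-row transaction) lands in
// the same bucket, while distinct entities spread over the key space.
class SaltedKey {
    static final int BUCKETS = 16;

    static String salt(String entityId, String qualifierPart) {
        int bucket = Math.floorMod(entityId.hashCode(), BUCKETS);
        return String.format("%02d-%s-%s", bucket, entityId, qualifierPart);
    }
}
```

With region split points placed on the bucket prefixes, writes spread across BUCKETS regions, yet the same-region constraint for one entity's rows is preserved.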
one?
Is it always inside in updateReaders? Is there a bad file or lots of files
to add to the list?
Yours,
S
On Thu, Jan 2, 2020 at 8:34 PM Kang Minwoo wrote:
> Hello Users,
>
> I met an issue that is flush times is too long.
>
> MemStoreFlusher is waiting for
Hello Users,
I am facing an issue where flush times are too long.
MemStoreFlusher is waiting for a lock.
```
"MemStoreFlusher.0"
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x7f0412bddcb8> (a
java.util.concurrent.lock
Thanks, I wanted to build the full tarball with the site docs.
So I modified my script to run three build steps.
The build was a success.
From: Stack
Sent: Tuesday, July 16, 2019 00:48
To: Hbase-User
Subject: Re: Failed to create assembly
On Mon, Jul 15, 2019 at 2:31 AM Kang
assembly:single
---
From: Kang Minwoo
Sent: Monday, July 15, 2019 18:31
To: user@hbase.apache.org
Subject: Failed to create assembly
Hello Users,
I tried to build HBase 2.1.5, but I got a failure in the Apache HBase - Assembly
project.
The error message is below.
---
[INFO
Hello Users,
I tried to build HBase 2.1.5, but I got a failure in the Apache HBase - Assembly
project.
The error message is below.
---
[INFO] Apache HBase - External Block Cache SUCCESS [ 1.750 s]
[INFO] Apache HBase - Assembly FAILURE [ 11.643 s]
[INFO] Apach
aded/hbas
> e-shaded-client/target/maven-shared-archive-resources/META-INF/LICENSE
>
> And then search for the details around the text "ERROR"
>
> On Fri, Jul 12, 2019, 00:46 Kang Minwoo wrote:
>
> > I try to build HBase 2.1.5
> > error is..
> >
> > [
using)
On Thu, Jul 11, 2019 at 11:38 PM Kang Minwoo wrote:
>
> Hello, User.
>
> While I build HBase from the source. I got an error that is License errors
> detected, for more detail find ERROR in
> /hbase-shaded/hbase-shaded-client/target/maven-shared-archive-resources/META
Hello, User.
While building HBase from source, I got an error: "License errors
detected, for more detail find ERROR in
/hbase-shaded/hbase-shaded-client/target/maven-shared-archive-resources/META-INF/LICENSE".
My build environment is CentOS 6.3.
The command is mvn -DskipTests -Dslf4j.version
ounds like a bug.
On Tue, May 28, 2019 at 9:39 PM Kang Minwoo wrote:
> Hello, Users.
>
> I use JBOD for data node. Some times the disk in the data node has a
> problem.
>
> The first time, I shut down all instance include data node and region
> server in the machine that has a
Hello, Users.
I use JBOD for the data nodes. Sometimes a disk in a data node has a problem.
At first, I shut down every instance, including the data node and the region
server, on the machine that had the disk problem.
But that is not a good solution, so I improved the process.
When I detect a disk problem in
configs, so usually the
code will be
HTableDescriptor htd = admin.getTableDescriptor(tableName);
htd.setCoprocessor or htd.setValue
admin.modifyTable(htd);
Kang Minwoo wrote on Wed, May 15, 2019 at 10:51 AM:
> Thanks! I don't know that.
> HBaseAdmin.modifyTable method looks like overwrite t
we do not provide such method... Maybe it is a bit difficult to
control the uploading part?
Kang Minwoo wrote on Tue, May 14, 2019 at 3:41 PM:
> Thank you for your reply.
>
> I tried to update the table descriptor using set
> HTableDescriptor#setValue(byte[], byte[]).
> the table descriptor
place, and then enable the table.
Or another way is to upload the coprocessor jar to another place, and
update the table descriptor to point to the new place. I think this could
be done by code, as you can completely replace the old coprocessor config.
Not sure if this is easy to do through shell.
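For the shell route mentioned above, a sequence like the following may work. Everything here is a placeholder (table name, jar path, class name, priority), and the attribute name `coprocessor$1` depends on which slot the old coprocessor actually occupies in the table descriptor:

```
hbase> disable 'my_table'
hbase> alter 'my_table', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
hbase> alter 'my_table', 'coprocessor' => 'hdfs:///cp/v2/my-cp-2.0.jar|com.example.MyObserver|1001|'
hbase> enable 'my_table'
```

The key detail, as the reply notes, is that the new jar sits at a new path, so the region server's classloader is not asked to reload the same class from the same location.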
Hello Users,
When I load a dynamic coprocessor, if the table already has a coprocessor of
the same class, the coprocessor fails to load, because the same class cannot
be loaded twice.
So I have to unload the old coprocessor before loading the new version.
But the coprocessor has a mission-critical
/browse/HBASE-17170
On Tue, May 7, 2019 at 10:33 AM Josh Elser wrote:
> Sounds like a bug to me.
>
> On 5/7/19 5:52 AM, Kang Minwoo wrote:
> > Why do not use "doNotRetry" value in RemoteWithExtrasException?
> >
> > ________
> >
Why is the "doNotRetry" value in RemoteWithExtrasException not used?
____
From: Kang Minwoo
Sent: Tuesday, May 7, 2019 18:23
To: user@hbase.apache.org
Subject: Why HBase client retry even though AccessDeniedException
Hello User.
(HBase version: 1.2.9)
Rece
Hello User.
(HBase version: 1.2.9)
Recently, I have been testing DoNotRetryIOException.
I expected that when a RegionServer sends a DoNotRetryIOException (or
AccessDeniedException), the client would not retry.
But in Spark or MR, the client retries even though it receives
AccessDeniedException.
Here is a cal
?
On Mon, Mar 11, 2019 at 04:22, Kang Minwoo
wrote:
> Hello Users.
>
> ---
> HBase version is 1.2.9
> ---
>
> I wonder this region operation is intended.
>
> I set "hbase.regionserver.optionalcacheflushinterval" slightly shorter
> than the defa
Hello Users.
---
HBase version is 1.2.9
---
I wonder whether this region operation is intended.
I set "hbase.regionserver.optionalcacheflushinterval" slightly shorter than the
default setting.
So a CF with old edits is flushed after a random delay.
If the flush queue has a flush request due to old edits, a fl
Hello, Users.
I wonder what the benefit of using the HBase Spark Connector instead of
TableInputFormat is.
Best regards,
Minwoo Kang
I found what the problem is.
It is because of HBASE-18665 [1].
Best regards,
Minwoo Kang
[1]: https://issues.apache.org/jira/browse/HBASE-18665
From: Kang Minwoo
Sent: Thursday, February 28, 2019 11:33
To: user@hbase.apache.org
Subject: Re: HBase client spent most
a Region from meta should not take more than a second.
On 2/27/19 12:34 AM, Kang Minwoo wrote:
> MetaScan is so slow.
> When I invoked `regionLocator.getAllRegionLocations()` method, It throw
> `org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the
> loc
MetaScan is very slow.
When I invoked the `regionLocator.getAllRegionLocations()` method, it threw an
`org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the
locations` exception.
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: 2019. 2
something to do.
You should include threads like these from your analysis.
On 2/26/19 8:32 AM, Kang Minwoo wrote:
> Hello Users,
>
> I have a question.
>
> My client complains to me, HBase scan spent too much time.
> So I started to debug.
>
> I profiled the HBase Client
Hello Users,
I have a question.
My client complains to me that HBase scans take too much time.
So I started to debug.
I profiled the HBase client application using hprof.
The app spent most of its time in the stack trace below.
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:Unknown
backporting the patch to branch-1. There are just a few rejects. Would
you be up for creating a backport issue and attaching a version of this
patch that fits branch-1?
Thanks,
S
On Thu, Jul 19, 2018 at 10:12 PM Kang Minwoo
wrote:
> Hello, Users
>
> Our filter is row key filter. So scan time
Hello, Users
Our filter is a row-key filter, so the scan time limit does not work.
The related issue is https://issues.apache.org/jira/browse/HBASE-19818
Is it possible to backport HBASE-19818 to the 1.2.x branch?
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: 2018
Hello,
I am not clear about heartbeats.
After HBase introduced progress heartbeats for long-running scanners [1], I
thought the client would no longer get SocketTimeoutException, because the
client knows that the scanner on the region server is still working.
But when I executed a scan with a filter (many consecutive filtere
hbase.client.scanner.timeout.period and
hbase.rpc.timeout?
Please refer to our refguide
<http://hbase.apache.org/book.html#config_timeouts> or HBASE-17449
<http://hbase.apache.org/book.html#config_timeouts> for details. Hope this
information helps.
Best Regards,
Yu
On 16 July 2018 at 14:20, Kang Minwoo wro
Hello, All.
What is the difference between hbase.client.scanner.timeout.period and
hbase.rpc.timeout?
If I don't want a timeout during a long-running scan, should I leave
hbase.client.scanner.timeout.period unset?
Best regards,
Minwoo Kang
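As a rough distinction from the refguide: hbase.rpc.timeout bounds a single RPC call, while hbase.client.scanner.timeout.period bounds how long the client (and the server-side scanner lease) will wait around scanner calls. A hypothetical hbase-site.xml fragment showing where these live (the 60000 ms values are illustrative, not recommendations):

```
<!-- Illustrative values only; tune for your workload. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>60000</value> <!-- upper bound (ms) for one RPC call -->
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000</value> <!-- timeout (ms) around scanner next() calls and the lease -->
</property>
```

With scan heartbeats (HBASE-13090 and related work), long filter-heavy scans can stay within these limits without raising the values to extremes.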
thread is running (or stuck) somewhere, so the
close region thread can't obtain the write lock. You can look closely in
your thread dump.
The handler thread you pasted above is just a thread that can't obtain the
read lock, since the close thread is trying to get the write lock.
Best Regards
Allan Yang
Hello.
Occasionally, when closing a region, the RS_CLOSE_REGION thread is unable to
acquire a lock and stays in the WAITING state.
(These days, the cluster load has increased.)
So the region's PENDING_CLOSE state persists.
The thread holding the lock is an RPC handler.
If you have any good tips on mov
Hello, Everyone
When I check the HBase compactionQueueLength metric, some RegionServers'
compactionQueueLength is too high.
So I checked the RegionServer log.
There is "regionserver.CompactSplitThread: Small Compaction requested: system;
Because: MemStoreFlusher; compaction_queue=(8034:1), split_que
(tableName), scan)
rdd.count()
or use a Spark-HBase connector which encapsulates the details
Regards
On Sat, Jun 9, 2018 at 8:48 AM, Kang Minwoo wrote:
> 1) I am using just InputFormat. (I do not know it is the right answer to
> the question.)
>
> 2) code snippet
>
To: hbase-user
Subject: Re: Odd cell result
Which connector do you use for Spark 2.1.2 ?
Is there any code snippet which may reproduce what you experienced ?
Which hbase release are you using ?
Thanks
On Fri, Jun 8, 2018 at 1:50 AM, Kang Minwoo wrote:
> Hello, Users
>
> I recent
Hello, Users
I recently met an unusual situation: a cell result that does not contain the
column family.
I thought the cell was the smallest unit in which data could be transferred
in HBase.
But a cell without a column family means the cell is not the smallest unit.
Am I wrong?
It occurred in
mapreduce you can create a custom TableInputFormat that generates one
split per region (or per prefix) with the salted ranges and pass that to
your job configuration (e,g, for mapduce or spark).
On Thu, Jun 7, 2018 at 4:05 AM, Kang Minwoo wrote:
> Sorry for the late reply.
>
> The row key str
: Re: How to improve HBase read performance.
HBase performance is highly dependent on the query and row-key format.
Can you share a few row keys and the query format? Also, what encoding are
you using?
On Thu, May 24, 2018 at 8:38 PM, Kang Minwoo
wrote:
> 5B logs a day?
> => Yes, 5B/day
>
>
I left a comment on JIRA.
( https://issues.apache.org/jira/browse/HBASE-15871 )
Best regards,
Minwoo Kang
From: Sean Busbey
Sent: Tuesday, May 29, 2018 23:12
To: user@hbase.apache.org
Subject: Re: can not write to HBase
On Tue, May 29, 2018 at 1:25 AM, Kang
orean. ^^)
-
https://www.evernote.com/shard/s167/sh/39eb6b44-25e7-4e61-ad2a-a0d1b076c7d1/159db49e3e49b189
Best regards,
Jeongdae Kim
김정대 드림.
On Thu, May 24, 2018 at 1:22 PM, Kang Minwoo
wrote:
> I have a same error on today.
> thread dump is here.
>
>
>
> Thread
gards,
Minwoo Kang
From: saint@gmail.com on behalf of Stack
Sent: Thursday, May 24, 2018 01:33
To: Hbase-User
Subject: Re: How to improve HBase read performance.
On Wed, May 16, 2018 at 7:30 PM, Kang Minwoo
wrote:
> Here is information.
>
> store about 5 billion a day.
>
,
Minwoo Kang
From: Kang Minwoo
Sent: Wednesday, May 23, 2018 16:53
To: Hbase-User
Subject: Re: can not write to HBase
In HRegion#internalFlushCacheAndCommit
There is following code.
synchronized (this) {
notifyAll(); // FindBugs NN_NAKED_NOTIFY
}
one
In HRegion#internalFlushCacheAndCommit
There is following code.
synchronized (this) {
notifyAll(); // FindBugs NN_NAKED_NOTIFY
}
one question.
Where is the lock acquired?
Best regards,
Minwoo Kang
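On the question itself: `synchronized (this) { ... }` is where the lock is acquired. The JVM requires the caller of notifyAll() to hold that object's monitor, which is why the naked notify is wrapped in a synchronized block. A small self-contained demonstration (the class and method names are invented):

```java
// Demonstrates that notifyAll() needs the object's monitor: calling it
// outside a synchronized block throws IllegalMonitorStateException, while
// `synchronized (lock)` acquires the monitor and makes the call legal.
class NotifyDemo {
    static boolean notifyWithoutLockFails() {
        Object lock = new Object();
        boolean threw = false;
        try {
            lock.notifyAll(); // monitor not held: must throw
        } catch (IllegalMonitorStateException e) {
            threw = true;
        }
        synchronized (lock) {
            lock.notifyAll(); // monitor held here: succeeds
        }
        return threw;
    }
}
```

So in internalFlushCacheAndCommit, the `synchronized (this)` statement itself is the lock acquisition; there is no earlier lock the notify depends on.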
From: Kang Minwoo
Sent: May 23, 2018
.
Best Regards,
Yu
On 23 May 2018 at 14:19, Kang Minwoo wrote:
> @Duo Zhang
> This means that you're writing too fast and memstore has reached its upper
> limit. Is the flush and compaction fine at RS side?
>
> -> No, flush took very long time.
> I attach code that took
rver/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L2424-L2508
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: Wednesday, May 23, 2018 15:16
To: user@hbase.apache.org
Subject: Re: can not write to HBase
I am using a salt to prevent write hotspots.
A
regions for that table?
Sent from my iPhone
> On May 22, 2018, at 9:52 PM, Kang Minwoo wrote:
>
> I think hbase flush is too slow.
> so memstore reached upper limit.
>
> flush took about 30min.
> I don't know why flush is too long.
>
limit. Is the flush and compaction fine at RS side?
2018-05-23 10:20 GMT+08:00 Kang Minwoo :
> attach client exception and stacktrace.
>
> I've looked more.
> It seems to be the reason why it takes 1290 seconds to flush in the Region
> Server.
>
> 2018-05-23T07:24:31
2018-05-23 8:17 GMT+08:00 Kang Minwoo :
> Hello, Users
>
> My HBase client does not work after print the following logs.
>
> Call exception, tries=23, retries=35, started=291277 ms ago,
> cancelled=false, msg=row '{row}' on table '{table}' at region={region},
&
Hello, Users
My HBase client does not work after printing the following logs.
Call exception, tries=23, retries=35, started=291277 ms ago, cancelled=false,
msg=row '{row}' on table '{table}' at region={region}, hostname={hostname},
seqNum=100353531
There are no special logs in the Master and Regi
his ticket: https://issues.apache.org/jira/browse/HBASE-20459 was fixed
> in
> the latest version of HBase, upgrading to latest may help with performance
>
> On Wed, May 16, 2018 at 3:55 AM, Kang Minwoo
> wrote:
>
> > Hi, Users.
> >
> > I store a lot of logs in HBase
Hi, Users.
I store a lot of logs in HBase.
However, the read speed for the logs is too slow compared to the Hive ORC format.
I know that HBase is slower than Hive ORC files.
The problem is that it is far too slow: HBase is about 6 times slower.
Is there a good way to speed up HBase's reading s
Hello Users,
I am looking forward to the release of HBase 1.2.7.
Do you know when 1.2.7 will be released?
Best regards,
Minwoo Kang
Hello, All
I changed the Hadoop NameNode manually, for reasons of my own.
After that, some region servers went down.
The error logs are below.
HBase version: 1.2.6
Hadoop version: 2.7.3
Caused by: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot
create file ${hbase hdfs path}. Name node is in s
Hello Users,
These days, I am setting up a new HBase cluster.
While testing the HBase cluster, I found that the compaction queue constantly
increases (over 9000).
I am worried about this situation.
I would appreciate your advice.
Best regards,
Minwoo Kang
error occurred
pastebin of more of the region server prior to the StackOverflowError
(after redaction)
release of hadoop for the hdfs cluster
non-default config which may be related
Thanks
On Sat, Jan 6, 2018 at 4:36 PM, Kang Minwoo wrote:
> Hello,
>
> I have met StackOverflowError
Hello,
I have encountered a StackOverflowError in a region server.
The detailed error log is here...
HBase version is 1.2.6
DAYS:36,787 DEBUG [regionserver/longCompactions]
regionserver.CompactSplitThread: Not compacting xxx. because compaction request
was cancelled
DAYS:36,787 DEBUG [regionserver/shortCompactions
A team member is trying to resolve this issue.
He found that it is a Hadoop issue.
He is testing purging the FDs.
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: Friday, August 11, 2017 11:18:58 AM
To: user@hbase.apache.org
Subject: Region Server does not close FD
Hello, HBase Users.
These days, my team has upgraded HBase to version 1.2.6 and is testing it in
our product.
In the meantime, my team has been improving fault handling on the physical
machines.
My servers use JBOD, so my team decided to use the data node volume failure
thresholds feature of HDFS.
My team set an option that is data n
();
for (String replicator: replicators) {
replicators seems to be null.
Can you log a JIRA (and attach redacted log if possible) ?
Cheers
On Thu, Jul 6, 2017 at 5:10 PM, Kang Minwoo wrote:
> I am using 1.2.5 (revision=d7b05f79dee10e0ada614765bb354b93d615a157)
>
> Yes,
I am using 1.2.5 (revision=d7b05f79dee10e0ada614765bb354b93d615a157)
Yes, I see the NPE repeatedly.
It occurs every minute.
Should I fix it?
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: Thursday, July 6, 2017 9:29:35 AM
To: user@hbase.apache.org
Subject
Hello, HBase Users.
While watching the HMaster log, I found NullPointerException logs.
2017-07-06 09:05:02,579 DEBUG [,1498445640728_ChoreService_1]
cleaner.CleanerChore: Removing: hdfs://*** from archive
2017-07-06 09:05:02,585 ERROR [,1498445640728_ChoreService_2]
hbase.ScheduledCh
#L110)?
Which release are you using ?
Maybe related: HBASE-13592
On Sun, Mar 19, 2017 at 8:22 PM, Kang Minwoo
wrote:
> Yes, It happened in my cluster.
>
>
> [RegionServer LOG]
>
> 2017-03-20 11:02:21,466 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog:
> Couldn't
ption occur when region server is closing
(CloseRegionHandler.java#L110)?
See HBASE-4270
Did you see this happen in your cluster ?
If so, mind sharing related log snippets ?
Cheers
On Sun, Mar 19, 2017 at 7:50 PM, Kang Minwoo
wrote:
> Hello!
>
> In this code (https://github.com/apa
Hello!
In this code
(https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java#L110),
the region server can raise an IOException when it is closing.
Why does an IOException occur here?
If I want to know the specific reason, where
Hi, I am Minwoo.
I am interested in HBase architecture.
So these days I am reading Architecting HBase Applications.
Hi, I am Minwoo.
I am interested in HBase architecture.
So I read Architecting HBase Applications.
The book says that, in HBase 2.0, work is in progress to reduce its
dependency on ZooKeeper.
I want to know the reason.
Why is work in progress in HBase 2.0 to reduce its dependency on ZooKeeper?
Previously,
for unknown reasons.
The 'Premature EOF from inputStream' log was at INFO level - it may not be
critical.
Please pastebin more of region server log when you reply.
Was there long pause prior to 2017-02-08 11:08:11,878 ?
Thanks
On Thu, Feb 9, 2017 at 5:59 PM, Kang Minwoo wrote:
>
aster around this time ?
Please also check hdfs health.
> On Feb 9, 2017, at 3:44 AM, Kang Minwoo wrote:
>
> The DataNode caused an java.io.IOException: Premature EOF from inputStream
> error.
>
> This error seems to have killed the region server.
>
> One second after this e
snippet of region server log pertaining to the
attempted open of the region.
Thanks
On Tue, Feb 7, 2017 at 7:17 PM, Kang Minwoo wrote:
> Yes. I agree with you.
> But I can not upgrade right away.
>
> The problem is that region servers that have received a particular region
> conti
:04 PM, Kang Minwoo wrote:
> The version I use is very low.
>
> hbase: 0.96.2
> hadoop: 2.4.1
>
> I did not run hbck.
>
> Thanks
>
> From: Ted Yu
> Sent: Tuesday, February 7, 2017 10:40:28 AM
> To: user@hbase.apache.org
> Subject: Re:
rom
> regionserver and from active HBase master to help debug the root cause.
>
>
>
> On Mon, Feb 6, 2017 at 5:27 PM Kang Minwoo
> wrote:
>
> > Hello,
> >
> >
> > My region servers die at regular intervals for unknown reasons.
> > I restarted HBase and r
Hello,
My region servers die at regular intervals for unknown reasons.
I restarted HBase, and the region servers continued to die.
I solved it by eliminating the old WALs.
Now I'm going through the logs and trying to find the cause.
But I do not know where to look.
Please let me know if I need to watch
ction sharing underneath.
FYI
On Fri, Feb 3, 2017 at 10:45 PM, Kang Minwoo
wrote:
> Good morning.
>
>
> I'm using hbase-client 1.2.4 version.
>
> My client environment is multithreaded.
>
> I shared a connection, but I want to connection pooling to improve
> perfo
Good morning.
I'm using hbase-client version 1.2.4.
My client environment is multithreaded.
I share one connection, but I would like connection pooling to improve
performance.
Is there a good guide or best practice for connection pooling?
Should I simply use Apache Commons Pool?
Let me know
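For hbase-client 1.2.x, the usual guidance is that a single `Connection` from `ConnectionFactory.createConnection()` is heavyweight, thread-safe, and meant to be shared, with cheap per-thread `Table` instances obtained from it, so an external connection pool is generally unnecessary. If you still want a small bounded pool for some resource, a minimal sketch without Commons Pool might look like this (`SimplePool` is invented for illustration and is not HBase-specific):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal fixed-size object pool backed by a BlockingQueue; illustrative only.
class SimplePool<T> {
    private final BlockingQueue<T> idle;

    SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pre-create all pooled objects
        }
    }

    T borrow() {
        return idle.poll(); // returns null immediately when the pool is exhausted
    }

    void release(T obj) {
        idle.offer(obj); // return the object for reuse
    }
}
```

With HBase, though, the share-one-Connection pattern avoids the bookkeeping above entirely, since the client already multiplexes RPCs over shared sockets underneath.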
"hbase.client.rpc.compressor" option.
My HBase Java client works well.
Thanks a lot for your comment; it was helpful for debugging.
I have a suggestion: if HBase showed the raw exception (not wrapped), it
would be more helpful for debugging.
Yours sincerely,
Minwoo
____
From:
x27;t guarantee compatibility between that client
and the HBase server you're running, so the only way to solve the issue
will probably be to upgrade your client dependencies.
On Tuesday, August 30, 2016, Kang Minwoo wrote:
> Because I have a number of hbase cluster.
>
> They ar
OutOfOrderScannerNextException
Any reason to not use the 1.2.2 client library? You're likely hitting a
compatibility issue.
On Tuesday, August 30, 2016, Kang Minwoo wrote:
> Hi Dima Spivak,
>
>
> Thanks for interesting my problem.
>
>
> Hbase server version is 1.
ion?
On Tuesday, August 30, 2016, Kang Minwoo wrote:
> Hello Hbase users.
>
>
> While I used hbase client libarary in JAVA, I got
> OutOfOrderScannerNextException.
>
> Here is stacktrace.
>
>
> --
>
> java.lang.RuntimeException: org.apache.hadoop.hbase.Do
Hello Hbase users.
While using the HBase client library in Java, I got an
OutOfOrderScannerNextException.
Here is the stack trace.
--
java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException:
Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
org.apach