Luoc was added already.
It's not clear to me if Néstor was also asking for an invitation, but I
sent one to them anyway.
On 7/21/19 8:11 PM, Néstor Boscán wrote:
On Sun, Jul 21, 2019 at 10:33 AM luoc wrote:
Hello,
I am interested in starting to make contributions, and I would like to request access
to the HBase Slack channels for the email address luocoo...@qq.com
Thank you.
Evelina,
I've invited you to the Slack channel, and your emails went through on both the
dev and user lists.
On Tue, Apr 16, 2019 at 2:09 PM Evelina Dumitrescu <evelina.dumitrescu@gmail.com> wrote:
Hello,
I am interested in starting to make contributions, and I would like to request access
to the HBase Slack channels for the email address evelina.dumitrescu@gmail.com.
Thank you,
Evelina
Hello,
I am interested in starting to make contributions, and I would like to request access
to the HBase Slack channels for the email address evelina.dumitrescu@gmail.com.
I initially tried to send an email to the dev list, but it didn't work.
Thank you,
Evelina
Dear all,
Many, many thanks for any help.
Some days ago our production HBase cluster ran into trouble. At about Apr 23 21:52,
a machine suddenly crashed; the regionserver and datanode on that machine also went
down. Then the handlers on about six regionservers filled up, and HBase cluster
requests rapidly went down. The cluster did not recover until we stopped those six
regionservers.
We guessed the HBase numCallsInGeneralQueue was backing up, but we increased
numCallsInGeneralQueue...
Currently there is no direct support for calling coprocessors from C++.
On Mon, May 22, 2017 at 8:31 AM, Cheyenne Forbes <cheyenne.osanu.for...@gmail.com> wrote:
Is it possible to make a request to region coprocessor endpoints when using
C++ as I would with "org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel" in
Java?
Regards,
Cheyenne O. Forbes
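For reference, here is a rough sketch of the Java-side pattern the question refers to. The table name, row key, and the commented-out MyService stub are hypothetical; a real endpoint call needs a protobuf-generated service definition deployed as a coprocessor.
{code}
// Sketch only: obtain a CoprocessorRpcChannel for the region hosting a row and
// hand it to a protobuf-generated endpoint stub (MyService is a placeholder).
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
import org.apache.hadoop.hbase.util.Bytes;

public class EndpointCallExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("my_table"))) {
      // Channel bound to the region hosting the given row key.
      CoprocessorRpcChannel channel = table.coprocessorService(Bytes.toBytes("row-key"));
      // A generated stub would then be built on top of the channel, e.g.:
      // MyService.BlockingInterface stub = MyService.newBlockingStub(channel);
      // MyResponse resp = stub.myCall(null, MyRequest.newBuilder().build());
    }
  }
}
{code}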
Hi Steen,
First of all, you need to get some review comments on the patch. Normally
we recommend posting a request on ReviewBoard
<https://reviews.apache.org/dashboard/> instead of simply attaching the
patch to the JIRA, so as to draw committers' attention. Make sure to add *hbase*
to the review group.
Hi list,
I created and attached a patch to HBASE-17817, but I am unsure about
the process from here. I was under the impression that the build
server would pick up newly attached patches and test them, thus
putting them in some sort of pipeline, but I can see that this is
probably not going to happen.
From: jeff saremi
Sent: Friday, March 3, 2017 10:34:00 AM
To: Hbase-User
Subject: Re: Need guidance on getting detailed elapsed times in every stage of
processing a request
Thanks a lot Yu
These are truly the metrics we care about at this point. It is sad to see
that such important metrics were removed from the code.
I will try to apply your patch on my own.
Hi Jeff,
If the question is simply monitoring HDFS read/write latencies, please
refer to HBASE-15160 <https://issues.apache.org/jira/browse/HBASE-15160>.
There's a patch, but it is not committed yet and probably cannot apply cleanly on
the current code base, but still, JFYI.
To get an overview of how quickly the system could respond and what might
be the root cause of the spikes, we only need to monitor the
average/p99/p999 latency of the below metrics (stages):
1. totalCallTime: time from the request arriving at the server to sending the response
2. processCallTime: time spent actually processing the call, excluding time spent waiting in the queue
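A minimal sketch of one way to watch those call-time metrics, assuming the RegionServer's JMX JSON servlet is reachable; the host, port (16030), and the sub=IPC bean name are assumptions to adjust for your deployment and version.
{code}
// Sketch: dump the *CallTime percentile metrics from a RegionServer's /jmx servlet.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class CallTimeMetrics {
  public static void main(String[] args) throws Exception {
    URL url = new URL(
        "http://regionserver-host:16030/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=IPC");
    try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        // Keep only lines mentioning totalCallTime/processCallTime/queueCallTime.
        if (line.contains("CallTime")) {
          System.out.println(line.trim());
        }
      }
    }
  }
}
{code}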
Anything would help. Thanks.
From: saint@gmail.com on behalf of Stack
Sent: Thursday, March 2, 2017 9:53:41 PM
To: Hbase-User
Subject: Re: Need guidance on getting detailed elapsed times in every stage of
processing a request
Sent: March 2, 2017 12:45:59 PM
To: Hbase-User
Subject: Re: Need guidance on getting detailed elapsed times in every stage of
processing a request
Thanks so much for the advice! Looking forward to when Tracing gets picked up
again.
From: saint@gmail.com on behalf of Stack
HBase/HTrace integration once worked but has long since rotted.
Refactorings of internals without proper respect for trace connections are
the main culprit. Updates in htrace and hdfs that need attention
reconnecting spans, etc., are another. On top of this, the zipkin project...
...or in the logs. Nothing at all.
From: jeff saremi
Sent: Tuesday, February 28, 2017 12:52:32 PM
To: user@hbase.apache.org
Subject: Re: Need guidance on getting detailed elapsed times in every stage of
processing a request
No I had not, but it looks like what I needed. Thanks Ted.
I'll see if I have any more questions after reading this.
Have you looked at:
http://hbase.apache.org/book.html#tracing
On Tue, Feb 28, 2017 at 12:37 PM, jeff saremi wrote:
I think we need to get detailed information from HBase RegionServer logs on how
a request (read or write) is processed. Specifically speaking, I need to know,
of say 100 ms spent processing a write, how much of it was spent waiting for HDFS.
What is the most efficient way of enabling this?
...getting the 60 RPC calls, the time taken to process the last regions keeps
increasing. As a result, the overall time taken by the coprocessor is a few
seconds more.
Example (RPC request → time taken in seconds):
[B.defaultRpcServer.handler=46,queue=4,port=16020]  20
[B.defaultRpcServer.handler=24,queue=0,port=16020]  10
[B.defaultRpcServer.handler=40,queue=4,port=16020]  ...
My HBase version is 0.98.17.
My key point is: should we supply a tool that can kill a big request, just like
MySQL can kill a slow query?
2016-06-15 1:50 GMT+08:00 Esteban Gutierrez:
Hi Heng,
That sounds like an issue from older versions of HBase. Can you please give...
Currently, we find that sometimes our RS handlers are occupied by some big
request. For example, when handlers read the same big block from HDFS
simultaneously, all handlers wait except the one handler that reads the block
from HDFS and puts it in the cache; the other handlers then read the block from
the cache.
Please take a look at:
https://blogs.apache.org/hbase/
You can find cases from Imgur, Bloomberg and Cask.
Cheers
On Sat, Jan 9, 2016 at 4:29 AM, Bhuvan Rawal wrote:
Hi,
I'd be grateful if someone here could direct me to case studies of HBase.
Regards,
Bhuvan
...1445810981882 with entries=7796, filesize=30.41 MB; new WAL
/var/lib/hbase/data/WALs/localhost,60020,1445796437179/localhost%2C60020%2C1445796437179.default.1445811009174
2015-10-25 22:10:09,189 ERROR [sync.2] wal.FSHLog: Error syncing, request
close of wal
java.io.IOException: java.lang.NullPointerException
        at org.apache.hadoop...
I'll take a look tomorrow if no one else has. Sorry about the errors.
On Mon, Aug 10, 2015 at 10:46 AM, Nick Dimiduk wrote:
Heya,
One of our "advertised features" in 1.1 is a shaded packaging of the client
jars. Initial patches went through, but it seems the feature is not standing
up to more thorough testing. A number of users have expressed interest in
this packaging, and there are some better test applications for verifying it.
Hi Andrew,
Thanks for sharing your thoughts. Sorry for the late reply, as I recently came
back from vacation.
I understand that HBase stores byte arrays, so it's hard for HBase to figure
out the data type. What if the client knows that all the columns in the REST
request are Strings? In that case, can we give the option of setting a request
header "StringDecoding: true"? By default, we can assume "StringDecoding: false".
Just some food for thought.
2015 at 2:20 PM, anil gupta wrote:
Hi All,
We have a String rowkey and String cell values. Still, Stargate returns the
data with Base64 encoding, due to which a user can't read the data. Is there a
way to disable Base64 encoding so that the REST request would just return Strings?
--
Thanks & Regards,
Anil Gupta
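For what it's worth, a minimal sketch of the client-side workaround: the REST gateway's JSON/XML representations carry row keys, qualifiers, and cell values Base64-encoded, so a client that knows its data are Strings can decode them itself. The sample value below is illustrative.
{code}
// Sketch: decode a Base64-encoded cell value taken from a REST (Stargate) response.
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class StargateDecode {
  public static void main(String[] args) {
    String encodedValue = "dGVzdC12YWx1ZQ==";  // e.g. the "$" field of a cell in a JSON row
    String decoded = new String(Base64.getDecoder().decode(encodedValue), StandardCharsets.UTF_8);
    System.out.println(decoded);  // prints: test-value
  }
}
{code}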
...with totals of all the hosted regions. The totalRequestCount is a mixed bag,
as Elliott mentioned, but does not cover the above two completely.
Jerry
On Tue, Apr 21, 2015 at 9:13 AM, Elliott Clark wrote:
Total request count includes admin commands like roll WAL, close region,
open region, compact, and split. So it's possible the number is higher than
the sum of the two. However, I don't know how it could be lower.
On Tue, Apr 21, 2015 at 5:18 AM, mail list wrote:
Hi, all
We are using HBase 0.98.6, and we visit the following site:
localhost:60030/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=Server
and get the following output:
"totalRequestCount" : 64637261,
"readRequestCount" : 65021869,
"writeRequestCount" : 45971121,
Why is totalRequestCount less than readRequestCount + writeRequestCount?
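A hedged sketch, using the 0.98/1.x-era Java client, of how the same counters can be pulled per server and compared against the per-region counts mentioned above; connection settings are assumed to come from the local configuration.
{code}
// Sketch: compare each server's reported request total with the sum of the
// read/write counts of the regions it hosts.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.RegionLoad;
import org.apache.hadoop.hbase.ServerLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class RequestCounts {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (HBaseAdmin admin = new HBaseAdmin(conf)) {
      ClusterStatus status = admin.getClusterStatus();
      for (ServerName sn : status.getServers()) {
        ServerLoad sl = status.getLoad(sn);
        long reads = 0, writes = 0;
        for (RegionLoad rl : sl.getRegionsLoad().values()) {
          reads += rl.getReadRequestsCount();
          writes += rl.getWriteRequestsCount();
        }
        System.out.println(sn + " total=" + sl.getTotalNumberOfRequests()
            + " perRegionReads=" + reads + " perRegionWrites=" + writes);
      }
    }
  }
}
{code}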
Sent: Wednesday, 4 February 2015 12:37 PM
Subject: request
Sir/madam,
I am unable to find an HBase JDBC driver. I need a Java servlet program to
connect to HBase so that I can insert data into HBase from my login page.
Regards,
Sunanda
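A hedged note and sketch: HBase itself does not ship a JDBC driver; from a servlet the usual route is the native Java client API (or a SQL layer such as Apache Phoenix, which does provide a JDBC driver). The table and column names below are illustrative assumptions.
{code}
// Sketch: write one row from a servlet-side helper using the native HBase client.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class LoginWriter {
  public static void insertLogin(String user, String loginTime) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("logins"))) {
      Put put = new Put(Bytes.toBytes(user));  // row key = user name (illustrative)
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("last_login"), Bytes.toBytes(loginTime));
      table.put(put);
    }
  }
}
{code}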
Looks great.
Do you want us to just move over to asciidoctor wholesale, Misty, for all
branches (0.98+)?
St.Ack
On Fri, Jan 9, 2015 at 5:04 PM, Misty Stanley-Jones <mstanleyjo...@cloudera.com> wrote:
See staged site at http://people.apache.org/~misty/book.html. Feedback
appreciated in the jira.
On Wednesday, January 7, 2015, Misty Stanley-Jones <
mstanleyjo...@cloudera.com> wrote:
I just pushed a feature branch for an AsciiDoc POC, called HBASE-11533. See
the JIRA too. Please check it out.
You can do mvn clean pre-site, and look at the docs in target/docs. You
will get a failure but that's nothing to do with this. If you run the
following, you should get no failure, and you
Hi,
I'm new to HBase. Several days ago I built a web service with HBase as the
backend. However, when I used the ab benchmark to test the performance of a
read-only workload, the result was only hundreds of requests per second, even
though the HBase cache hit ratio was 100%.
The architecture of my system is as follows: I use netty as the web framework
and use...
...the block may get loaded into the block cache if it was not already loaded.
I think it is OK to have this.
Regards
Ram
On Fri, May 23, 2014 at 7:14 AM, Jean-Marc Spaggiari <jean-m...@spaggiari.org> wrote:
Hi Vinay,
> But regarding my earlier concern, calling get in case of a delete request,
in order to check the timestamp, can we not query HBase directly using the
scanner, avoiding the CP hooks?
If you configure a CP to be called, HBase will call it. There isn't really
any way to avoid that.
Hi Jean,
I rechecked the logic regarding delete with specifying only the 'key'; there was a
small mistake in the condition. I have corrected it and am now getting the
CP hook. Thanks for that.
But regarding my earlier concern, calling get in case of a delete request, in
order to check the timestamp, can we not query HBase directly using the
scanner, avoiding the CP hooks?
2014-05-20 7:08 GMT-04:00 Vinay Kashyap:
Hi Jean,
Thanks for your detailed info. Now I understand the trace from where the CP
hook is getting called. But with this behavior of the delete request, don't you
think there is a kind of restriction on the delete request by the client? By
restriction I mean, for an application which collects some statistics...
Hi Jean,
Thanks for your information. I am using deleteColumn in my application. I will
check the behavior once by changing it to use deleteColumns, as you suggested.
But is there any difference in the CP hooks for a delete request? Because in my
CP I have implemented preDelete(), and in...
Dear all,
I am using HBase 0.96.1.1-hadoop2 with CDH 5.0.0. I have an application where I
have registered a coprocessor on my table to collect a few statistics on the
read/write/delete requests. I have implemented preGetOp, prePut and preDelete
accordingly, and it is working as expected for read/write requests. But when I
issue a delete request on the table, the coprocessor's preGetOp is called,
which skews the read request statistics. I wanted to understand why preGetOp is
being called when a delete request is issued.
Thanks and regards
Vinay Kashyap
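A rough sketch of the kind of observer described above, using the 0.96-era RegionObserver API; the class name and counters are illustrative, not the poster's actual code.
{code}
// Sketch: a RegionObserver that counts read/write/delete requests via pre-hooks.
import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class RequestStatsObserver extends BaseRegionObserver {
  private final AtomicLong reads = new AtomicLong();
  private final AtomicLong writes = new AtomicLong();
  private final AtomicLong deletes = new AtomicLong();

  @Override
  public void preGetOp(ObserverContext<RegionCoprocessorEnvironment> e,
      Get get, List<Cell> results) throws IOException {
    reads.incrementAndGet();  // per this thread, may also fire for internal gets
  }

  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> e,
      Put put, WALEdit edit, Durability durability) throws IOException {
    writes.incrementAndGet();
  }

  @Override
  public void preDelete(ObserverContext<RegionCoprocessorEnvironment> e,
      Delete delete, WALEdit edit, Durability durability) throws IOException {
    deletes.incrementAndGet();
  }
}
{code}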
Thanks for your reply Anoop,
In my application I have only one version of a row, so the requests are plain
delete requests without any concern about versions of a record.
>> we already handle this internal get op call not to invoke the pre/post Get
>> CP hook.
I am using version 0.96.1.1...
Looking at RegionCoprocessorHost.java, preGetOp() is called in:
public boolean preGet(final Get get, final List<Cell> results)
Do you have a stack trace showing that preGetOp() is called for a delete request?
Thanks
On Mon, May 12, 2014 at 9:31 PM, Vinay Kashyap wrote:
https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java#L1440
One such usage is here:
https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/RatioBasedCompactionPolicy.java#L292
It gets the sizes of all the store files. I believe that's what you are asking!
Please correct me if I'm wrong.
On Tue, Feb 25, 2014 at 12:37 PM, Upendra Yadav wrote:
Will HBase delete the old HFiles on HDFS? Since on a minor/major compaction
those files have already been merged, and merging generates a new HFile, the
old HFiles will never be used again.
On Tue, Feb 25, 2014 at 12:37 PM, Upendra Yadav wrote:
Does HBase issue a delete request for an old HFile (StoreFile) if a minor/major
compaction happens on it and generates a new HFile stored in HDFS?
For a minor compaction, how does it get the size of every HFile? Let's say we
have 20 HFiles for a CF and a minor compaction wants to run. Will it...
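As a rough illustration of the file-selection idea behind the RatioBasedCompactionPolicy link above: walking store files oldest to newest, a file is a candidate for minor compaction if its size is at most hbase.hstore.compaction.ratio (1.2 by default) times the total size of the files newer than it. This sketch shows only that ratio check; the real policy also applies min/max file counts and picks a contiguous set, and the sizes below are made up.
{code}
// Sketch: the per-file ratio check used when choosing minor-compaction candidates.
import java.util.ArrayList;
import java.util.List;

public class RatioSelectionSketch {
  public static void main(String[] args) {
    double ratio = 1.2;  // hbase.hstore.compaction.ratio
    long[] sizesOldestFirst = {100_000_000L, 30_000_000L, 10_000_000L, 9_000_000L, 8_000_000L};

    List<Integer> candidates = new ArrayList<>();
    for (int i = 0; i < sizesOldestFirst.length; i++) {
      long sumNewer = 0;
      for (int j = i + 1; j < sizesOldestFirst.length; j++) {
        sumNewer += sizesOldestFirst[j];
      }
      if (sizesOldestFirst[i] <= ratio * sumNewer) {
        candidates.add(i);  // small enough relative to the newer files
      }
    }
    System.out.println("Candidate file indexes: " + candidates);
  }
}
{code}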
Let's change it in both places. Please file issues. Let's try to minimize the
freak-out incidents when running your hbase/hadoop cluster.
Thanks Takeshi,
St.Ack
On Thu, Jan 16, 2014 at 9:57 PM, takeshi wrote:
Hi All,
Recently we got the error msg "Request is a replay (34) - PROCESS_TGS"
while using the HBase client API to put data into HBase 0.94.16 with
krb5-1.6.1 enabled. The related message follows:
{code}
[2014-01-15 09:40:38,452][hbase-tablepool-1-threa...
{code}
...there's an error, you might have partial results. Your best option is to retry.
-- Lars
From: S. Zhou
To: "user@hbase.apache.org"
Sent: Tuesday, December 31, 2013 11:17 AM
Subject: how to get the failed rows when executing a batch PUT request
I checked the Java doc on "put(List puts)" of HTableInterface and it does
not say how to get the failed rows in case exception happened (see below): can
I assume the failed rows are contained in "puts" list?
Throws:
InterruptedIOException
RetriesExhaustedWithDetailsException
Compared to the Jav
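A hedged sketch (shown with the newer Table/ConnectionFactory API) of one way to see which mutations failed in a batch put: the RetriesExhaustedWithDetailsException thrown by the client enumerates the failed Rows, their causes, and the servers involved. Table and column names here are illustrative.
{code}
// Sketch: list the failed rows after a batch put.
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchPutExample {
  public static void main(String[] args) throws Exception {
    List<Put> puts = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
      Put p = new Put(Bytes.toBytes("row-" + i));
      p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
      puts.add(p);
    }
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("my_table"))) {
      table.put(puts);
    } catch (RetriesExhaustedWithDetailsException e) {
      // Each index describes one failed mutation: the Row, the cause, and the server.
      for (int i = 0; i < e.getNumExceptions(); i++) {
        System.out.println("Failed row: " + Bytes.toString(e.getRow(i).getRow())
            + " on " + e.getHostnamePort(i) + " because " + e.getCause(i));
      }
    }
  }
}
{code}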
From Merge#run():
HBaseAdmin.checkHBaseAvailable(getConf());
LOG.fatal("HBase cluster must be off-line.");
Cheers
On Wed, Nov 20, 2013 at 9:14 AM, Asaf Mesika wrote:
Thanks! Offline as in table disabled or cluster shutdown?
On Wednesday, November 20, 2013, Tom Brown wrote:
The trade-off we make is to increase our write performance knowing it will
negatively impact our read performance. In our case, however, we write a
lot of rows that might never be read (depending on the specific deep-dive
queries that will be run), so it's an ok trade-off. However, our layout is
si
Can you please elaborate?
On Wednesday, November 20, 2013, Otis Gospodnetic wrote:
We use https://github.com/sematext/HBaseWD and I just learned
Amazon.com people are using it and are happy with it, so it may work
for you, too.
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
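For context, a minimal sketch of the key-salting idea that HBaseWD implements: prefix each otherwise hot or sequential row key with a small, stable bucket so writes spread across regions. The bucket count and key layout are illustrative, not HBaseWD's actual API.
{code}
// Sketch: prepend a hash-derived salt byte to a row key.
import java.util.Arrays;

import org.apache.hadoop.hbase.util.Bytes;

public class SaltedKey {
  private static final int BUCKETS = 16;

  static byte[] salt(byte[] originalKey) {
    // Derive a stable bucket from the original key, then prepend it.
    int bucket = Math.floorMod(Arrays.hashCode(originalKey), BUCKETS);
    byte[] salted = new byte[originalKey.length + 1];
    salted[0] = (byte) bucket;
    System.arraycopy(originalKey, 0, salted, 1, originalKey.length);
    return salted;
  }

  public static void main(String[] args) {
    byte[] key = Bytes.toBytes("customer42#1384934400000");
    System.out.println(Bytes.toStringBinary(salt(key)));
  }
}
{code}
Scans then need to fan out across all buckets and merge the results, which is the part HBaseWD packages up.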
Thanks for clearing that up.
I'm using your message to ping anyone who can assist, since it appears this use
case must affect a lot of people.
Thanks!
On Wednesday, November 20, 2013, Himanshu Vashishtha wrote:
> Re: "The 32 limit makes HBase go into stress mode, and dump all involving regions
The second bad part of this rowkey design is that some customers will have
significantly less traffic than other customers, thus in essence their regions
get written very slowly. When this happens on the same RS - bam: the slow
region's Puts cause the WAL queue to get bigger over time, since its region
never gets flushed and its edits stay in the 1st WAL file. Until when? Until we
hit the maximum number of log files permitted (32), and then regions are
flushed forcibly. When this happens, we get about 100 regions with 3k-3mb store
files. You can imagine what happens next.
The weirdest thing here is that this rowkey design is very common...