Also check: what is the heap size of your RS?
When you say hTable.put(), how many such threads are there?
What is your region size? Are your regions splitting continuously due to
heavy load?
Regards
Ram
-Original Message-
From: yuzhih...@gmail.com [mailto:yuzhih...@gmail.com]
Sent:
, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Is it a good idea to create an HTable instance for B and do puts in my mapper? I might try this idea.
Yes, you can do this. Maybe in the same mapper you can do a put to table B. This is how we have tried loading data into another table
, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
AFAIK, RPC cannot be avoided even if Region A and Region B are on the same RS, since these two regions are from different tables. Am I right?
No... suppose your Region A and Region B of different tables are collocated,
Anil Gupta
On Wed, Oct 24, 2012 at 10:16 PM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Just out of curiosity,
The secondary index is stored in table B as rowkey B -- family:rowkey A.
What is rowkey B here?
1. Scan the secondary table by using prefix filter
Just out of curiosity,
The secondary index is stored in table B as rowkey B -- family:rowkey A.
What is rowkey B here?
1. Scan the secondary table by using prefix filter and startRow.
How is the startRow determined for every query ?
Regards
Ram
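The lookup pattern discussed in this thread (rowkey B = indexed value concatenated with rowkey A, then a prefix scan starting at the value) can be sketched in plain Java with a sorted map. All class and method names below are illustrative; this is not HBase API code, just a model of the idea.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the secondary-index lookup: rowkey B is "value|rowkeyA",
// so a prefix scan whose startRow is the indexed value returns every
// main-table rowkey carrying that value. Names are illustrative only.
public class SecondaryIndexSketch {
    private final TreeMap<String, String> indexTable = new TreeMap<>();

    // Index entry: rowkey B = value + separator + rowkey A.
    public void put(String rowkeyA, String value) {
        indexTable.put(value + "|" + rowkeyA, rowkeyA);
    }

    // Prefix scan: the startRow is simply the indexed value itself.
    public List<String> lookup(String value) {
        String start = value + "|";
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, String> e : indexTable.tailMap(start).entrySet()) {
            if (!e.getKey().startsWith(start)) break; // scanned past the prefix
            result.add(e.getValue());
        }
        return result;
    }
}
```

This also answers how the startRow is determined for every query: it is the indexed value being looked up.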
-Original Message-
From: Anoop Sam
., 오후 6:07, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com 작성:
What is your need? Do you want to scan the rows, i.e. the data in the table? Or do you want the start and end keys? Actually, for a single region, empty bytes represent the start and end key. So I am not getting what you want from
Can you tell me how the splitkeys are formed when the table was created?
Or are there no splits at all for your table? If there are no splits, then you will get an empty start and end key.
Regards
Ram
-Original Message-
From: Henry JunYoung KIM [mailto:henry.jy...@gmail.com]
Sent:
a start-key and an end-key, I need to use just a scanner, right?
2012. 10. 19., 오후 5:51, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com 작성:
Can you tell me how the splitkeys are formed when the table was created?
Or are there no splits at all for your table? If there are no splits
to look for ??
Thanks
Kiran
On Thu, Oct 18, 2012 at 11:55 AM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
HBASE-6033 does the work that you ask for. It is currently in Trunk
version
of HBase.
Regards
Ram
-Original Message-
From: kiran
A simple test would be to just right some 10 rows
I meant to say write some 10 rows (not right).
Regards
Ram
-Original Message-
From: Ramkrishna.S.Vasudevan [mailto:ramkrishna.vasude...@huawei.com]
Sent: Thursday, October 18, 2012 2:05 PM
To: user@hbase.apache.org
Subject: RE
For 1, I know the cluster began to split logs and recover the data on the crashed RegionServer; will the recovery operation block all the requests from the client side?
Ideally it should not. But if your client was generating data for the regions that were dead at that time, then the client
Hi Yousuf
The client caches the locations of the regionservers, so after a couple
of
minutes of the experiment running, it wouldn't need to re-visit
ZooKeeper,
I believe. Correct me if I am wrong please.
Yes you are right.
Regards
Ram
-Original Message-
From: Yousuf Ahmad
(tableName)).hasCoprocessor(className)) {
    System.err.println("YIPIE!!!");
}
hAdmin.enableTable(tableName);
}
hAdmin.close();
}
Thanks,
Anil Gupta
On Wed, Oct 17, 2012 at 9:27 PM, Ramkrishna.S.Vasudevan
/IOException.html?is-external=true
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#modifyTable%28byte[],%20org.apache.hadoop.hbase.HTableDescriptor%29
Thanks,
Anil Gupta
On Thu, Oct 18, 2012 at 10:23 PM, Ramkrishna.S.Vasudevan
ramkrishna.vasude
to add the
coprocessor while creating the table. That's why there was a confusion
between us.
On Thu, Oct 18, 2012 at 10:40 PM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Yes you are right. modifyTable has to be called.
public class TestClass {
private static
Thanks Jean for your update.
Regards
Ram
-Original Message-
From: Jean-Marc Spaggiari [mailto:jean-m...@spaggiari.org]
Sent: Wednesday, October 17, 2012 5:14 PM
To: user@hbase.apache.org
Subject: Re: ANN: HBase 0.94.2 is available for download
Thanks. I tried to call some MR
Hi Yun
Logically deleting KeyValue data in HBase is performed by placing a tombstone marker (via Delete() per record) or by setting TTL/max_versions (per Store). After these actions, however, the physical data is still there, somewhere in the system. Physically deleting a record in HBase is realised
Also, to see in code how the delete happens, please refer to StoreScanner.java and how ScanQueryMatcher.match() works.
That is where we decide whether a KeyValue has to be skipped because of an existing tombstone marker.
Forgot to tell you about this.
Regards
Ram
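The tombstone behaviour described above can be sketched in plain Java: a delete marker at timestamp T hides every version of the cell with a timestamp at or below T. This is only a model of what ScanQueryMatcher.match() decides; no HBase types are used and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of tombstone filtering: a delete marker at
// timestamp T masks all versions of the row with ts <= T. The real
// logic lives in ScanQueryMatcher.match(); names here are illustrative.
public class TombstoneSketch {
    public record Cell(String row, long ts, String value, boolean tombstone) {}

    // Returns the values a scan would surface for one row, skipping
    // versions hidden by the newest delete marker.
    public static List<String> scan(List<Cell> cells, String row) {
        long deleteTs = Long.MIN_VALUE;
        for (Cell c : cells) {
            if (c.row().equals(row) && c.tombstone()) {
                deleteTs = Math.max(deleteTs, c.ts());
            }
        }
        List<String> visible = new ArrayList<>();
        for (Cell c : cells) {
            if (c.row().equals(row) && !c.tombstone() && c.ts() > deleteTs) {
                visible.add(c.value());
            }
        }
        return visible;
    }
}
```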
-Original Message-
From:
Can you try starting any of the regionservers that are not connecting at all? Maybe start 2 of them.
Observe the master logs. See whether they say 'Waiting for RegionServers to checkin'.
Just to confirm, are your ZK IP and port correct throughout the cluster? If it is a multitenant cluster then you may
checked the example code
in
hbase 6496 link https://issues.apache.org/jira/browse/HBASE-6496,
which shows how to delete data before a given time, as in an on-demand specification...
Cheers,
Yun
On Wed, Oct 17, 2012 at 8:46 AM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Also
at 9:12 AM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Can you try starting any of the regionservers that are not connecting at all? Maybe start 2 of them.
Observe the master logs. See whether they say
'Waiting for RegionServers to checkin'.
Just to confirm your ZK
, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
I tried out a sample test class. It is working properly. I just have a doubt: are you doing the htd.addCoprocessor() step before creating the table? Try it that way; I hope it should work.
Regards
Ram
Which version of HBase?
The logs that you have attached refer to a different table, right? '
deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.'
And the one you are trying to drop is 'ivytest_deu'.
Regards
Ram
-Original Message-
From: 唐 颖
,Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com 写道:
Which version of HBase?
The logs that you have attached refer to a different table, right? '
deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.'
And the one you are trying to drop is 'ivytest_deu'.
Regards
Ram
that the HTableDescriptor file got deleted but META still has the entry.
Thanks!
在 2012-10-16,下午4:52,Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com 写道:
What does the 'list' command show? Does it say the table exists or
not?
What I can infer here
Hi Yun Peng
You want to know the creation time? I could see the getModificationTime() API. Internally it is used to get a store file with the minimum timestamp.
I have not tried it out. Let me know if it solves your purpose.
Just try it out.
Regards
Ram
-Original Message-
From: yun peng
AM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Hi Anil
We also do a lot of stuff with coprocessors: MasterObservers, RegionObservers and WALObservers.
Just start your master and RS in debug mode and connect remotely from Eclipse. This should
I tried out a sample test class. It is working properly. I just have a doubt: are you doing the htd.addCoprocessor() step before creating the table? Try it that way; I hope it should work.
Regards
Ram
-Original Message-
From: anil gupta [mailto:anilgupt...@gmail.com]
Sent:
and regionserver died
We will check the zk log.
On Monday, October 15, 2012, Ramkrishna.S.Vasudevan wrote:
Check your GC configurations. It seems that a full GC has happened and ZooKeeper treated that as a session expiry.
Regards
Ram
-Original Message-
From: Xiang
Hi Suresh
I would like to use startRow() and stopRow() on a scan, but these
operations
To set the start and stopRow you need to know the rowkey.
between column values DEBUG:x and y. How can I force the scan to return all these rows?
Have you set setFilterRowIfMissing(true)? By default
Check your GC configurations. It seems that a full GC has happened and ZooKeeper treated that as a session expiry.
Regards
Ram
-Original Message-
From: Xiang Hua [mailto:bea...@gmail.com]
Sent: Saturday, October 13, 2012 6:20 PM
To: user@hbase.apache.org
Subject: hmaster and
Also, see the discussions on the JIRA, which will help you come up with more specific use cases that you want to implement.
The example over there will surely help you out.
Regards
Ram
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Sunday, October 14, 2012
Hi Anil
We also do a lot of stuff with coprocessors: MasterObservers, RegionObservers and WALObservers.
Just start your master and RS in debug mode and connect remotely from Eclipse. This should be fine. Whenever the code goes to the RegionObserver or any other observer, you will automatically be able
Here oldValue will anyway have the same rowkey, right? So it should be in the same region, but as an older version.
Regards
Ram
-Original Message-
From: Jean-Marc Spaggiari [mailto:jean-m...@spaggiari.org]
Sent: Wednesday, October 10, 2012 5:24 PM
To: user@hbase.apache.org
Subject: Re:
Hi
The configurations in HBase relate to the region server and its various operations,
like region flushes, region splits, how compactions get triggered, and HFile block caching. The performance of all these operations collectively will give you a better overall performance.
Regards
Ram
Are you planning to use region splits? Can the rowkey have the deptno?
Keeping deptno in another table, i.e. a reverse mapping of deptno to empno, may be helpful too if such queries are frequent.
Regards
Ram
-Original Message-
From: Anoop Sam John [mailto:anoo...@huawei.com]
Oops!! Anoop has just replied with the same, similar to mine :)
Regards
Ram
-Original Message-
From: Anoop Sam John [mailto:anoo...@huawei.com]
Sent: Thursday, October 11, 2012 10:40 AM
To: user@hbase.apache.org
Subject: RE: Temporal in Hbase?
Hi Shumin,
start_time const_et and
If your column does not contain the given value...
If the end_time qualifier is null, the row should still be retrieved, right?
As far as I read about what a temporal database is (I am not very familiar; I just read through the wiki to know what temporal means), it is related to multi-versioning of the same row.
Reply inline
-Original Message-
From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Tuesday, October 09, 2012 10:10 AM
To: user@hbase.apache.org
Subject: HBase client
There is a suggestion on this URL
http://hbase.apache.org/book/perf.writing.html#perf.hbase.client.autofl
Seems to be a bug to me. Can you file a JIRA on this?
Regards
Ram
-Original Message-
From: Andrew Olson [mailto:noslower...@gmail.com]
Sent: Friday, October 05, 2012 2:04 AM
To: user@hbase.apache.org
Subject: Issue with column-counting filters accepting multiple versions
of a
Which version of HBase are you using?
As part of HBASE-5564, a feature was introduced to handle duplicate records in bulk load by allowing the timestamp also to be specified in the file, like how we specify the column family and table name.
If you can backport it to your version, I hope it will be helpful.
You can use MapReduce. We have a utility called ImportTsv that allows you to bulk load data from a flat file. Is this your use case?
Pls refer to http://hbase.apache.org/book.html#arch.bulk.load
Regards
Ram
-Original Message-
From: Venkateswara Rao Dokku
Hi
That is not needed; in fact it has been fixed in the latest trunk version as part of HBASE-6327.
We can backport the issue, I feel. Thanks for bringing this to our notice.
Regards
Ram
-Original Message-
From: jlei liu [mailto:liulei...@gmail.com]
Sent: Thursday, September 27, 2012
/SESSIONID_TIMELINE. Apologies for the typo. For the rest of the things, I feel Ramkrishna sir has provided a good and proper explanation. Please let us know if you still have any doubt or question.
Ramkrishna.S.Vasudevan : You are welcome sir. It's my pleasure to
share
space
explain it further? Thanks in advance.
Best Wishes
Dan Han
On Wed, Sep 26, 2012 at 10:49 PM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Just trying out here,
Is it possible for you to collocate the region of the 1st schema and
the
region of the 2nd schema so
For the NPE that you got, is the same HTable instance shared by different threads? This is a common problem users encounter when using HTable across multiple threads.
Please check and ensure that the HTable instance is not shared.
Regards
Ram
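The usual fix for the problem above is one client instance per thread. A minimal sketch of that pattern, using ThreadLocal with a plain StringBuilder standing in for the non-thread-safe client object (in real code each thread would own its own HTable, or use HTablePool in 0.9x):

```java
// Sketch of the "one instance per thread" fix: HTable is not
// thread-safe, so each thread must own its own instance. A plain
// StringBuilder stands in for the client here; illustrative only.
public class PerThreadClientSketch {
    private static final ThreadLocal<StringBuilder> CLIENT =
            ThreadLocal.withInitial(StringBuilder::new);

    // Each calling thread lazily gets, and then reuses, its own instance.
    public static StringBuilder client() {
        return CLIENT.get();
    }
}
```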
-Original Message-
From: Naveen
Hi Mohith
First of all thanks to Tariq for his replies.
Just to add on,
Basically, HBase uses ZooKeeper to know the status of the cluster, such as the number of tables enabled, disabled and deleted.
Enabled and deleted states are handled a bit differently in the 0.94 version.
ZK is used for various
Just trying out here,
Is it possible for you to collocate the region of the 1st schema and the region of the 2nd schema, so that overall the total query execution happens on a single RS and there is not much IO?
Also, when you go with coprocessors on collocated regions, the caching and rpc timeout
Can you check your zookeeper data?
Try clearing the zk data and restart the cluster.
Regards
Ram
-Original Message-
From: iwannaplay games [mailto:funnlearnfork...@gmail.com]
Sent: Tuesday, September 25, 2012 11:29 AM
To: user
Subject: Hregionserver instance runs endlessly
Hi,
: Hregionserver instance runs endlessly
How do I clear the zk data? Sorry, I am new to HBase.
On Tue, Sep 25, 2012 at 11:39 AM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Can you check your zookeeper data?
Try clearing the zk data and restart the cluster.
Regards
Ram
Hi Pankaj
If your threads are generating data in this format (0...9, 10...19, 20...29, ...), your splits also should be like
0...1
1...2
2...3 and so on, right? Maybe I am missing something here.
But the data generation that creates the rowkey and the pre
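The alignment between thread data ranges and pre-split keys can be sketched with a sorted map of region start keys. Note the zero-padded rowkeys: since HBase compares bytes, "10" would otherwise sort before "9". Everything below is illustrative, not HBase API.

```java
import java.util.TreeMap;

// Sketch of the pre-split idea: with zero-padded rowkeys "00".."29"
// and split keys "10" and "20", each thread's range (00..09, 10..19,
// 20..29) lands entirely in one region, as in the thread above.
public class PreSplitSketch {
    private final TreeMap<String, Integer> regionStarts = new TreeMap<>();

    public PreSplitSketch(String... splitKeys) {
        regionStarts.put("", 0); // the first region starts at the empty key
        int i = 1;
        for (String k : splitKeys) regionStarts.put(k, i++);
    }

    // Region index for a rowkey: the greatest start key <= the rowkey,
    // mirroring how HBase locates a region by comparing start keys.
    public int regionFor(String rowkey) {
        return regionStarts.floorEntry(rowkey).getValue();
    }
}
```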
Hi Dan
Generally, if the region distribution is not done properly as per the need, we always end up with a region server getting overloaded due to region hotspotting.
Write throughput can go down. It is not that the coprocessor performance alone is slow.
Please check if the regions are properly
For deletion, I think we need to first delete on the main table, get the rowkeys that got deleted, and apply them on the index table, i.e. form the deletes.
Here we may have to take care of the different delete types: delete, deleteColumn and deleteColumnWithVersions.
Grouping of the deletes based on
Reply Inline
-Original Message-
From: Monish r [mailto:monishs...@gmail.com]
Sent: Sunday, September 23, 2012 7:29 PM
To: user@hbase.apache.org
Subject: Clarification regarding major compaction logic
Hi guys,
I would like to clarify the following regarding major compaction.
Raised HBASE-6853 to address this issue.
Regards
Ram
-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of
Stack
Sent: Friday, September 21, 2012 10:07 AM
To: user@hbase.apache.org
Subject: Re: IllegalArgumentException when trying to split an empty
Hi John
We have encountered this problem while developing one of the features.
I suggest we raise an issue in JIRA.
The problem here is that there is a thread pool executor that tries to split the store files, and it takes the number of store files as the number of threads needed for the executor.
So for
Hi
Reply inline
Regards
Ram
-Original Message-
From: Ramasubramanian [mailto:ramasubramanian.naraya...@gmail.com]
Sent: Tuesday, September 18, 2012 1:46 PM
To: user@hbase.apache.org
Subject: About hbase metadata
Hi,
1. Where can I see the metadata of hbase?
[Ram] What do you
Hi Willy
Yes, I agree that META/ROOT recovery should happen as fast as possible.
Which version of HBase are you using?
A lot of fixes have gone into the latest versions regarding the recovery part.
You can also take a look at HBASE-6713 if you are using any of the latest versions.
If you can
://hbase.apache.org/book/perf.writing.html?
Thanks,
Jing Wang
2012/9/5 Ramkrishna.S.Vasudevan ramkrishna.vasude...@huawei.com
Hi JingWang
It is not necessary that a region split will cause GC problems. Based on your use case we may need to configure the heap space for the RS.
Coming back to region
You can use the property hbase.hregion.max.filesize. You can set this to a
higher value and control the splits through your application.
Regards
Ram
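As a sketch, such a setting would go into hbase-site.xml; the value below (100 GB) is only an illustrative example to make automatic splits rare so that the application can control splits itself:

```xml
<!-- hbase-site.xml: raise the region split threshold so automatic
     splits become rare; the application then triggers splits itself.
     107374182400 bytes = 100 GB, an example value only. -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>107374182400</value>
</property>
```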
-Original Message-
From: jing wang [mailto:happygodwithw...@gmail.com]
Sent: Wednesday, September 05, 2012 3:48 PM
To:
. When split, HBase may cause GC to 'stop the world' or some long full GC.
Our application can't accept this.
Thanks,
Jing Wang
2012/9/5 Ramkrishna.S.Vasudevan ramkrishna.vasude...@huawei.com
You can use the property hbase.hregion.max.filesize. You can set
this to a
higher value
Thanks Stack for giving a pointer to this. Yes, it does seem this property is very important.
Regards
Ram
-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of
Stack
Sent: Friday, August 31, 2012 3:55 AM
To: user@hbase.apache.org
Subject: Re:
@Lars/@Gary
Do we need to document such things? Recently someone asked me a question like this: if my endpoint implementation is very memory intensive, it just affects a running cluster, and the RS already has a huge memory heap associated with it.
So it's better we document that if your endpoint
Hi Jay
Am not pretty much clear on exactly what is the problem because I am not
able to find much difference. How you are checking the time taken?
When there are multiple scanner going parallely then there is a chance for
the client to be a bottle neck as it may not be able to handle so many
Hi David
The first approach should be better. If you know which columns you will always be retrieving, you can also use scan.addColumn(), which is much better. Maybe you have already tried this.
Regards
Ram
-Original Message-
From: David Koch
Hi
Just to add on, the HLog is just an edit log. Any transactional updates (Puts/Deletes) are just appended to the HLog. It is the scanner that takes care of the TTL part, which is calculated from the TTL configured at the column family (Store) level.
Regards
Ram
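The point above, that TTL is enforced at scan time rather than when the edit is written to the HLog, can be sketched in plain Java: the scanner simply skips cells older than the family's TTL. No HBase types are used; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of TTL enforcement at scan time: cells whose age exceeds the
// column family's TTL are skipped by the scanner, even though they may
// still exist physically until a compaction. Illustrative names only.
public class TtlScanSketch {
    public record Cell(String value, long ts) {}

    public static List<String> scan(List<Cell> cells, long nowMs, long ttlMs) {
        List<String> visible = new ArrayList<>();
        for (Cell c : cells) {
            if (nowMs - c.ts() <= ttlMs) visible.add(c.value()); // within TTL
        }
        return visible;
    }
}
```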
-Original Message-
From:
hbase.master.loadbalance.bytable - By default it is true.
Regards
Ram
-Original Message-
From: Anoop Sam John [mailto:anoo...@huawei.com]
Sent: Thursday, August 02, 2012 3:32 PM
To: user@hbase.apache.org
Subject: RE: Region balancing question
Seems this is available from 0.92
+1. Anyway, all mutations extend OperationsWithAttributes as well.
Regards
Ram
-Original Message-
From: Anoop Sam John [mailto:anoo...@huawei.com]
Sent: Thursday, August 02, 2012 10:13 AM
To: user@hbase.apache.org
Subject: RE: Retrieve Put timestamp
Currently in Append there is a
We work on hadoop 2.0.
Regards
Ram
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Thursday, July 19, 2012 1:42 AM
To: d...@hbase.apache.org
Cc: user@hbase.apache.org
Subject: Re: [poll] Does anyone run or test against hadoop 0.21, 0.22,
0.23 under HBase
From the logs I can see that though the servers are the same, their start codes are different.
We need to analyse the previous logs as well. Please file a JIRA and, if possible, attach the logs to it.
Thanks Howard.
Regards
Ram
-Original Message-
From: Howard [mailto:rj03...@gmail.com]
Sent:
Hi
What type of rowkeys are you specifying? HBase does byte comparison,
so the split that happens is correct:
1012 and 10112 will fall in the same region whereas 201 will come in the next region.
It depends on how you form the row key.
Regards
Ram
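The byte-comparison ordering above can be checked with plain lexicographic compares; for ASCII digit strings, comparing the UTF-8 bytes behaves the same way HBase's rowkey comparison does. This sketch has no HBase dependency.

```java
import java.util.Arrays;

// HBase compares rowkeys byte by byte, so numeric strings order
// lexicographically, not numerically. For ASCII digits, comparing the
// raw bytes (as below) matches that ordering.
public class RowkeyOrderSketch {
    public static int cmp(String a, String b) {
        return Arrays.compare(a.getBytes(), b.getBytes());
    }
}
```

So with a split key of "2", both "1012" and "10112" sort below it and share the first region, while "201" falls into the next one.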
-Original Message-
From: Ben
Hi
You can also check the cache hit and cache miss statistics that appear on the UI.
In your random scan, how many regions are scanned? In your gets there may be many, due to the randomness.
Regards
Ram
-Original Message-
From: N Keywal [mailto:nkey...@gmail.com]
Sent: Thursday, June 28,
I don't have sample code, but it can be done using coprocessors because they provide a lot of hooks.
HBASE-2038 will give you pointers towards that, and before that please read about coprocessors as well. This is just one way of doing it.
Regards
Ram
-Original Message-
From: Harsh Gupta
++) {
    if (results[i] instanceof KeyValue)
        if (!((KeyValue) results[i]).isEmptyColumn())
            System.out.println("Result[" + i + "]: " + results[i]); // co BatchExample-9-Dump Print all results.
}
2012/6/28, Ramkrishna.S.Vasudevan ramkrishna.vasude...@huawei.com:
Hi
You
Hi
Can you confirm that both the test runs created the same number of HFiles?
Regards
Ram
-Original Message-
From: Prakrati Agrawal [mailto:prakrati.agra...@mu-sigma.com]
Sent: Monday, June 25, 2012 12:05 PM
To: user@hbase.apache.org
Subject: RE: Enabling caching increasing the
Hi Fredric
hbase.store.delete.expired.storefile - set this property to true.
This property lets expired store files be deleted before compaction. If you are interested, you can check HBASE-5199.
It is available in 0.94 and above. Hope this helps.
Regards
Ram
-Original Message-
Hi
What about the region size and how frequently flush is happening? Are you
using default configurations only?
Regards
Ram
From: Giorgi Jvaridze [mailto:giorgi.jvari...@gmail.com]
Sent: Tuesday, June 19, 2012 7:26 PM
To: user@hbase.apache.org
Subject: Re: Decreasing write speed
And is your regionserver getting hotspotted? I.e., are all your requests targeted at one particular RegionServer, or at one particular region alone?
Regards
Ram
From: Giorgi Jvaridze [mailto:giorgi.jvari...@gmail.com]
Sent: Tuesday, June 19, 2012 7:26 PM
To: user@hbase.apache.org
Subject:
Hi Benjamin
Can you post some logs? Can you see any message in the master logs saying regions are in transition or something similar?
Regards
Ram
-Original Message-
From: Ben Kim [mailto:benkimkim...@gmail.com]
Sent: Thursday, June 14, 2012 6:00 PM
To: user@hbase.apache.org
Subject:
Hi Pradeep,
Many changes have happened from the version that you are specifying up to the recent version, so maybe you can try out the latest versions.
Regards
Ram
-Original Message-
From: Pradeep Gopaluni [mailto:pradeep.gopal...@gmail.com]
Sent: Tuesday, June 12, 2012 4:35 PM
To:
in all three Datanodes.
Regards
Ram
Ramkrishna.S.Vasudevan wrote:
Hi
Region Server is not DataNode.
DataNodes are part of HDFS. RegionServers are part of HBase. HBase
uses
HDFS to store data and in the process of storing data DataNodes are
used
by
HDFS. DataNodes
Do you have any logs corresponding to this?
Regards
Ram
-Original Message-
From: NNever [mailto:nnever...@gmail.com]
Sent: Wednesday, June 06, 2012 2:12 PM
To: user@hbase.apache.org
Subject: Region autoSplit when not reach 'hbase.hregion.max.filesize' ?
The
...@gmail.com
We currently run in INFO mode.
It actually did the split, but I cannot find any logs about this split.
I will change log4j to DEBUG; if I get any valuable logs, I will paste them here...
Thanks Ram,
NN
2012/6/6 Ramkrishna.S.Vasudevan
ramkrishna.vasude
Hi NN
HBASE-6132 has been raised for the same. There are a few issues when you use these filters with FilterList.
Regards
Ram
-Original Message-
From: NNever [mailto:nnever...@gmail.com]
Sent: Monday, June 04, 2012 12:29 PM
To: user@hbase.apache.org
Subject: Re:
Just to add on.
The javadoc clearly says in FamilyFilter:
* If an already known column family is looked for, use {@link org.apache.hadoop.hbase.client.Get#addFamily(byte[])}
* directly rather than a filter.
So addFamily should be better.
Regards
Ram
-Original Message-
From:
To answer this question:
Alternatively, is there a way to trigger an increment in another table (say count) whenever a row is added to user?
You can try to use coprocessors here. Once a put is done to the 'user' table, using the coprocessor hooks you can trigger an Increment() operation on
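The coprocessor idea above can be sketched in plain Java: a postPut-style hook on the 'user' table drives an increment on a 'count' table. Plain maps stand in for the HBase tables, and the hook and table names are illustrative only, not real API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the coprocessor pattern: every put to the 'user' table
// triggers an increment on the 'count' table, mimicking what a
// RegionObserver.postPut() hook would do. Illustrative names only.
public class CountHookSketch {
    private final Map<String, String> userTable = new HashMap<>();
    private final Map<String, Long> countTable = new HashMap<>();

    public void put(String rowkey, String value) {
        userTable.put(rowkey, value);
        postPut(); // the hook fires after the put completes
    }

    private void postPut() {
        countTable.merge("user-rows", 1L, Long::sum); // the Increment()
    }

    public long count() {
        return countTable.getOrDefault("user-rows", 0L);
    }
}
```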
Discussing with Anoop, I think PageFilter may also not work when used with FilterList. We need to check this.
Please file a JIRA for the same; we can look into all the possibilities.
Regards
Ram
-Original Message-
From: Anoop Sam John [mailto:anoo...@huawei.com]
Sent: Tuesday, May 29, 2012
Hi Shrijeet
Regarding your last question about the region growing bigger, the following points could be one reason.
When you said your compactions were slower and you were also trying to split some very big store files: every split would have created some set of reference files.
By that time, as more
files will
grow, I am not able to understand why size of individual store files
keep
growing.
Lastly what did you do with your 400 GB region? Any work around ?
-Shrijeet
On Sun, May 13, 2012 at 9:29 PM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Hi Shrijeet
Maybe after half an hour the timeout monitor will try to assign it. It is an internal thread that the system uses.
But I still suspect the ZooKeeper data. This problem mainly happens if the ZooKeeper node for META is still present.
Regards
Ram
-Original Message-
From: Srikanth P.
Hi Lee
Which version of HBase are you using?
Regards
Ram
-Original Message-
From: Eason Lee [mailto:softse@gmail.com]
Sent: Thursday, April 19, 2012 9:36 AM
To: user@hbase.apache.org
Subject: HBaseAdmin needs a close methord
Recently, my app has met a problem, listed as follows
Hi Lee
Is HBASE-5073 resolved in that release?
Regards
Ram
-Original Message-
From: Eason Lee [mailto:softse@gmail.com]
Sent: Thursday, April 19, 2012 10:40 AM
To: user@hbase.apache.org
Subject: Re: HBaseAdmin needs a close methord
I am using cloudera's cdh3u3
Hi Lee
Hi Juhani
Can you tell us more about how the regions are balanced?
Are you overloading only one specific region server?
Regards
Ram
-Original Message-
From: Juhani Connolly [mailto:juha...@gmail.com]
Sent: Monday, March 19, 2012 4:11 PM
To: user@hbase.apache.org
Subject: 0.92 and
for responses. These were all run with 400 threads (though we tried more/less just in case).
2012/03/19 20:57 Mingjian Deng koven2...@gmail.com:
@Juhani:
How many clients did you test with? Maybe the bottleneck was the client?
2012/3/19 Ramkrishna.S.Vasudevan ramkrishna.vasude...@huawei.com
Hi
HBASE-5225 was merged for a different purpose, but this issue does not seem to be because of it.
In HBASE-5225 we tried to fix the case where the seq id is missed. Here it is not missed; it is still there in lastSeqWritten without getting flushed.
Regards
Ram
-Original
Hi All
Yeeeah! Great job to everyone who contributed to the release!!!
An applause to the whole HBase community for bringing out this release...
Regards
Ram
-Original Message-
From: Gaojinchao [mailto:gaojinc...@huawei.com]
Sent: Tuesday, January 24, 2012 9:06
Hi Sriram
What is the problem that you are getting? Any exceptions in the logs?
Ideally the master will reallocate all the regions belonging to the region server that went down.
Regards
Ram
-Original Message-
From: V_sriram [mailto:vsrira...@gmail.com]
Sent: Friday, January 13, 2012