Hi Desert,
I tried your code on 0.94.0 and it works fine. Is there any detail you didn't
list above, or perhaps some JIRA that fixed this bug for 0.92.2?
Can someone run this test on 0.92.2 and share the result?
Regards,
NN
2012/6/11 Desert R. desert_rose_...@hotmail.com
Sorry about my
https://issues.apache.org/jira/browse/HBASE-4951
Tks Ram and St.Ack.
Regards,
NN
2012/6/7 Stack st...@duboce.net
On Wed, Jun 6, 2012 at 7:55 PM, NNever nnever...@gmail.com wrote:
On 0.94.0, In class RegionSplitPolicy, I saw you
use IncreasingToUpperBoundRegionSplitPolicy
I'm sorry, it's https://issues.apache.org/jira/browse/HBASE-6185
Yours,
NN
2012/6/7 Stack st...@duboce.net
On Wed, Jun 6, 2012 at 7:55 PM, NNever nnever...@gmail.com wrote:
On 0.94.0, In class RegionSplitPolicy, I saw you
use IncreasingToUpperBoundRegionSplitPolicy
it seems to have no function there.
Someone who has used RowFilter on a Get before may be able to help you with
this question.
Yours,
NN
2012/6/6 Em mailformailingli...@yahoo.de
Hi NN,
answers are inline.
Am 06.06.2012 03:37, schrieb NNever:
Am I able to do this with one scan?
No, I think (Unless you
*1. Coprocessor on all tables:*
To load a coprocessor on all tables, configure hbase-site.xml:
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>cp classPath</value>
</property>
and configure hbase-env.sh:
# Extra Java CLASSPATH elements. Optional.
export
If I create 2 puts for the exact same address (r/f/q/t), the last one
wins?
The later one wins. A simple test tells everything.
Can I get different results from a scan than from a get? (What if I specify
max versions = 1?)
A Get is ultimately converted into a Scan internally, too.
I am currently using HBase 0.92.1.
'hbase.hregion.max.filesize' is set to 100G (the recommended value to
effectively turn auto-split off). And there is a table we keep putting data
into. When the storefileUncompressedSizeMB reached about 1 GB, the region
auto-split into 2.
I don't know how it happened; 1 GB is far less than
to this?
Regards
Ram
-Original Message-
From: NNever [mailto:nnever...@gmail.com]
Sent: Wednesday, June 06, 2012 2:12 PM
To: user@hbase.apache.org
Subject: Region autoSplit when not reach 'hbase.hregion.max.filesize' ?
The 'hbase.hregion.max.filesize' are set to 100G
I'm sorry, the log4j level is actually WARN, not INFO.
2012/6/6 NNever nnever...@gmail.com
We currently run in INFO mode.
It actually did the split, but I cannot find any logs about this split.
I will change the log4j level to DEBUG; if I get any valuable logs, I will
paste them here...
Thanks Ram,
NN
2012/6/6
I will. I've changed the log level.
Putting data in and waiting for the strange split now :).
Yours,
NN
2012/6/6 dong.yajun dongt...@gmail.com
Hi NNever
If you find any issues, please let us know, thanks.
On Wed, Jun 6, 2012 at 5:09 PM, NNever nnever...@gmail.com wrote:
I'm sorry
of 'org.apache.hadoop.hbase.NotServingRegionException' when splitting. There
is no log about the split starting or why the split happened.
The logs are too large to upload anywhere.
I'll dig into it. It really confuses me...
Thanks, yours
NN
2012/6/6 NNever nnever...@gmail.com
I will. I've changed the log level.
Putting data
I'm sorry, I made a mistake. My protobuf version is 2.4.0.a.jar. I got it
from HBase 0.94's lib.
2012/6/6 Amit Sela am...@infolinks.com
You mean protobuf-java-2.4.04.jar, or is there a new version like you
wrote, protobuf-java-2.4.9.a.jar?
On Mon, Jun 4, 2012 at 6:09 AM, NNever nnever
... Also, can you recheck your
math?
Sent from my iPhone
On Jun 6, 2012, at 6:17 PM, NNever nnever...@gmail.com wrote:
It happened again. I truncated the table and put about 10 million rows into it
last night.
The table auto-split into 4 regions, each with about 3 GB
storefileUncompressedSize.
I
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>4000</value>
</property>
<property>
  <name>hbase.client.write.buffer</name>
  <value>1048576</value>
</property>
<property>
  <name>hbase.client.scanner.caching</name>
  <value>10</value>
</property>
</configuration>
2012/6/7 NNever nnever...@gmail.com
The rowkey is just like a UUID
;
FileStructIndex,,1339032525500.7b229abcd0785408251a579e9bdf49c8. is closing
2012-06-07 10:30:52,411 DEBUG
org.apache.hadoop.hbase.regionserver.HRegionServer:
NotServingRegionException;
FileStructIndex,,1339032525500.7b229abcd0785408251a579e9bdf49c8. is closing
Best regards,
NN
2012/6/7 NNever nnever
So IncreasingToUpperBoundRegionSplitPolicy will split when the size reaches
(region count squared) * flushSize, until that reaches maxFileSize.
We didn't configure a splitPolicy; will HBase 0.94 use
IncreasingToUpperBoundRegionSplitPolicy as the default?
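A minimal sketch of that threshold arithmetic, assuming the 0.94-era formula
min(maxFileSize, flushSize * R^2), where R is the number of regions of the
table on the same region server (the 128 MB / 100 GB numbers below are only
illustrative defaults, not taken from this thread's cluster):

```java
// Sketch of the IncreasingToUpperBoundRegionSplitPolicy threshold (0.94-era):
// split when a store grows past min(maxFileSize, flushSize * R^2).
public class SplitThreshold {
    static long sizeToCheck(int regionCount, long flushSize, long maxFileSize) {
        if (regionCount == 0) return maxFileSize;
        return Math.min(maxFileSize, flushSize * regionCount * regionCount);
    }

    public static void main(String[] args) {
        long flush = 128L << 20;   // 128 MB memstore flush size (illustrative)
        long max = 100L << 30;     // 100 GB max file size, as in this thread
        // With 1 region the first split fires at ~128 MB; with 3 regions at
        // 9 * 128 MB ~= 1.1 GB, which matches the ~1 GB splits observed above.
        System.out.println(sizeToCheck(1, flush, max)); // 134217728
        System.out.println(sizeToCheck(3, flush, max)); // 1207959552
    }
}
```

This is why a 100G max file size alone does not stop early splits under this
policy: the effective threshold starts near the flush size and only grows as
the region count does.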
2012/6/7 NNever nnever...@gmail.com
Finally I change
of the returned rowkeys to get
the Top N of these.
And then you get N records from t1 again.
In the end, that's what I thought of, though I am not sure that this is
the most efficient way.
Kind regards,
Em
Am 05.06.2012 04:33, schrieb NNever:
Is the schema like this:
T2 {
  rowkey: rs-time
row's blogposts-CF and trigger a
million writes in the index-table (which only writes keys and empty
values of 0-byte length - I assume that's the cheapest write I can do).
Kind regards,
Em
Am 05.06.2012 08:07, schrieb NNever:
1. Endpoint is a kind of Coprocessor; it was added
.
Did you mean that or something different?
Kind regards,
Em
Am 05.06.2012 11:18, schrieb NNever:
Very clear now :).
Only one problem,
blog {//this is t1 of my example
blogposts {//the column family
05.05.2012_something { the blog post },//this is a column
I tested it on 0.94.
In my test code, there are 4 columns in a row.
If I set ColumnCountGetFilter(100), the result is 4; or if I set
ColumnCountGetFilter(2), the result is still 4.
The 'limit' doesn't seem to work well...
Yours,
nn
I'm sorry, I made some mistakes with it...
2012/6/4 NNever nnever...@gmail.com
I tested it on 0.94.
In my test code, there are 4 columns in a row.
If I set ColumnCountGetFilter(100), the result is 4; or if I set
ColumnCountGetFilter(2), the result is still 4.
The 'limit' doesn't seem to work well
'- I'd like to do the top N stuff on the server side to reduce traffic,
will this be possible? '
Endpoint?
2012/6/5 Em mailformailingli...@yahoo.de
Hello list,
let's say I have to fetch a lot of rows for a page request (say
1,000-2,000).
The row-keys are a composition of a fixed id of an
Is the schema like this:
T2 {
  rowkey: rs-time
  row {
    family:qualifier = t1's row
  }
}
Then you scan the newest 1000 from T2, and for each get its t1 row, i.e. do
1000 Gets from T1 for one page?
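The scan-then-get index pattern above can be modeled in plain Java (a TreeMap
stands in for the sorted index table T2; the key layout and all names here are
illustrative assumptions, not HBase API):

```java
import java.util.*;

// Model of the index-table pattern: T2's key is "<fixedId>-<reversedTs>" so
// a forward scan returns the newest entries first; each T2 value is the T1
// rowkey to Get afterwards.
public class IndexScan {
    static String indexKey(String fixedId, long ts) {
        // Long.MAX_VALUE - ts makes newer timestamps sort first lexicographically
        return String.format("%s-%019d", fixedId, Long.MAX_VALUE - ts);
    }

    static List<String> newestN(NavigableMap<String, String> t2, String fixedId, int n) {
        List<String> t1Keys = new ArrayList<>();
        for (Map.Entry<String, String> e : t2.tailMap(fixedId + "-", true).entrySet()) {
            if (!e.getKey().startsWith(fixedId + "-") || t1Keys.size() >= n) break;
            t1Keys.add(e.getValue()); // the T1 rowkey stored in the index
        }
        return t1Keys;
    }
}
```

In HBase terms, newestN corresponds to a Scan over T2 with a start row of the
fixed-id prefix and a limit of n, followed by n Gets against T1.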
2012/6/5 NNever nnever...@gmail.com
'- I'd like to do the top N stuff on the server side
Hi Amit, I met this error on the client side when I upgraded 0.92.1 to
0.94.
I just put protobuf-java-2.4.9.a.jar onto the classpath and the problem was
solved.
If you're sure you have protobuf on your CLASSPATH when the job runs, have
you tried just restarting M/R or even Hadoop?
I met some strange
Thanks J-D, good example.
Now it's all clear to me. It helps a lot :)
NN
2012/5/24 Jean-Daniel Cryans jdcry...@apache.org
On Wed, May 23, 2012 at 8:11 PM, NNever nnever...@gmail.com wrote:
Thanks J-D.
so it means 'Append' takes the write lock only, and 'Put' takes both the
write lock and the read lock
Thanks Harsh, I'll try it ;)
---
Best regards,
nn
2012/5/24 Harsh J ha...@cloudera.com
NNever,
You can use asynchbase (an asynchronous API for HBase) for that need:
https://github.com/stumbleupon/asynchbase
On Thu, May 24, 2012 at 7:25 AM, NNever nnever
+write in order to add
something to a value. With Append, the read is done in the region
server before the write; it also solves the race you could otherwise have
when there are multiple appenders.
J-D
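The difference J-D describes can be modeled in plain Java (this is a toy map
of cells, not the HBase client API; all names are illustrative): a Put blindly
overwrites a cell, while an Append is an atomic server-side read-modify-write,
so concurrent appenders cannot race.

```java
import java.util.concurrent.ConcurrentHashMap;

// Toy model: cells maps "row/family/qualifier" to a value.
public class AppendVsPut {
    final ConcurrentHashMap<String, String> cells = new ConcurrentHashMap<>();

    void put(String cell, String value) {
        cells.put(cell, value);              // blind overwrite: last write wins
    }

    void append(String cell, String suffix) {
        // merge() is atomic per key, like Append's row lock on the server:
        // the read of the old value and the write of old+suffix cannot interleave
        cells.merge(cell, suffix, (old, s) -> old + s);
    }
}
```

Doing the same read-then-put from the client instead would leave a window
between the Get and the Put where another appender's write can be lost.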
On Tue, May 22, 2012 at 8:51 PM, NNever nnever...@gmail.com wrote:
Simple question
Simple question, what's the difference between Append and Put?
It seems they both can put some data into a row.
Does Append keep several write operations atomic while Put does not?
If so, is Append going to take the place of Put? Might Append be slower
than Put?
Thanks~
Simple question, what's the difference between Append and Put?
It seems they both can put some data into a row.
Does Append keep several write operations atomic while Put does not?
If so, is Append going to take the place of Put? Might Append be slower
than Put?
Thanks~
1. A per-row lock is held during the update, so other clients will block
while one client performs an update (see HRegion.put's annotation); no
exception.
On the client side, while one process is updating, it may not have reached
the buffer size yet, so another process may read the original value, I think.
2.
HBase to delete a whole row, no columns are
specified.
J-D
On Thu, Apr 5, 2012 at 6:32 PM, NNever nnever...@gmail.com wrote:
Dear all,
when a preDelete is triggered in a Coprocessor, is there any way to
determine
whether this Delete is going to remove a whole row (not just a family or qualifier
Tks, I'll try
2012/3/30 shixing paradise...@gmail.com
You must write your own Ruby script to support commands like max and min
after deploying your endpoint.
On Fri, Mar 30, 2012 at 5:29 PM, NNever nnever...@gmail.com wrote:
Can I call an endpoint from the HBase shell? And how~
Thanks
Has anyone got time to run a test for this?
It really confuses me
2012/2/22 NNever nnever...@gmail.com
Attached is my test custom-filter code --- TestFilter.
It just extends FilterBase and does some System.out printing...
You can try it on any table that has more than one column family, like below:
*Scan
attachment.
Thanks
On Tue, Feb 21, 2012 at 5:47 PM, NNever nnever...@gmail.com wrote:
Attached is my test custom-filter code --- TestFilter.
It just extends FilterBase and does some System.out printing...
You can try it on any table that has more than one column family, like below:
*Scan scan = new Scan
Hi~
One custom filter, overriding filterKeyValue(KeyValue v):
when the filter evaluates a row's first KeyValue, it returns
ReturnCode.NEXT_ROW to jump to the next row.
But what in fact happens is that the result changes when there is more than
one column family: (here are some logs)
[filterRowKey]
you show us your filterRow() code?
Thanks
On Feb 21, 2012, at 7:28 AM, NNever nnever...@gmail.com wrote:
Hi~
One custom filter, overriding filterKeyValue(KeyValue v):
when the filter evaluates a row's first KeyValue, it returns
ReturnCode.NEXT_ROW to jump to the next row
Hi~
On HBase 0.92.
I wrote a coprocessor on table 'File'. In the prePut method, I put several
new rows via the File HTable instance. As you know, this op will trigger
prePut again.
I use this logic to implement something like copying a File and automatically
copying its subfiles at all levels.
The code seems correct,
details in the hbase book (and perhaps somewhere online, too).
-d
On Mon, Feb 13, 2012 at 5:27 PM, NNever nnever...@gmail.com wrote:
Hello~
In HBase-0.92.0rc4.
If I need to skip some number of rows in the scan result, how can I
define a custom filter to do it?
Here is my solution
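As a rough plain-Java model of the row-skipping idea (not the actual HBase
Filter API; a real filter would return NEXT_ROW from filterKeyValue for the
first N distinct row keys), one could track distinct rows seen and drop cells
until N rows have passed:

```java
import java.util.*;

// Model: kvs is a row-sorted stream of (rowKey, value) pairs, as a filter
// would see them; drop everything belonging to the first n distinct rows.
public class SkipRows {
    static List<String[]> skip(List<String[]> kvs, int n) {
        List<String[]> out = new ArrayList<>();
        String lastRow = null;
        int rowsSeen = 0;
        for (String[] kv : kvs) {           // kv[0] = rowKey, kv[1] = value
            if (!kv[0].equals(lastRow)) {   // entering a new row
                lastRow = kv[0];
                rowsSeen++;
            }
            if (rowsSeen > n) out.add(kv);  // keep KVs only after n rows passed
        }
        return out;
    }
}
```

Note that a server-side filter can only skip rows per region, so with multiple
regions the skipped count is per-region, not global.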
Hi James, I'm new to HBase too.
How about this:
With a range of orderIds, select the first id.
Step 1: set this ID as the startRow, then fetch the closest id (fetch only
one).
Step 2: then with this fetched ID, setStartRow(fetchedID-startTimestamp),
setEndRow(fetchedID-endTimestamp).
Step 3:
, 2012 at 9:15 AM, NNever nnever...@gmail.com wrote:
As we know, in HBase coprocessor methods such as prePut, we can operate on an
HTable obtained from the ObserverContext<RegionCoprocessorEnvironment>...
But in many situations there will be some tables with a qualifier recording
a file URI. Then when we delete one
It works. Thanks Stack and Sanel~
2012/2/15 Stack st...@duboce.net
On Tue, Feb 14, 2012 at 6:35 AM, NNever nnever...@gmail.com wrote:
Thanks Sanel.
I tried to use
FileSystem fs = FileSystem.get(HBaseConfiguration.create());
fs.delete(new Path(...))
in the coprocessor's preDelete