Finally, the mystery has been solved.
Small remark before I explain everything.
The situation with only one region is exactly the same:
Fzzy: 1Q7iQ9JA
Next fzzy: F7dtxwqVQ_Pw -- the value I'm trying to find.
Fzzy: F7dt8QWPSIDw
Somehow FuzzyRowFilter has simply omitted my value here.
So, the
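The matching the thread is puzzling over can be sketched in miniature. This is a hypothetical, simplified matcher (not HBase code) that follows the mask convention FuzzyRowFilter documents: a mask byte of 0 marks a position that must match the pattern byte exactly, and 1 marks a wildcard:

```python
# Minimal sketch of fuzzy row matching, following the mask convention
# of HBase's FuzzyRowFilter: mask byte 0 = position must equal the
# pattern byte, mask byte 1 = wildcard ("don't care").
def fuzzy_match(row: bytes, pattern: bytes, mask: bytes) -> bool:
    if len(row) < len(pattern):
        return False
    return all(m == 1 or r == p
               for r, p, m in zip(row, pattern, mask))

# Hypothetical example: any 4-byte prefix, fixed 5-byte suffix b'_2013'.
pattern = b'0000_2013'
mask = bytes([1, 1, 1, 1, 0, 0, 0, 0, 0])
print(fuzzy_match(b'ab12_2013', pattern, mask))  # True
print(fuzzy_match(b'ab12_2012', pattern, mask))  # False
```

Note that the real filter evaluates this predicate server-side, region by region, which is why a scan still visits regions sequentially rather than probing them all at once.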
Hi Tian,
What is the replication factor you mention in HDFS?
Regards,
Varun Kumar.P
On Mon, Jan 21, 2013 at 12:17 PM, tgh guanhua.t...@ia.ac.cn wrote:
Hi
I use HBase to store data, and I have an observation, that is,
when HBase stores 1 GB of data, HDFS uses 10 GB of disk space, and when
On Mon, Jan 21, 2013 at 1:46 PM, Eugeny Morozov
emoro...@griddynamics.comwrote:
I do not
understand why, with FuzzyFilter, it goes from one region to another until it
stops at the value. I suppose that if the scanning process had started at once on
all regions
The scanning process does not start in parallel
I suppose that if the scanning process had started at once on
all regions, then I would find in the log files at least one value per region,
but I have found one value per region only for those regions that reside
before the particular one.
@Eugeny - FuzzyRowFilter, like any other filter, works on the server
Anoop, Ramkrishna
Thank you for the explanation! I've got it.
On Mon, Jan 21, 2013 at 12:59 PM, Anoop Sam John anoo...@huawei.com wrote:
I suppose that if the scanning process had started at once on
all regions, then I would find in the log files at least one value per region,
but I have found one value
Thanks, but I don't know why I get bad results when client.buffer.size is
increased. Is it related to other parameters? And I give 8 GB of heap to
each region server.
On Mon, Jan 21, 2013 at 12:34 PM, Harsh J ha...@cloudera.com wrote:
Hi Farrokh,
This isn't an HDFS question - please ask these
This error is strange.
The sleep method has been there in Threads for a long time now. OK, it was
(int millis) before, and it's (long millis) now, but that should not make such
a difference.
tsuna, how is your setup configured? Do you run ZK locally? Or
standalone? What jars do you have for HBase and ZK?
It
Hi,
We have a setup of Hive on a Sandbox environment; the queries work fine there
and there are no errors. We have the same setup on Production, where we are getting
the following error from HBase.
These errors are showing up over and over again. Any idea why this error might be
occurring? Since on one
I have the issue below when I'm running hbck:
ERROR: Found lingering reference file
hdfs://node3:9000/hbase/entry_proposed/fbd1735591467005e53f48645278b006/recovered.edits/00091843039.temp
and I'm wondering what it means...
Thanks,
JM
On behalf of the Apache HBase PMC, I am excited to welcome Jimmy Xiang
and Nicholas Liochon as members of the Apache HBase PMC.
* Jimmy (jxiang) has been one of the drivers on the RPC protobuf'ing
efforts, several hbck repairs, and the current revamp of the
assignment manager.
* Nicolas
Congratz Jimmy and Nicholas... well deserved for both of you.
On Mon, Jan 21, 2013 at 3:56 PM, Jonathan Hsieh j...@cloudera.com wrote:
On behalf of the Apache HBase PMC, I am excited to welcome Jimmy Xiang
and Nicholas Liochon as members of the Apache HBase PMC.
* Jimmy (jxiang) has been
Awesome work!
On Mon, Jan 21, 2013 at 3:59 PM, Patrick Angeles
patrickange...@gmail.comwrote:
Congratz Jimmy and Nicholas... well deserved for both of you.
On Mon, Jan 21, 2013 at 3:56 PM, Jonathan Hsieh j...@cloudera.com wrote:
On behalf of the Apache HBase PMC, I am excited to welcome
Congrats fellas - great work!
- Jesse Yates
On Jan 21, 2013, at 12:56 PM, Jonathan Hsieh j...@cloudera.com wrote:
On behalf of the Apache HBase PMC, I am excited to welcome Jimmy Xiang
and Nicholas Liochon as members of the Apache HBase PMC.
* Jimmy (jxiang) has been one of the drivers on
Good on you lads!
St.Ack
On Mon, Jan 21, 2013 at 12:56 PM, Jonathan Hsieh j...@cloudera.com wrote:
On behalf of the Apache HBase PMC, I am excited to welcome Jimmy Xiang
and Nicholas Liochon as members of the Apache HBase PMC.
* Jimmy (jxiang) has been one of the drivers on the RPC
On Mon, Jan 21, 2013 at 12:01 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Found lingering reference file
The comment on the method that is finding the lingering reference files is
pretty good:
http://hbase.apache.org/xref/org/apache/hadoop/hbase/util/HBaseFsck.html#604
It looks
Hum. It's still a bit obscure to me how this happened to my cluster...
-repair helped to fix that, so I'm now fine. I will re-run the job I
ran and see if this is happening again.
Thanks,
JM
2013/1/21, Stack st...@duboce.net:
On Mon, Jan 21, 2013 at 12:01 PM, Jean-Marc Spaggiari
Did you get the name of the broken reference? I'd trace its life in the
namenode logs and in the regionserver log by searching for its name (you might have
to find the region in the master logs to see where the region landed over time).
The reference name includes the encoded region name as a suffix. This is
the
RECOVERED_EDITS is not a column family. It should be ignored by hbck.
Filed a jira:
https://issues.apache.org/jira/browse/HBASE-7640
Thanks,
Jimmy
On Mon, Jan 21, 2013 at 2:36 PM, Stack st...@duboce.net wrote:
Did you get the name of the broken reference? I'd trace its life in
namenode
On Mon, Jan 21, 2013 at 2:45 PM, Jimmy Xiang jxi...@cloudera.com wrote:
RECOVERED_EDITS is not a column family. It should be ignored by hbck.
Filed a jira:
https://issues.apache.org/jira/browse/HBASE-7640
Thanks Jimmy. That makes sense now
Ok, so basically, there were no issues with my tables? I did not use
any specific keywords for my CFs... They are all called @ or A ;)
2013/1/21, Stack st...@duboce.net:
On Mon, Jan 21, 2013 at 2:45 PM, Jimmy Xiang jxi...@cloudera.com wrote:
RECOVERED_EDITS is not a column family. It should be
Thank you for your reply
I set the factor = 1, that is, no replication; I use it for research.
And I have an observation, that is,
when you store a small amount of data into HBase, HBase will use a huge amount of
disk space, i.e., when HBase stores 3 million messages, which use 1 GB of disk as
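The blow-up tgh observes is consistent with HBase's per-cell storage overhead: each KeyValue stores the full row key, family, qualifier, timestamp, and type alongside every value, which dominates when values are small. A rough back-of-the-envelope sketch (all sizes here are illustrative assumptions, not numbers from the thread):

```python
# Rough estimate of HBase on-disk overhead for small values.
# All sizes below are illustrative assumptions, not measurements.
row_key, family, qualifier, value = 16, 1, 8, 100  # bytes (assumed)

# KeyValue fixed fields: key length (4) + value length (4)
# + row length (2) + family length (1) + timestamp (8) + type (1)
fixed = 4 + 4 + 2 + 1 + 8 + 1          # 20 bytes per cell
cell = fixed + row_key + family + qualifier + value

print(cell)                    # bytes written per 100-byte value -> 145
print(round(cell / value, 2))  # overhead factor per cell -> 1.45
```

On top of this per-cell factor come WAL copies, not-yet-compacted store files, and HFile index/bloom blocks; with the default HDFS replication factor of 3 (not the factor = 1 used here), everything is then tripled again.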
Thanks for the useful information. I wonder why you use only a 5G heap when
you have an 8G machine? Is there a reason not to use all of it (the
DataNode typically takes 1G of RAM)?
On Sun, Jan 20, 2013 at 11:49 AM, Jack Levin magn...@gmail.com wrote:
I forgot to mention that I also have this
On Mon, Jan 21, 2013 at 5:10 PM, Varun Sharma va...@pinterest.com wrote:
Thanks for the useful information. I wonder why you use only a 5G heap when
you have an 8G machine? Is there a reason not to use all of it (the
DataNode typically takes 1G of RAM)?
On Sun, Jan 20, 2013 at 11:49 AM,
BTW, here's a list with all current PMC members:
http://people.apache.org/committers-by-project.html#hbase-pmc
From: Jonathan Hsieh j...@cloudera.com
To: user@hbase.apache.org; d...@hbase.apache.org
Sent: Monday, January 21, 2013 12:56 PM
Subject: [ANNOUNCE]
That's right. It's a bug in hbck that it thinks recovered.edits is a CF.
St.Ack
On Mon, Jan 21, 2013 at 4:03 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Ok, so basically, there were no issues with my tables? I did not use
any specific keywords for my CFs... They are all called @ or A ;)