On 21/06/12 14:33, Michael Segel wrote:
> I think the version issue is the killer factor here.
> Usually performing a simple get() where you are getting the latest version of
> the data on the row/cell occurs in some constant time k. This is constant
> regardless of the size of the cluster and s
Hi,
And thanks for your answers.
Actually, I'm already keeping my major compactions under control with a
nightly cron job, merely executing this bash code:
echo "status 'detailed'" | hbase shell | grep "<>" | awk
-F, '{print $1}' | tr -d ' ' | sort | uniq -c | sort -nr | awk '{print
"major_compact
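For reference, here is a runnable sketch of that kind of pipeline. Since the original command is truncated, the exact grep pattern and the final major_compact line are my guesses; the `hbase_status` function below is a mock standing in for `echo "status 'detailed'" | hbase shell` on a real cluster, and the table and region names in it are invented for illustration:

```shell
#!/bin/sh
# Mock of the region lines that `status 'detailed'` prints in the hbase
# shell; on a real cluster you would pipe the shell output here instead.
hbase_status() {
cat <<'EOF'
    mytable,rowkey1,1296151113818.abcdef
    mytable,rowkey2,1296151113819.012345
    othertable,rowkey9,1296151113820.fedcba
EOF
}

# Take the table name (first comma-separated field), strip spaces, count
# regions per table, sort busiest-first, and emit one shell command per
# table that could be fed back into `hbase shell`.
hbase_status \
  | awk -F, '{print $1}' \
  | tr -d ' ' \
  | sort | uniq -c | sort -nr \
  | awk -v q="'" '{print "major_compact " q $2 q}'
```

With the mock data this prints `major_compact 'mytable'` followed by `major_compact 'othertable'`; piping those lines into `hbase shell` would trigger the compactions.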
Hi all !
I'm getting trouble with my HBase as the following error appears more
and more often (each 2 to 15 mins on each node):
2012-06-25 10:25:30,646 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(10.120.0.5:50010,
storageID=DS-1339564791-127.0.0.1-50010-129615
Hi Frédéric
hbase.store.delete.expired.storefile - Set this property to true.
This property lets HBase drop store files whose entries have all passed
their TTL outright, instead of rewriting them during compaction. If you
are interested you can check HBASE-5199.
It is available in 0.94 and above. Hope this helps.
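For reference, that setting goes in hbase-site.xml; a minimal fragment (property name as introduced by HBASE-5199):

```xml
<property>
  <name>hbase.store.delete.expired.storefile</name>
  <value>true</value>
</property>
```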
Regards
Ram
Hi
I wouldn't work on the HBase problems while HDFS isn't working properly;
keep an eye on the HDFS logs first.
org.apache.hadoop.hdfs.StateChange
BLOCK* BlockInfoUnderConstruction.initLeaseRecovery: No blocks found, lease
removed.
org.apache.hadoop.hdfs.StateChange
DIR* NameSystem.inter
Hello All,
With the help of the Hadoop community we've identified why the data node
died. However, I'm still trying to resolve my HBase issues.
Is it normal that when a data node goes down it brings down 3 region
servers with it? I was under the impression that the HBase region servers
had some kind
On Mon, Jun 25, 2012 at 12:52 PM, Peter Naudus wrote:
> Is it normal that when a data node goes down it brings down 3 region
> servers with it? I was under the impression that the HBase region servers
> had some kind of failover mechanism that would prevent this. Since there
> are multiple copies
We're running CDH3 (hbase: 0.90.6, hadoop: 0.20.2).
As far as I'm aware, no one tried to shut down the server. I read online
that the "user requested stop" error is sometimes logged on unknown
exceptions, not necessarily during an explicit shutdown
(apache-hbase.679495.n3.nabble.com/Di
On Mon, Jun 25, 2012 at 4:17 PM, Peter Naudus wrote:
> We're running CDH3 (hbase: 0.90.6, hadoop: 0.20.2).
>
> As far as I'm aware, no one tried to shut down the server. I read online
> that the "user requested stop" error is sometimes logged on unknown
> exceptions, not necessarily during an
On Mon, Jun 25, 2012 at 9:00 AM, Frédéric Fondement
wrote:
> 2012-06-25 10:25:30,646 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(10.120.0.5:50010,
> storageID=DS-1339564791-127.0.0.1-50010-1296151113818, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.net.So
On Sun, Jun 24, 2012 at 11:22 PM, Jean-Marc Spaggiari
wrote:
> Hi,
>
> In HBASE-1512 (https://issues.apache.org/jira/browse/HBASE-1512) there
> is the implementation of co-processor for count and others.
>
> Is there anywhere an example of the way to use them? Because the shell
> count is very slo
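On the slowness of the shell count: before reaching for a coprocessor, it may be worth raising the scanner caching that `count` uses. A sketch (table name and cache size are invented; this needs a running cluster, so it is shown as an hbase-shell fragment only):

```
hbase> count 'mytable', CACHE => 10000
```

Fetching 10000 rows per RPC instead of the default usually speeds the shell count up considerably, though a coprocessor- or MapReduce-based count will still be faster on very large tables.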
On Sat, Jun 23, 2012 at 7:54 PM, Jean-Marc Spaggiari
wrote:
> Hi,
>
> There is some spam comments in the book
> (http://hbase.apache.org/book/submitting.patches.html) and I'm
> wondering if there is a way to remove them. I can take care of that if
> anyone point me to the right direction.
>
Thank
On Mon, Jun 25, 2012 at 1:34 AM, Frédéric Fondement
wrote:
> My question was actually: given a table with millions, billions or whatever
> number of rows, how fast is the TTL handling process ? How are rows scanned
> during major compaction ? Are they all scanned in order to know whether they
> sh
Sure thing, thanks so much for looking into this.
I uploaded the files (pastebin didn't like the size of the files) here:
http://www.linuxlefty.com/hbase-hbase-regionserver-003.log
http://www.linuxlefty.com/hbase-hbase-regionserver-008.log
http://www.linuxlefty.com/hbase-hbase-regionserver-009.log
These are the log
Hi Elliott:
Great! I will look into it ~
Best Regards,
Jerry
On Thu, Jun 21, 2012 at 6:24 PM, Elliott Clark wrote:
> HFilePerformanceEvaluation is in the source tree hbase-server/src/test. I
> haven't played with it myself but it might help you.
>
> On Thu, Jun 21, 2012 at 3:13 PM, Jerry Lam
Prakrati,
I'm new to HBase myself, but I would interpret your results as follows:
1) Enabling caching only decreases retrieval time over time,
i.e. once you start retrieving the same rows over and over again. I'm
not sure from your results that your tests are actually trying to
retrieve cached resu