...time. Sometimes > 10 secs, rarely around 30 secs, but most of the
time < 10 secs. In the cases where the page loads slowly, there is a
fair amount of load on HBase.
I will get back on this issue when I have more information. Thanks so far.
Ferdy
Jean-Daniel Cryans wrote:
Prior to region server f...
...pretty accurate, and updates will not happen more frequently than
once every hour.
Ferdy
Jonathan Gray wrote:
Ferdy,
Another strategy might be to not issue the delete and just insert a new
version on top of the old one.
Whether this makes sense or not depends on whether the columns for that row c...
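A minimal sketch of that strategy, assuming the 0.20-era Java client; the table, row, and column names are hypothetical. A plain Put writes a new version, and reads return the newest one, so no Delete is needed:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class UpdateWithoutDelete {
      public static void main(String[] args) throws IOException {
        HTable table = new HTable(new HBaseConfiguration(), "mytable"); // hypothetical table
        Put put = new Put(Bytes.toBytes("row1"));                       // hypothetical row key
        // No Delete first: just write a new version on top of the old one.
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("new value"));
        table.put(put); // readers now see "new value"; the old version ages out
      }
    }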
- HBase processes use incremental garbage collect options
- no swapping occurs ever
- we can circumvent the problem by using very long timeouts
I have a strong feeling it's network-related, because our non-HBase
Hadoop jobs do generate a lot of DNS requests.
Ferdy
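For reference, a sketch of what those incremental GC options typically looked like in hbase-env.sh at the time; the exact flag set here is an assumption, not quoted from this thread:

    # Assumed example: CMS with incremental mode, as commonly recommended then
    export HBASE_OPTS="-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"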
Michael Segel wrote:
This looks...
...after, because the deleteTS of the correct client will be smaller
than the timestamp in the table.
Regards,
Ferdy
Erik Holstad wrote:
Hey Ferdy!
Not really sure what you are asking now. But if you do a deleteRow and then
a put in the same millisecond, the put will be "shadowed" by the delete...
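A minimal sketch of that collision, assuming the 0.20-era Java client and hypothetical names. The row delete and the put carry the same millisecond timestamp, so the put stays hidden behind the delete marker:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DeleteShadowsPut {
      public static void main(String[] args) throws IOException {
        HTable table = new HTable(new HBaseConfiguration(), "mytable"); // hypothetical
        byte[] row = Bytes.toBytes("row1");
        long ts = System.currentTimeMillis();
        table.delete(new Delete(row, ts, null)); // row delete stamped ts masks cells at ts or older
        Put put = new Put(row);
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), ts, Bytes.toBytes("value"));
        table.put(put); // same millisecond: this version is shadowed by the delete
      }
    }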
Hey Erik,
Thanks for replying.
Do you mean a delete and a put in the same milli? Otherwise I don't
think I fully understand what you're saying.
Ferdy.
Erik Holstad wrote:
Hey Ferdy!
There has been a lot of talk about this lately. HBase has a resolution of
milliseconds, so if you do...
...there is no ). The reason
why I'm asking is that we are probably experiencing missing-row issues.
If so, is there a better way to do an update of a row and discard old
column values?
Regards,
Ferdy
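One hedged way to get that effect, assuming table creation is under your control (this is an assumption, not something suggested in the thread): cap the column family at a single version, so each put replaces the visible value and older versions are discarded at compaction, with no explicit delete involved. Names are hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class SingleVersionTable {
      public static void main(String[] args) throws IOException {
        HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
        HTableDescriptor desc = new HTableDescriptor("mytable");  // hypothetical table
        HColumnDescriptor family = new HColumnDescriptor("cf");   // hypothetical family
        family.setMaxVersions(1); // keep only the newest version; older ones go at compaction
        desc.addFamily(family);
        admin.createTable(desc);
      }
    }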
...you see any output on the console, it means your hardware is
affected. If you see no output for several minutes (or perhaps one
hour), your machine is unlikely to be broken.
Hope this is of some help to you.
Ferdy
zward3x wrote:
Thanks for all the help.
Will install u17, hope that this will...
...ad.
dfs.socket.timeout
40...
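For reference, a sketch of how such a DFS timeout is usually raised in the Hadoop configuration (hdfs-site.xml, or the old hadoop-site.xml); the 400000 ms values are assumptions, since the actual values are truncated above:

    <property>
      <name>dfs.socket.timeout</name>
      <value>400000</value> <!-- assumed value, in milliseconds -->
    </property>
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>400000</value> <!-- assumed companion write-side timeout -->
    </property>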
Ferdy wrote:
Hi,
Increasing the memory did not work. Is it possible to increase the
timeout period? All our jobs are offline processing jobs, so I'd
rather have low responsiveness than a ZooKeeper that decides...
...when there is no
load at all on HBase; in other words, they seem to occur randomly.
Ferdy
Jean-Daniel Cryans wrote:
I see more than one problem here.
DFSOutputStream ResponseProcessor exception for block
blk_-209323108721490976_6065406 java.net.SocketTimeoutException: 69000
As you said...
...? Can the files be safely
removed?
HBase is version 0.20.2.
Regards,
Ferdy
...MapRed jobs, with average IO)
Anyone a clue what might be going on?
Regards,
Ferdy.
THE REGIONSERVER LOG:
2010-01-20 13:52:15,982 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction
started. Attempting to free 22928768 bytes
2010-01-20 13:52:15,988 ...