Hi,
I'm currently seeing an issue with the interaction between HBase and one of our
applications that seems to occur if a request is made against a region as it's
undergoing a major compaction.
The application gets a list of rowkeys from an index table, then for each block
of 1000 rowkeys gets
On Tuesday 23 September 2014 06:52:34 Ted Yu wrote:
Here're the config parameters related to controlling snapshot timeout:
<property>
  <name>hbase.snapshot.master.timeoutMillis</name>
  <!-- Change from default of 60s -->
</property>
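A hedged sketch of how this might look in hbase-site.xml: the 300000 ms value is purely illustrative (the original value was truncated in the archive), and the second, region-server-side property name is an assumption based on the snapshot parameters of this HBase era, not something stated in the thread.

```xml
<!-- Illustrative values only; tune for your snapshot sizes. -->
<property>
  <name>hbase.snapshot.master.timeoutMillis</name>
  <value>300000</value>
</property>
<property>
  <!-- Assumed companion region-server-side snapshot timeout. -->
  <name>hbase.snapshot.region.timeout</name>
  <value>300000</value>
</property>
```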
at
org.apache.hadoop.hbase.errorhandling.TimeoutExceptionInjector$1.run(TimeoutExceptionInjector.java:70)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Hi,
I have a java client that connects to hbase and reads and writes data to
hbase. Every now and then, I'm seeing the following stack traces in the
application log and I'm not sure why they are coming up.
org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected
exception,
Can you check the version of netty that is in the classpath of your client?
I wonder if it uses a protobuf version other than 2.5.0, which is the one used
by hbase.
Cheers
On Wed, Oct 1, 2014 at 4:37 AM, Ian Brooks i.bro...@sensewhere.com wrote:
Hi,
I have a java client that connects to hbase
that is affecting this.
-Ian
On Wednesday 01 October 2014 09:12:05 Andrew Purtell wrote:
Thanks for reporting this. Please see
https://issues.apache.org/jira/browse/HBASE-12141. Hope I've
understood the issue correctly. We will look into it.
On Wed, Oct 1, 2014 at 4:37 AM, Ian Brooks i.bro
submit before seeing the following?
Cheers
On Sep 23, 2014, at 2:28 AM, Ian Brooks i.bro...@sensewhere.com wrote:
Hi,
I'm seeing an issue on our hbase cluster which is preventing snapshots from
working. So far the only way I can get it working again is to restart all
);
for (Result r : results) {
    for (KeyValue kv : r.raw()) {
        // The separator literal was lost in the archive; a single space is assumed here.
        System.out.print(new String(kv.getRow()) + " ");
    }
}
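One side note on the loop above: new String(kv.getRow()) decodes the row key bytes with the platform default charset. A small standalone sketch of the safer explicit-charset form; the byte array here is just a stand-in for what an HBase cell would return:

```java
import java.nio.charset.StandardCharsets;

public class RowKeyDecode {
    public static void main(String[] args) {
        // Stand-in for the byte[] row key an HBase KeyValue would return.
        byte[] rowKey = "user-0001".getBytes(StandardCharsets.UTF_8);
        // Decode with an explicit charset rather than the platform default,
        // so results don't vary with the JVM's locale settings.
        String decoded = new String(rowKey, StandardCharsets.UTF_8);
        System.out.println(decoded);
    }
}
```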
-Ian Brooks
On Thursday 10 Jul 2014 16:38:04 Madabhattula Rajesh Kumar wrote:
Hi Team,
Could you please help me to resolve the below issue.
In my hbase table, I have 30 records. I need
to that. It happens more in the post-processing of the result set.
That's my understanding of how it should be used; others may have different
feedback on this.
-Ian Brooks
On Thursday 10 Jul 2014 17:08:58 Madabhattula Rajesh Kumar wrote:
Hi Ian,
Thank you very much for the solution. Could you please
may want to add the
slave's 192.168.66.61 address to /etc/hosts
-Ian Brooks
On Wednesday 09 Jul 2014 15:29:44 Cosmin Cătălin Sanda wrote:
The port should not be needed if the default settings have not been
modified. I also don't see how that could be the problem since the
default hbase created
the
problems and get the cluster running again.
-Ian Brooks
On Tuesday 17 Jun 2014 14:11:15 Samir Ahmic wrote:
Here is the explanation for WALTrailer from the source code:
* A trailer that is appended to the end of a properly closed HLog WAL file.
* If missing, this is either a legacy or a corrupted
:02,909 WARN [regionserver16020.logRoller] wal.FSHLog: Riding
over HLog close failure! error count=1
If the regions are marked as online but the shell won't let you do anything,
what is the best/correct way to get them back online again?
-Ian Brooks
to region=temp hfile=2867765
The datanode logs on all servers were clean at the time of the crash and after.
hadoop version 2.4
hbase version 0.98.3
-Ian Brooks
On Tuesday 17 Jun 2014 13:43:45 Samir Ahmic wrote:
Hi Ian,
What does "hadoop fsck /" say? Maybe you have some corrupted data
, though it is 3 seconds later and all
hosts are time synchronised. There is nothing in the logs on the hosts to
suggest the clock was adjusted by any amount.
-Ian Brooks
On Wed, Jun 4, 2014 at 7:08 AM, Ian Brooks i.bro...@sensewhere.com wrote:
Hi
Well, I performed the procedure on another 4
)
at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
at java.lang.Thread.run(Thread.java:744)
-Ian Brooks
On Tuesday 03 Jun 2014 15:59:11 Stack wrote:
On Tue, Jun 3, 2014 at 9:18 AM, Ian Brooks i.bro
[regionserver16020] ipc.RpcServer: Stopping
server on 16020
2014-06-03 13:05:48,624 INFO [RpcServer.handler=1,port=16020] ipc.RpcServer:
RpcServer.handler=1,port=16020: exiting
--
-Ian Brooks
Hi,
For my testing I'm only taking one server out (simulating the process for
patching etc.). The hadoop datanode process was left running at this point.
-Ian Brooks
On Tuesday 03 Jun 2014 06:06:33 Ted Yu wrote:
Please see http://hbase.apache.org/book/node.management.html
Especially 15.3.1.1
Hi,
Well, checking the hadoop logs shows the datanode restarting at that time. Looks
like a rogue puppet config decided to restart the datanode.
That said, should the regionserver not account for this and request the data
from another datanode?
-Ian Brooks
On Tuesday 03 Jun 2014 08:35:05
logs or the hbase
logs show any errors.
Any idea how best to track down how the rows are going missing?
--
-Ian Brooks
occurring?
Thanks
On Wed, Mar 26, 2014 at 6:52 AM, Ian Brooks i.bro...@sensewhere.com wrote:
Hi,
I have a setup where data is fed into hbase using flume. When performing
inserts in blocks of 1 million, I have noticed that consistently
fewer than 1 million rows are actually inserted
no difference either.
-Ian
On Wednesday 26 Mar 2014 08:16:05 Stack wrote:
On Wed, Mar 26, 2014 at 8:06 AM, Ian Brooks i.bro...@sensewhere.com wrote:
Hi,
I'm using hbase version 0.96.1.1-hadoop2 and there are 16 regions across 8
servers.
I get similar results at lower numbers as well; a run of 1000 rows into
flume results in 997 entries in hbase.
Anything
-streaming seems to be ok
with, but when it gets to the mapreduce.Job part of processing it still just
returns the whole table rather than the rows within the timeframe I am
specifying.
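A common pitfall with timeframe scans is the units: the HBase scan time range is expressed as epoch milliseconds, with the minimum inclusive and the maximum exclusive. A small standalone sketch, assuming java.time and a UTC window (the dates are hypothetical), of converting a wall-clock window to that pair:

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class TimeWindow {
    // Convert a wall-clock window (interpreted as UTC) into the [min, max)
    // epoch-millis pair a timestamp-range scan expects.
    static long[] toEpochMillis(LocalDateTime from, LocalDateTime to) {
        return new long[] {
            from.toInstant(ZoneOffset.UTC).toEpochMilli(),
            to.toInstant(ZoneOffset.UTC).toEpochMilli()
        };
    }

    public static void main(String[] args) {
        long[] range = toEpochMillis(
            LocalDateTime.of(2014, 3, 26, 0, 0),
            LocalDateTime.of(2014, 3, 27, 0, 0));
        // A one-day window spans 86,400,000 ms.
        System.out.println(range[0] + " .. " + range[1]);
    }
}
```

If the window is given in local time rather than UTC, a mismatched zone offset silently shifts the scan boundary by hours, which can look like the range being ignored.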
Is there a known way that I should be able to do this?
--
-Ian Brooks
Senior server administrator - Sensewhere