Thanks J-D. I filed https://issues.apache.org/jira/browse/HBASE-3014 to
suggest changing the log level to WARN.
On Fri, Sep 17, 2010 at 10:32 AM, Jean-Daniel Cryans wrote:
> I agree it needs some clarification, since that stuff evolved in
> disparate ways. Historically UnknownScannerException ha
Hi folks,
Got a problem in basic Hadoop-HBase communication. My small test program
ProteinCounter1.java - shown in full below - reports this error:
java.lang.RuntimeException: java.lang.ClassNotFoundException:
org.apache.hadoop.hbase.mapreduce.TableOutputFormat
at org.apache.hado
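A common cause of this ClassNotFoundException is that the HBase jar isn't
on the tasks' classpath (e.g. missing from HADOOP_CLASSPATH or from
$HADOOP_HOME/lib on the task trackers). As a minimal sketch, assuming the
0.20-era org.apache.hadoop.hbase.mapreduce API (the table name "proteins"
is illustrative), letting TableMapReduceUtil wire up TableOutputFormat
looks like this:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class JobSetupSketch {
    public static void main(String[] args) throws Exception {
        HBaseConfiguration conf = new HBaseConfiguration();
        Job job = new Job(conf, "protein-counter");
        job.setJarByClass(JobSetupSketch.class);
        // Wires TableOutputFormat into the job; the reducer is a stand-in.
        TableMapReduceUtil.initTableReducerJob("proteins",
            IdentityTableReducer.class, job);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Even with this setup the job still fails at runtime if the HBase jar is
only on the client machine, so checking the distributed classpath first
is usually the quickest fix.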
Hi,
I was trying to find out if HBase can be used in a real-time processing
scenario. In order to
do so, I set IN_MEMORY for a table to true, and set the TTL for the
table to 10 minutes.
The data comes in chronological order. I let the test run for 1 day.
The idea is that we are o
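For reference, a minimal sketch of such a table setup, assuming the
0.20-era client API (the table name "events" and family name "d" are
illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class InMemoryTtlSketch {
    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
        HTableDescriptor desc = new HTableDescriptor("events");
        HColumnDescriptor family = new HColumnDescriptor("d");
        family.setInMemory(true);      // favor keeping this family's blocks cached
        family.setTimeToLive(10 * 60); // TTL is given in seconds: 10 minutes
        desc.addFamily(family);
        admin.createTable(desc);
    }
}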
Thanks for the comment.
2010/9/18 Jean-Daniel Cryans :
> That exception is "normal", it just means a region is either moving or
> being split, unless your client has to retry 10 times and then gets a
> RetriesExhaustedException. This has been discussed a few times on this
> mailing list.
I have custo
That exception is "normal", it just means a region is either moving or
being split, unless your client has to retry 10 times and then gets a
RetriesExhaustedException. This has been discussed a few times on this
mailing list.
Also you say you got HBASE-2516, but if you really got it then you'd
see
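The retry budget mentioned above is a client-side setting; a minimal
sketch of raising it (the key names are the standard client settings,
the values and table name are illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class RetrySettingsSketch {
    public static void main(String[] args) throws Exception {
        HBaseConfiguration conf = new HBaseConfiguration();
        conf.setInt("hbase.client.retries.number", 20); // default is 10
        conf.setLong("hbase.client.pause", 1000);       // ms between retries
        HTable table = new HTable(conf, "mytable");     // picks up the settings above
    }
}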
I need to do a massive data rewrite in some family on a standalone server. I
get org.apache.hadoop.hbase.NotServingRegionException
or java.io.IOException: Region xxx closed if I write and read at the same time.
What shall I do in version 0.20.6? One thing that I can try - write to
another family and then
I agree it needs some clarification, since that stuff evolved in
disparate ways. Historically UnknownScannerException has been fatal
and wasn't recovered from. Right now, the client will recover only if
the timeout hasn't expired (so you get this only when the region moves
or it took more than 60 s
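The 60 s mentioned here is the region server's scanner lease. A minimal
sketch naming that setting (it lives in hbase-site.xml on the region
servers; the value below is illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;

public class LeasePeriodSketch {
    public static void main(String[] args) {
        HBaseConfiguration conf = new HBaseConfiguration();
        // Scanner lease in ms; the default is 60000. Raising it tolerates
        // slower next() loops at the cost of holding dead scanners longer.
        conf.setLong("hbase.regionserver.lease.period", 120000);
    }
}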
J-D:
public class UnknownScannerException extends DoNotRetryIOException {
When e (IOException) below was an UnknownScannerException, the code would
try to restart.
I have two questions:
1. what contract should the recipient of DoNotRetryIOException follow? On the
surface the way TableRecordReaderImp
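My reading of the contract, as a sketch: DoNotRetryIOException (and
subclasses such as UnknownScannerException) means "do not resubmit the
same call with the same scanner id", but a reader may still recover by
opening a fresh scanner at the last row it returned. The class below is
illustrative, not the actual TableRecordReaderImpl source:

import java.io.IOException;
import org.apache.hadoop.hbase.UnknownScannerException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class RestartingReaderSketch {
    private final HTable table;
    private ResultScanner scanner;
    private byte[] lastSeenRow; // last row successfully handed to the caller

    RestartingReaderSketch(HTable table, byte[] startRow) throws IOException {
        this.table = table;
        this.scanner = table.getScanner(new Scan(startRow));
    }

    Result nextWithRestart() throws IOException {
        try {
            Result r = scanner.next();
            if (r != null) lastSeenRow = r.getRow();
            return r;
        } catch (UnknownScannerException e) {
            // The lease expired: the old scanner id is gone for good, so
            // reopen from the last returned row and skip that row once
            // (simplified: assumes at least one row was already returned).
            scanner = table.getScanner(new Scan(lastSeenRow));
            scanner.next(); // skip the row we already returned
            Result r = scanner.next();
            if (r != null) lastSeenRow = r.getRow();
            return r;
        }
    }
}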
I continued the test yesterday, letting the data in the table with the 10-minute
TTL sit there,
and it has now passed 24 hours. I checked the table; the data is still there. I
checked the log;
major compaction didn't happen for this table in the last 24 hours.
I realized that I have other tables
On Fri, Sep 17, 2010 at 9:14 AM, Scott Whitecross wrote:
> Hi all -
>
> A couple of nights ago I enabled cron jobs to run major compactions against
> a few of the tables that I use in HBase. This has caused multiple worker
> machines on the cluster to fail. Based on the compaction or losing the
Well I don't see that "recovered from" message in my logs, but I may not
have enabled DEBUG correctly. Still trying to mess with that...
I'm still a little confused. Would the UnknownScannerException show up
under normal conditions (doing next() in under 60 seconds)? I'm hoping that
everything t
Sounds like there's an underlying HDFS issue; you should check those
machines' datanode logs at the time of the failure for any exceptions.
J-D
On Fri, Sep 17, 2010 at 9:14 AM, Scott Whitecross wrote:
> Hi all -
>
> A couple of nights ago I enabled cron jobs to run major compactions against
> a f
Hi all -
A couple of nights ago I enabled cron jobs to run major compactions against
a few of the tables that I use in HBase. This has caused multiple worker
machines on the cluster to fail. Based on the compaction or losing the
worker nodes, many of the regions are stuck in transition with a st
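If the goal is periodic major compactions without shelling out from cron,
the admin API can queue them directly; a minimal sketch assuming the
0.20-era client (the table name is illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class MajorCompactSketch {
    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
        // Asynchronous: this only queues the request on the region servers.
        admin.majorCompact("mytable");
    }
}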
It looks like your region took a long time to move (the name of the
region being dn,,1284706796459), you should grep the master log to see
why it took so long, and correlate with the region server log at
116.211.20.208.
Also please give the usual info about your environment (versions,
hardware,
> Unfortunately it confirmed my suspicion that the current TTL is
> implemented purely based on active compaction. And for a log
> table/history data table, the current implementation is not
> sufficient.
You continue to make that statement, but it is not accurate.
HBase respects TTL when return
Hi all,
I am looking for suggestions for using HBase for real-time monitoring.
We have heavy traffic (around 6 million logs per hour). I looked at
the benchmarks of HBase, and it seems we are near the edge of HBase's
insertion performance. And I meet some problems when doing heavy
insertion operations ( demon
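One common lever for heavy insertion is client-side batching; a minimal
sketch assuming the 0.20-era HTable API (table name "logs" and family
"d" are illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BulkWriterSketch {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(new HBaseConfiguration(), "logs");
        table.setAutoFlush(false);                 // buffer Puts client-side
        table.setWriteBufferSize(8 * 1024 * 1024); // flush roughly every 8 MB
        for (long i = 0; i < 100000; i++) {
            Put put = new Put(Bytes.toBytes(i));
            put.add(Bytes.toBytes("d"), Bytes.toBytes("msg"),
                Bytes.toBytes("log line " + i));
            table.put(put);
        }
        table.flushCommits(); // push whatever is still buffered
    }
}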
So, any idea how to achieve this using Thrift?
On Fri, Sep 17, 2010 at 2:13 PM, Andrey Stepachev wrote:
> my bad. thrift doesn't support this.
>
> 2010/9/17 Shuja Rehman :
> > Andrey
> >
> > I have checked these filters, but I think they can be used with the Java client
> > API; can you confirm that
my bad. thrift doesn't support this.
2010/9/17 Shuja Rehman :
> Andrey
>
> I have checked these filters, but I think they can be used with the Java client
> API; can you confirm that these can be used with the Thrift API?
>
> On Thu, Sep 16, 2010 at 5:29 PM, Andrey Stepachev wrote:
>
>> DateTime easy, sc
Hi all,
I'm trying to do heavy insertion, but I meet some exceptions as follows. I
guess maybe I should do some optimization to avoid this kind of
exception; can anyone give some suggestions?
Thanks in advance.
[java] 10/09/17 17:09:03 DEBUG
client.HConnectionManager$TableServers: Reloading region
d