No worries.
It was a stray hbase jar that caused the problem.
./zahoor
On 22-Oct-2012, at 10:43 PM, Bryan Beaudreault wrote:
> Oh, sorry.
>
> This sounds like a version mismatch. Do you have the same version
> installed on your servers and being pulled in your hadoop job?
>
> On Mon, Oct 22,
http://hbase.apache.org/book/cf.keep.deleted.html
Without it you cannot do correct as-of-time queries when it comes to deletes.
-- Lars
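Lars's point about KEEP_DELETED_CELLS can be illustrated with a toy model. This is plain Python, not HBase code; the class and method names are made up for the sketch. It shows why, without keeping deleted cells around, an as-of-time read for a timestamp before a delete can no longer see the value that was live at that time.

```python
# Toy model (not HBase code) of versioned cells with delete markers.
# With keep_deleted_cells=True, deleted versions stay readable for
# as-of-time queries; with False, the delete purges covered versions
# (standing in for what compaction would eventually do).

class VersionedCell:
    def __init__(self, keep_deleted_cells=True):
        self.versions = []        # list of (timestamp, value)
        self.delete_markers = []  # list of delete timestamps
        self.keep_deleted_cells = keep_deleted_cells

    def put(self, ts, value):
        self.versions.append((ts, value))

    def delete(self, ts):
        self.delete_markers.append(ts)
        if not self.keep_deleted_cells:
            # Without KEEP_DELETED_CELLS, versions covered by the delete
            # are gone; as-of-time reads before the delete now miss them.
            self.versions = [(t, v) for t, v in self.versions if t > ts]

    def get_as_of(self, ts):
        # Newest version at or before ts that is not covered by a
        # delete marker that is itself at or before ts.
        live = [(t, v) for t, v in self.versions if t <= ts]
        for t, v in sorted(live, reverse=True):
            if not any(t <= d <= ts for d in self.delete_markers):
                return v
        return None
```

With keep_deleted_cells=True, a read as of a timestamp before the delete still returns the old value; with False, the same read returns nothing, which is the "cannot do correct as-of-time queries" problem.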
From: Michael Segel
To: user@hbase.apache.org; lars hofhansl
Sent: Monday, October 22, 2012 9:18 PM
Subject: Re: How to
Hi Henry
When you have only one region there is no specific start and end key for
it. They are just empty bytes.
Do you want to know the first row and the last row in that single
region? Then you need to scan the entire region to know that.
I think there is a bit of confusion here as to ac
>
> Curious, why do you think this is better than using the keep-deleted-cells
> feature?
> (It might well be, just curious)
Ok... so what exactly does this feature mean?
Suppose I have 500 rows within a region. I set this feature to be true.
I do a massive delete and there are only 50 rows l
On Fri, Oct 19, 2012 at 5:22 PM, Amandeep Khurana wrote:
> Answers inline
>
> On Fri, Oct 19, 2012 at 4:31 PM, Dave Latham wrote:
>
>> I need to scale an internal service / datastore that is currently hosted on
>> an HBase cluster and wanted to ask for advice from anyone out there who may
>> have
> Here are a few of my thoughts:
>
> If possible, you might want to localize your data to a few regions
> and then maybe have exclusive access to those regions. This way, external
> load will not impact you. I have heard that the write penalty of SSDs is quite
> high. But I think they
That didn't work either. I still get the ScannerTimeoutException.
org.apache.hadoop.hbase.client.ScannerTimeoutException: 327644ms passed
since the last invocation, timeout is currently set to 6
at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1198)
at o
I don't see multiple attempts for a single task in your log; it is one attempt
per task. You should check how many map tasks your job results in.
Multiple attempts have IDs like attempt_local_0001_m_04_1.
Thanks,
+Vinod
On Oct 22, 2012, at 7:05 AM, Bai Shen wrote:
> attempt_local_
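Vinod's attempt-ID convention (attempt_<jobid>_m_<task>_<attempt>, where the trailing number is the attempt counter) can be checked mechanically. A small illustrative sketch; the helper name and sample IDs are made up for the example:

```python
# Count how many attempts each map task had, given MapReduce attempt IDs
# of the form attempt_<jobid>_m_<task>_<attempt>. A task that was retried
# shows up with a count greater than 1.
from collections import Counter

def attempts_per_task(attempt_ids):
    counts = Counter()
    for aid in attempt_ids:
        parts = aid.split("_")
        # Drop the trailing attempt number to get the task identity.
        task_id = "_".join(parts[:-1])
        counts[task_id] += 1
    return counts
```

For example, seeing both `..._m_000004_0` and `..._m_000004_1` means task 000004 had two attempts, which is the retry signature Vinod says is absent from the log.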
Oh, sorry.
This sounds like a version mismatch. Do you have the same version
installed on your servers and being pulled in your hadoop job?
On Mon, Oct 22, 2012 at 12:32 PM, J Mohamed Zahoor wrote:
> Cool… But my map reduce doesn't even start…
> It fails while creating a record reader...
> The
Hi,
Try setting hbase.rpc.timeout in the client.
More info here in #12.5.2:
http://hbase.apache.org/book/trouble.client.html
/Victor
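A sketch of what Victor's suggestion might look like in the client-side hbase-site.xml; the timeout value shown is an example, not a recommendation from the thread:

```xml
<!-- Client-side hbase-site.xml fragment; 120000 ms is an example value -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value>
</property>
```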
2012/10/22 Bai Shen
> No, I'm not. I tried changing it to hbase.regionserver.lease.period, but
> that's not being picked up either.
>
> What is the proper sett
Cool… But my map reduce doesn't even start…
It fails while creating a record reader...
The record reader fails in
TableMapReduceUtil.convertStringToScan(conf.get(SCAN));
and throws a
java.io.IOException: version not supported
at org.apache.hadoop.hbase.client.Scan.readFields(Scan.java:5
I'm not on 0.94.1, but I've found a lot of situations that can cause
scanner timeouts and other scanner exceptions from M/R. The primary ones
probably still apply in later versions:
- Caching or batching set too high. If caching is set to, e.g. 1000,
and hbase.rpc.timeout is set to 30 sec
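The caching-versus-timeout interaction Bryan describes comes down to simple arithmetic: one client `next()` RPC has to fetch `caching` rows within the RPC timeout. A minimal sketch, with all numbers assumed for illustration:

```python
# Back-of-envelope check of the caching vs. rpc-timeout tradeoff.
# ms_per_row is an assumed server-side cost per row; all numbers here
# are illustrative, not measurements from the thread.

def next_call_fits(caching, ms_per_row, rpc_timeout_ms):
    """One scanner next() RPC fetches `caching` rows; it must finish
    within the RPC timeout or the call fails with a timeout."""
    return caching * ms_per_row <= rpc_timeout_ms
```

With caching of 1000 and an assumed 50 ms per row, a single `next()` would need 50 seconds and blow a 30-second timeout; dropping caching to 100 brings it back under.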
I am using 0.94.1
./zahoor
On 22-Oct-2012, at 9:17 PM, J Mohamed Zahoor wrote:
> Hi
>
> I am facing a scanner exception like this when I run an MR job.
> Both the input and output are hbase tables (different tables)…
> This comes sporadically on some mappers while all the other mappers run fine..
>
Hi
I am facing a scanner exception like this when I run an MR job.
Both the input and output are hbase tables (different tables)…
This comes sporadically on some mappers while all the other mappers run fine..
Even the failed mapper gets passed in the next attempt.
Any clue on what might be wrong?
java.l
I'll give that a try, but I don't recall getting any LeaseExceptions. All
of the ones I saw were ScannerTimeoutExceptions.
On Mon, Oct 22, 2012 at 11:02 AM, Victor Jerlin wrote:
> Hi,
>
> Try setting hbase.rpc.timeout in the client.
>
> More info here in #12.5.2:
> http://hbase.apache.org/book/
Thanks for your responses.
yong
On Mon, Oct 22, 2012 at 3:05 PM, Anoop Sam John wrote:
> To be precise, there will be one memstore per family per region.
> If a table has 2 CFs and there are 10 regions for that table, then there are
> 2*10 = 20 memstores in total.
>
> -Anoop-
> ___
No, I'm not. I tried changing it to hbase.regionserver.lease.period, but
that's not being picked up either.
What is the proper setting for modifying the timeout period? I keep
getting ScannerTimeoutExceptions.
Thanks.
On Fri, Oct 19, 2012 at 12:17 PM, Jean-Daniel Cryans wrote:
> That config i
To be precise, there will be one memstore per family per region.
If a table has 2 CFs and there are 10 regions for that table, then there are
2*10 = 20 memstores in total.
-Anoop-
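Anoop's arithmetic as a one-liner, a trivial illustrative sketch:

```python
# One memstore per column family per region, so the total is just
# the product of the two counts.

def memstore_count(column_families, regions):
    return column_families * regions
```

So 2 CFs across 10 regions gives 20 memstores, as stated above.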
From: Kevin O'dell [kevin.od...@cloudera.com]
Sent: Monday, October 22, 2012 5:55 PM
To:
Yes, there will be two memstores if you have two CFs.
On Oct 22, 2012 7:25 AM, "yonghu" wrote:
> Dear All,
>
> In the description it mentions that a Store (per column family)
> is composed of one memstore and a set of HFiles. Does it imply that
> for every column family there is a correspond