True. But this seems to be a separate issue:
https://issues.cloudera.org/browse/FLUME-390.

Alex Baranau

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - Hadoop - HBase

On Tue, Jun 28, 2011 at 7:40 PM, Doug Meil wrote:

> I agree with what Todd & Gary said. I don't like retry-forever,
> especially as a default option in HBase.
Cc: Jonathan Hsieh
Sent: Tuesday, June 28, 2011 9:40 AM
Subject: RE: Retry HTable.put() on client-side to handle temp connectivity
problem

I agree with what Todd & Gary said. I don't like retry-forever,
especially as a default option in HBase.
-----Original Message-----
From: Gary Helmling [mailto:ghelml...@gmail.com]
Sent: Tuesday, June 28, 2011 12:18 PM
To: dev@hbase.apache.org
Cc: Jonathan Hsieh
Subject: Re: Retry HTable.put() on client-side to handle temp connectivity
problem
I'd also be wary of changing the default to retry forever. This might be
hard to differentiate from a hang or deadlock for new users and seems to
violate "least surprise".

In many cases it's preferable to have some kind of predictable failure as
well. So I think this would appear to be a regression.
With Flume's store-and-forward, why do we need retry-forever on the HBase
side? It seems to me that if the sink "dies" for some reason, then it should
push that back to the upstream parts of the Flume dataflow, and have them
buffer data on local disk.
-Todd
On Mon, Jun 27, 2011 at 1:56 PM, Alex Baranau wrote:
If I could override the default, I'd be a hesitant +1. I'd rather see
the default be something like retry 10 times, then throw an error,
with infinite retries available as an option.
-Joey
On Mon, Jun 27, 2011 at 2:21 PM, Stack wrote:
I'd be fine with changing the default in HBase so clients just keep
trying. What do others think?
St.Ack
On Mon, Jun 27, 2011 at 1:56 PM, Alex Baranau wrote:
The code I pasted works for me: it reconnects successfully. Just thought it
might not be the best way to do it. I realized that by using HBase
configuration properties we could just say that it's up to the user to
configure the HBase client (created by Flume) properly (e.g. by adding
hbase-site.xml with s...).
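The hbase-site.xml approach Alex mentions could look roughly like this. A sketch only: the two property names are the standard client retry settings of that era, but the values shown are illustrative, not recommendations:

```xml
<!-- hbase-site.xml placed on the Flume sink's classpath -->
<configuration>
  <!-- how many times the client retries before giving up -->
  <property>
    <name>hbase.client.retries.number</name>
    <value>10</value>
  </property>
  <!-- base pause in milliseconds between client retries -->
  <property>
    <name>hbase.client.pause</name>
    <value>1000</value>
  </property>
</configuration>
```

This keeps the retry policy out of the sink's code entirely, at the cost of requiring each deployment to ship its own configuration.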
Either should work, Alex. Your version will go "for ever". Have you
tried yanking HBase out from under the client to see if it reconnects?
Good on you,
St.Ack
On Mon, Jun 27, 2011 at 1:33 PM, Alex Baranau wrote:
Yes, that's what's intended, I think. To make the whole picture clear, here's
the context:

* there's Flume's HBase sink (read: an HBase client) which writes data from
a Flume "pipe" (read: some event-based message source) to an HTable;
* when HBase is down for some time (with the default HBase configuration...
This would retry indefinitely, right?

Normally a maximum retry duration would govern how long the retry is
attempted.
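A duration-governed loop of the kind described here might look like the following sketch. The class and method names are illustrative, not HBase API; in the sink, the `Runnable` body would be the real `htable.put(put)` call.

```java
/** Sketch: retry until a wall-clock deadline, not a fixed attempt count. */
public class DeadlineRetry {

    /** Runs tryPut until it succeeds or maxDurationMs elapses. */
    public static void putWithDeadline(Runnable tryPut, long maxDurationMs,
                                       long pauseMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxDurationMs;
        while (true) {
            try {
                tryPut.run();   // stands in for htable.put(put)
                return;         // write went through
            } catch (RuntimeException e) {
                if (System.currentTimeMillis() >= deadline) {
                    throw e;    // duration exhausted: surface the failure
                }
                Thread.sleep(pauseMs);  // pause before the next attempt
            }
        }
    }
}
```

Unlike a fixed retry count, this bounds the caller's worst-case blocking time regardless of how the pause between attempts is configured.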
On Mon, Jun 27, 2011 at 1:08 PM, Alex Baranau wrote:
Hello,

Just wanted to confirm that I'm doing things in a proper way here. How about
this code to handle temp cluster connectivity problems (or cluster down
time) on the client side?

+// HTable.put() will fail with an exception if the connection to the cluster is
+// temporarily broken or the cluster is down
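Alex's pasted code didn't survive in this archive beyond the two comment lines above, so the following is a reconstruction of the general shape being discussed, not the original patch. The `WriteRetrier` name and the `Callable` wrapping are assumptions; in the Flume sink the wrapped call would be the actual `htable.put()`, and `maxAttempts <= 0` stands for the "retry forever" behavior under debate.

```java
import java.util.concurrent.Callable;

/** Sketch of a client-side retry wrapper (illustrative, not HBase API). */
public class WriteRetrier {

    /**
     * Runs op until it succeeds. maxAttempts <= 0 means retry forever;
     * otherwise the last exception is rethrown once attempts are used up.
     */
    public static <T> T withRetries(Callable<T> op, int maxAttempts, long pauseMs)
            throws Exception {
        int attempt = 0;
        while (true) {
            attempt++;
            try {
                return op.call();   // e.g. htable.put(put) in the Flume sink
            } catch (Exception e) {
                if (maxAttempts > 0 && attempt >= maxAttempts) {
                    throw e;        // predictable failure after N attempts
                }
                Thread.sleep(pauseMs);  // back off before retrying
            }
        }
    }
}
```

With `maxAttempts = 10` this matches Joey's suggested default; with `maxAttempts = 0` it reproduces the retry-forever behavior the comment above describes.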