Hi Bhaskar,
It could be a good solution, thanks :)

On 29/01/2015 16:00, Bhaskar V. Karambelkar wrote:
Loic,
One thing you can do is write data to a queue (say Redis) using a non-blocking client, and use a Redis source to feed Flume. That'll give you non-blocking push operations on the client side. It does add another component to the mix, however.
We do something similar, and it scales quite well.
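To make the hand-off pattern concrete, here is a minimal sketch using an in-memory `ArrayBlockingQueue` in place of Redis (the class name and capacity are illustrative, not from any real deployment). With Redis, the `push` would be an async RPUSH from a non-blocking client, and the `drain` side would be the source feeding Flume:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the queue hand-off: the producer offers events to a bounded
// queue without ever blocking; a separate consumer drains the queue and
// forwards events to Flume at its own pace.
public class QueueHandoff {
    private final BlockingQueue<String> buffer;

    public QueueHandoff(int capacity) {
        this.buffer = new ArrayBlockingQueue<>(capacity);
    }

    /** Non-blocking push: returns false (instead of blocking) when the queue is full. */
    public boolean push(String event) {
        return buffer.offer(event);
    }

    /** Consumer side: take one event if available, else null (would forward to Flume). */
    public String drain() {
        return buffer.poll();
    }

    public static void main(String[] args) {
        QueueHandoff q = new QueueHandoff(2);
        System.out.println(q.push("e1")); // true
        System.out.println(q.push("e2")); // true
        System.out.println(q.push("e3")); // false: queue full, caller is not blocked
        System.out.println(q.drain());    // e1 (FIFO order)
    }
}
```

The bounded capacity is the important design choice: when the downstream (Flume) falls behind, the producer sees a fast `false` and can drop or retry, rather than having its threads pile up in blocked sends.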

On Thu, Jan 29, 2015 at 1:45 AM, Loic Descotte <[email protected]> wrote:

    Thanks for the link, I'll take a look.

    2015-01-29 7:43 GMT+01:00 Hari Shreedharan <[email protected]>:

        In your case, you'd end up having too many threads - that is
        right. I don't know if there is anything you can do about it right now.

        The interesting thing is that the underlying Flume Avro RPC
        client is actually non-blocking. It is exposed as a blocking
        client for ease of use. Also, there is a bug in Avro that can
        cause the client to block during an initial handshake -
        https://issues.apache.org/jira/browse/AVRO-1122

        For now, I am not sure what we can do. But if you do happen to
        have some time, please take a look!


        Hari

        On Wed, Jan 28, 2015 at 10:35 PM, Loic Descotte
        <[email protected]> wrote:

            Of course this kind of problem can always be solved on the
            server side, but it would be better to be able to handle
            latency with non-blocking IO on the client. In particular
            if you are working with asynchronous web frameworks like
            Play Framework (or Spray.io), which work best with a low
            number of threads.


-- Loïc Descotte
    http://about.me/loicd
