I don't think it's feasible to set a batch interval of 0.25ms. Even at
batch intervals of tens of milliseconds, the overhead of the framework
is a large factor. Do you mean 0.25s = 250ms?
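
For reference, a minimal sketch of where the batch interval gets set;
the app name and local master here are just placeholders, not anything
from your setup:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Milliseconds, StreamingContext}

    object BatchIntervalExample {
      def main(args: Array[String]): Unit = {
        // The batch interval is fixed when the StreamingContext is
        // created; 250 ms (0.25 s) is already quite aggressive.
        val conf = new SparkConf()
          .setAppName("batch-interval-example")
          .setMaster("local[2]")   // local master just for this sketch
        val ssc = new StreamingContext(conf, Milliseconds(250))

        // Input DStreams and output operations would be defined here,
        // followed by ssc.start() and ssc.awaitTermination().
      }
    }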

Related thoughts, and I don't know if they apply to your case:

If you mean, can Spark simply read events off the source that quickly?
Yes, it can.

Sometimes when people say "I need very low-latency streaming because I
need to answer quickly", they are really trying to design a synchronous
API, and I don't think asynchronous streaming is the right architecture
for that.

Sometimes people really mean "I need to process 400 items per ms on
average", which is different and entirely possible.



On Wed, Mar 25, 2015 at 2:53 PM, RodrigoB <rodrigo.boav...@aspect.com> wrote:
> I've been given a feature requirement that means processing events at a
> latency lower than 0.25ms.
>
> Meaning I would have to make sure that Spark Streaming gets new events from
> the messaging layer within that period of time. Has anyone achieved
> such numbers using a Spark cluster? Or is this even possible,
> assuming we don't use the write-ahead logs?
>
> Thanks in advance!
>
> Rod
>
