Thanks, that's what I have:
com.typesafe.akka
akka-stream-kafka_2.12
0.15
and I can see I only have actor 2.12 (which is 2.4.17).
Thanks
Shannon
On Friday, April 28, 2017 at 10:07:55 AM UTC-5, Akka Team wrote:
>
> You should combine the same versions of Scala. If you use
>
Oh, right. Thanks. I understand now why the custom stage was written.
On Fri, Apr 28, 2017 at 9:03 AM, Akka Team wrote:
> The problem is that Future is not really suited for asynchronous work
> since there is no way to chain actions onto it, the only thing you can do
>
I think that would be more API surface than we'd like, but please open a
ticket over at https://github.com/akka/akka/issues and we can discuss if it
is worth doing there.
--
Johan
Akka Team
On Tue, Mar 14, 2017 at 11:14 PM, Richard Ney
wrote:
> I was wondering if anyone
The problem is that Future is not really suited for asynchronous work,
since there is no way to chain actions onto it; the only thing you can do
is to poll or block until it is completed. It would have to be
CompletableFuture.
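As a minimal JDK-only illustration of the difference (the names and values here are mine, not from the thread): a plain `Future` can only be blocked on with `get()`, while a `CompletableFuture` lets you chain a follow-up action without blocking.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class FutureChaining {
    public static void main(String[] args) throws Exception {
        // Plain Future: the only way to get the value is to poll or block with get()
        Future<Integer> plain = CompletableFuture.completedFuture(21);
        int blocked = plain.get(); // blocks the calling thread until completion

        // CompletableFuture: actions can be chained onto it without blocking
        CompletableFuture<Integer> chained =
            CompletableFuture.completedFuture(21).thenApply(x -> x * 2);

        System.out.println(blocked + " " + chained.join());
    }
}
```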
--
Johan
Akka Team
On Fri, Apr 28, 2017 at 5:54 PM, Richard Rodseth
I tried to reproduce your problem, but I cannot see any problems with the
predicate not being honoured. Here's a gist with what I did to reproduce
it: https://gist.github.com/johanandren/7d01d19211867df0c308ba5fb1294162
--
Johan
Akka Team
On Mon, Apr 24, 2017 at 10:35 AM, Ankit Thakur
Thanks. There *is* a version of send in Kafka that returns a Java Future.
public java.util.concurrent.Future<RecordMetadata> send(ProducerRecord<K,V> record)
(RecordMetadata javadoc: https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/producer/RecordMetadata.html)
That is correct, there is one writer and one reader per connection in the
stable remoting, so if serialization is slow/heavy that may be a bottleneck.
In the new remoting, "artery", we have made it possible to run multiple
inbound and outbound "lanes", but that functionality is not yet hardened enough
Looks like you are doing the right thing there, consuming the entity before
responding, could it be that the create methods throw an exception perhaps?
That would lead to the request body not being consumed as far as I can see.
--
Johan
Akka Team
On Thu, Feb 16, 2017 at 6:42 AM, Vasiliy Levykin
You should combine the same versions of Scala. If you use
akka-stream-kafka_2.12 then the Akka version you use (and all other
libraries written in Scala you use in fact) must have the same version of
Scala since the major versions of Scala (2.11, 2.12) are not binary
compatible. In addition to
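For example, with sbt the `%%` operator appends the project's Scala binary version to the artifact name automatically, which keeps every Scala dependency on the same binary version. A sketch, assuming Scala 2.12 and the versions mentioned in this thread:

```scala
// build.sbt (sketch): %% resolves "akka-stream-kafka" to akka-stream-kafka_2.12
// when scalaVersion is 2.12.x, avoiding mixed _2.11/_2.12 artifacts
scalaVersion := "2.12.1"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-stream-kafka" % "0.15",
  "com.typesafe.akka" %% "akka-actor"        % "2.4.17"
)
```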
Viktor,
On Fri, Apr 28, 2017 at 5:03 PM, Viktor Klang
wrote:
>
>
> On Fri, Apr 28, 2017 at 1:12 PM, Shiva Ramagopal
> wrote:
>
>> Hi Viktor,
>>
>> On Fri, Apr 28, 2017 at 2:55 PM, Viktor Klang
>> wrote:
>>
>>> Hi Shiva,
>>>
With 2.12, I had to update the code:
return new ProducerMessage.Message(
    new ProducerRecord("akkatest", msg.record().key(), msg.record().value()),
    msg.committableOffset());
but getting
Exception in
My Kafka version is 0.10.1.1; I should use akka-stream-kafka_2.12, right?
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ:
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> Search the archives:
The Cassandra session provides an async method session.executeAsync which
returns a CompletableFuture/Future which makes it usable with mapAsync
while Kafka has a callback based async api, where you trigger an action and
pass a callback to execute when the action has completed. It would not be
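The callback-to-future bridge this alludes to can be sketched with plain JDK types. Here `sendAsync` is a hypothetical stand-in for a callback-based API like Kafka's, not the real client; the adapter completes a `CompletableFuture` from the callback so the result can feed a stage like mapAsync.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BiConsumer;

public class CallbackAdapter {
    // Hypothetical callback-style API: invokes the callback with (result, error)
    static void sendAsync(String record, BiConsumer<String, Throwable> callback) {
        callback.accept("ack:" + record, null);
    }

    // Bridge the callback into a CompletableFuture so downstream actions can chain on it
    static CompletableFuture<String> send(String record) {
        CompletableFuture<String> promise = new CompletableFuture<>();
        sendAsync(record, (result, error) -> {
            if (error != null) promise.completeExceptionally(error);
            else promise.complete(result);
        });
        return promise;
    }

    public static void main(String[] args) {
        System.out.println(send("msg-1").join());
    }
}
```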
Looks like it is the message type; from another example, when I do this, it
passes compilation:
s.map(new Function<..., ProducerMessage.Message<...>>() {
    public ...
The RequestContext only contains the part of the path that is not yet
matched, it doesn't really know anything else about how you consumed the
path. "hello"/Segment/"world" would consume up to the end of "world", so I
cannot see how you would achieve the getName method as you describe it. I
think
Hi,
When I try those exact sources it does gracefully shutdown after processing
all of them.
What you want it to do does not match what it actually does, however.
If you want each state to process sequentially in an actor, you will need
to have some protocol for completion of the processing
Here is my code
final ActorSystem system = ActorSystem.create();
ActorMaterializer materializer = ActorMaterializer.create(system);
final ConsumerSettings consumerSettings =
ConsumerSettings.create(system, new SpecificAvroDeserializer(), new
Thanks, yes I am putting in the materializer:
runWith(Producer.commitableSink(producerSettings), materializer);
Is this a Java and/or Scala version issue?
I am using
com.typesafe.akka
akka-stream-kafka_2.11
0.11-RC2
and Scala library 2.11.4
On Thursday, April 27, 2017 at 8:52:03 PM
If you look at the sources of Pool.props(props) all it does is calling
props.withRouter(this), so the two options are pretty much identical so you
can choose whichever you like best.
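As a non-runnable fragment of what the two equivalent configurations look like in Akka's Java API (`Worker` here is a hypothetical actor class, and round-robin is just one example router):

```java
// Option 1: let the pool wrap the worker Props
Props viaPool = new RoundRobinPool(5).props(Props.create(Worker.class));

// Option 2: attach the router to the worker Props directly
Props viaWithRouter = Props.create(Worker.class).withRouter(new RoundRobinPool(5));
```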
--
Johan
Akka Team
On Wed, Apr 12, 2017 at 6:27 PM, Tu Pham Phuong wrote:
> My system run
On Fri, Apr 28, 2017 at 1:12 PM, Shiva Ramagopal wrote:
> Hi Viktor,
>
> On Fri, Apr 28, 2017 at 2:55 PM, Viktor Klang
> wrote:
>
>> Hi Shiva,
>>
>> On Fri, Apr 28, 2017 at 11:20 AM, Shiva Ramagopal
>> wrote:
>>
>>> I'm looking to
Hi Viktor,
On Fri, Apr 28, 2017 at 2:55 PM, Viktor Klang
wrote:
> Hi Shiva,
>
> On Fri, Apr 28, 2017 at 11:20 AM, Shiva Ramagopal
> wrote:
>
>> I'm looking to compare Kafka Streams vs Akka Streams in two areas:
>>
>> 1. For ingesting between Kafka and
Note that the Java akka-testkit and multi-node-testkit support are
different things though (ticket 18109 is about akka-testkit). Support for
multi-jvm tests in Java would require both new Java APIs and tooling
support (maven plugin?). The tooling support part is quite complicated and
the reason
Hi guys,
Do I understand correctly that we have a single instance of
`EndpointReader` actor per connection ?
Thus deserialisation of remote messages is done sequentially, so we can
hit this bottleneck much faster than the network throughput limit?
--
>> Read the docs:
Hi Shiva,
On Fri, Apr 28, 2017 at 11:20 AM, Shiva Ramagopal wrote:
> I'm looking to compare Kafka Streams vs Akka Streams in two areas:
>
> 1. For ingesting between Kafka and HDFS/RDBMS
>
> Requirements are mainly around performance and latency. A Kafka topic can
> have
I'm looking to compare Kafka Streams vs Akka Streams in two areas:
1. For ingesting between Kafka and HDFS/RDBMS
Requirements are mainly around performance and latency. A Kafka topic can
have several million events, each corresponding to a database change
capture. When ingesting this topic into