Hi Jakub,
On Tue, Mar 3, 2015 at 1:07 PM, Jakub Liska liska.ja...@gmail.com wrote:
Hey,
I'm trying to design a stream processing of hundreds of thousands of files
row by row, reading files lazily. It comes with the obligation to close the
InputStream at the end so that creating an
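The message is cut off above; as a rough illustration of the close-on-completion concern it raises, here is a plain-JDK sketch (the names are mine, not from the thread) that reads a file lazily line by line while guaranteeing the underlying reader is closed:

```scala
import java.io.{ BufferedReader, File, FileReader, PrintWriter }

// Hedged sketch: process a file's lines lazily while guaranteeing the
// underlying reader is closed, even if the processing function throws.
def withLines[A](path: String)(f: Iterator[String] => A): A = {
  val reader = new BufferedReader(new FileReader(path))
  try f(Iterator.continually(reader.readLine()).takeWhile(_ != null))
  finally reader.close()
}

// Usage against a throwaway file:
val tmp: File = {
  val t = File.createTempFile("rows", ".txt")
  val pw = new PrintWriter(t)
  pw.println("row1"); pw.println("row2"); pw.close()
  t
}
val count = withLines(tmp.getPath)(_.size)
```

The iterator is only valid inside the callback, which is exactly the tension the original question runs into when the consumer is a stream stage rather than a local function.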
Hi Giovanni,
1) The type needs to exist somewhere, and if your code doesn't care about it,
you can accept a Source[T, _]
2) For consistency, one of the sides was chosen, and it's not that uncommon
that you are interested in the materialized value of the Source. If you are
interested in the
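To illustrate point 1 without depending on akka-stream itself, here is a sketch using a hypothetical stand-in type (this is NOT the real akka.stream.scaladsl.Source; it only shows why wildcarding the materialized type works):

```scala
// Hypothetical stand-in for Source[Out, Mat], just to illustrate the
// wildcard: code that doesn't care about the materialized value can
// accept any Mat.
final case class FakeSource[Out, Mat](elems: List[Out], mat: Mat)

def firstElement[T](src: FakeSource[T, _]): Option[T] = src.elems.headOption

// Works regardless of the materialized value type:
val withUnitMat = FakeSource(List(1, 2, 3), ())
val withRefMat  = FakeSource(List("a", "b"), "pretend-ActorRef")
```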
Hi,
Sorry in advance if this has already been answered somewhere, I couldn't
find it...
I have a PersistentActor, and I use persistAsync to persist its events. Now
I want to add snapshots into the mix. So from what I understand from the
documentation, having a snapshot with a timestamp T
Hi Tim,
I guess you are talking about implementing your own SnapshotStore.
1) If you are trying to implement your own, then you can do whatever you like
below the surface of the SnapshotStore API, but you can't (yet) have multiple
snapshot stores (https://github.com/akka/akka/issues/15587)
Hi,
If you're on 1.0-M4, have you looked at runWith on Flow that takes both a
Source and a Sink and gives you a Tuple of the materialized values?
B/
On 1 March 2015 at 23:23:58, Jelmer Kuperus (jkupe...@gmail.com) wrote:
Suppose you have an akka stream backed by an ActorPublisher that listens
On Tue, Mar 3, 2015 at 11:37 AM, Björn Antonsson
bjorn.antons...@typesafe.com wrote:
Hi Brice.
I just noticed the other discussion. You are right, the Region that is
persisted contains an actor ref, and that serialized ref contains full
address information and uid, so eventually it will be
Hi Brice,
Are you sure that it is the Sharding that is the issue and not something in the
messages that you send to the sharded actors? As far as I can see, the sharding
itself only persists string IDs of the entities and if you don't include any
address specific information in there or
Thanks Jim.
On Tuesday, 3 March 2015 at 01:29:04 UTC+1, Jim Hazen wrote:
I think the answer is in the {Scala, Java}Docs of mapAsync:
Transform this stream by applying the given function to each of the
elements as they pass through this processing step. The function returns a
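The quoted doc is cut off above; as a rough illustration of the ordering guarantee mapAsync provides, here is a plain-Future sketch (no Akka required, and not the real implementation): like Future.sequence, mapAsync emits results in upstream order even when the futures complete out of order.

```scala
import scala.concurrent.{ Await, ExecutionContext, Future }
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// Futures finish out of order (10ms before 20ms before 30ms), but the
// collected results still follow the input order.
val elems = List(30, 10, 20)
val futures = elems.map(n => Future { Thread.sleep(n.toLong); n })
val results = Await.result(Future.sequence(futures), 5.seconds)
```

If you don't need that ordering, mapAsyncUnordered trades it away for potentially better throughput.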
1) *embarrassed cough* I kind of had a feeling you couldn't do it, but I
completely forgot I reported it!
Thanks!
Tim
On 3 March 2015 at 15:52, Björn Antonsson bjorn.antons...@typesafe.com
wrote:
Hi Tim,
I guess you are talking about implementing your own SnapshotStore.
1) If you are trying to
Hi,
First off, there is no support for serializing parts of a graph and sending them
to other nodes for execution in akka-streams. That would be awesome to have,
but it's not there yet.
Second, your shape in the graph is no longer a UniformFanOutShape, since you
have connected a Source to the
Hi Soumya,
Yes, that can definitely be achieved.
I've pulled some code out from a test and put it in a gist here
https://gist.github.com/bantonsson/881f831db93ec474f9bd
There is a fair amount of scaffolding, including creating a binary blob file
and uploading it to the server via curl.
The
Sorry if this is perhaps too general for this group, but I've been struggling
to find somewhere appropriate to ask.
I have done quite a bit of reading but haven't found anywhere yet that
answers my questions in a way my small brain can understand.
I understand what Asynchronous and Non-Blocking I/O
I just ran into this issue. What's the drawback of
setting akka.io.tcp.windows-connection-abort-workaround-enabled = off?
--
Read the docs: http://akka.io/docs/
Check the FAQ:
http://doc.akka.io/docs/akka/current/additional/faq.html
Search the archives:
Is there an ETA for akka-http on 2.4?
According to Mathias, Spray won't be ported to 2.4:
https://groups.google.com/d/msg/spray-user/x0KdMn_7exE/B8Rp2xuSa2sJ and
according to you akka-http also isn't yet ready for 2.4.
I'd like to develop a REST service that takes advantage of cluster sharding
Thank you for your response. I have to admit the use-case I was trying for
is unusual and it was dependent on being able to serialize parts of the
graph to other nodes. I did see the nice cookbook sample of balancing out
processing to a number of workers, but in my case there would be no merge
Hi Adam,
I’m sorry to say that we currently have no plans to participate in the Google
Summer of Code.
Regards,
Roland
On 2 Mar 2015 at 22:49, adam kozuch adam.koz...@gmail.com wrote:
Hello,
I would like to ask if anyone from the Akka Team would like to be a Google
Summer of Code mentor this
Hi Jim,
Here is the original mailing list discussion
https://groups.google.com/forum/#!topic/akka-user/WdXWjcnVWiQ
and the corresponding ticket https://github.com/akka/akka/issues/15766
In short, when you set it to off on Windows and a client aborts, the server
might in some cases not notice
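For reference, the flag under discussion is a one-line setting in application.conf:

```hocon
# Disable the Windows connection-abort workaround (see akka/akka#15766)
akka.io.tcp.windows-connection-abort-workaround-enabled = off
```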
Hey,
when ActorPublisher does
onError(exceptionRegisteredInSupervisionDecider), the stream just
fails with that exception. The supervision strategy doesn't work here. Is it
supposed to, or won't it work for ActorPublishers?
Hi Luis,
It should not get stuck, it should throw. But this will not work:
broadcast.out(i) ~> worker ~> merge.in(i)
You imported worker once; you cannot use it N times. You can either use
builder.add to add it as many times as you need (the parametric import you
used only matters if you want to
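A hedged reconstruction of the builder.add pattern Endre describes, written against the experimental 1.0-M4 FlowGraph API (which has since changed substantially), so treat it as a sketch rather than compile-ready code:

```scala
import akka.stream.{ FlowShape, Graph }
import akka.stream.scaladsl.{ Balance, FlowGraph, Merge }

object Balancer {
  def apply[A, B](nrOfWorkers: Int,
                  workerGraph: Graph[FlowShape[A, B], Unit]): Graph[FlowShape[A, B], Unit] =
    FlowGraph.partial() { implicit builder =>
      import FlowGraph.Implicits._
      val balance = builder.add(Balance[A](nrOfWorkers))
      val merge   = builder.add(Merge[B](nrOfWorkers))
      // builder.add creates a fresh copy of the worker graph per lane;
      // wiring one imported worker into N lanes is what fails
      for (i <- 0 until nrOfWorkers)
        balance.out(i) ~> builder.add(workerGraph) ~> merge.in(i)
      FlowShape(balance.in, merge.out)
    }
}
```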
Hi Björn,
I am not sure if I understand you.
The piece of software I am creating listens to messages posted to topics;
when a message arrives we perform some operation (call an external system).
The flow of messages will never stop. New ones will keep coming in. But we
still want to bring
Adding as many worker instances as I need using builder.add works, but with
one caveat... this works:
object Balancer {
def apply[A, B, C](nrOfWorkers: Int, workerGraph: Graph[FlowShape[A, B],
Unit]): Graph[FlowShape[A, B], Unit] =
FlowGraph.partial() { implicit builder =>
import
I'm trying to create a generic load balancer and the code looks like this:
import akka.stream.scaladsl.{ Merge, Broadcast, FlowGraph }
import akka.stream.{ FlowShape, Graph }
object Balancer {
def apply[A, B, C](nrOfWorkers: Int, workerGraph: Graph[FlowShape[A, B],
Unit]): Graph[FlowShape[A,
Hi,
I thought your original question was about getting hold of the ActorRef of your
ActorProducer so you could communicate with it, and the way to get the
materialized value of both the Source and the Sink is to use runWith on the
Flow, and pass in the Source and the Sink.
Here is some
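Björn's code sample is truncated above; here is a minimal sketch of the runWith pattern he describes, against the 1.0-M4-era API (signatures have since changed), so treat it as pseudocode:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorFlowMaterializer
import akka.stream.scaladsl.{ Flow, Sink, Source }

implicit val system = ActorSystem("demo")
implicit val mat = ActorFlowMaterializer()

// runWith on a Flow attaches both ends at once and returns a tuple of
// the materialized values of the Source and the Sink.
val flow = Flow[Int].map(_ * 2)
val (sourceMat, sinkFuture) = flow.runWith(Source(1 to 3), Sink.fold(0)(_ + _))
```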
Btw, have you looked at the actual cookbook sample?
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M4/scala/stream-cookbook.html#Balancing_jobs_to_a_fixed_pool_of_workers
On Tue, Mar 3, 2015 at 2:56 PM, Endre Varga endre.va...@typesafe.com
wrote:
Hi Luis,
It should not get stuck
Hi,
I've looked at this, and it actually looks like a bug. I've opened a ticket
here https://github.com/akka/akka/issues/16982
Thanks for reporting this.
B/
On 28 February 2015 at 12:28:07, Igor Perevozchikov (iperevozchi...@gmail.com)
wrote:
Hello!
I'm very inspired by Reactive streams
Thank you very much, that explains my issue!
I am still not sure if WriteMessageSuccessful can arrive after the actor is
stopped. I can find no reference to it in the docs...
I will find out after running code with PoisonPill instead of stop(self()),
I guess!
Thank you!
Anders
Monday, 2 March
I suspect it is this issue https://github.com/akka/akka/issues/16979
The root cause just isn't printed out:
Cause: java.lang.IllegalStateException: Processor actor terminated abruptly
Hi Steve,
I guess that wrapping it is one option. Another option would be to open a
ticket and backport/cherry-pick the fixes to 2.3.x for inclusion in future
minor releases. The changes look binary compatible.
B/
On 2 March 2015 at 02:02:29, Steve Ramage (s...@sjrx.net) wrote:
Like in this
Hi,
To be able to deserialize the objects you need to use Java Serialization, and
the bytestrings contain the data in the order that you wrote it. One bytestring
might contain both objects, or in the case of large objects, only part of an
object. Writing the object output stream straight into
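As a rough illustration of the framing point above (a sketch, not from the thread): two objects written back-to-back land in one contiguous byte buffer, and a single ObjectInputStream hands them back in write order, with no marker you can use to split the buffer yourself.

```scala
import java.io.{ ByteArrayInputStream, ByteArrayOutputStream,
                 ObjectInputStream, ObjectOutputStream }

// Write two objects into one buffer, the way consecutive ByteStrings
// might arrive concatenated on the wire.
val bytes: Array[Byte] = {
  val bos = new ByteArrayOutputStream()
  val oos = new ObjectOutputStream(bos)
  oos.writeObject("first")
  oos.writeObject(Vector(1, 2, 3))
  oos.close()
  bos.toByteArray
}

// A single ObjectInputStream over the whole buffer recovers both, in order.
val in = new ObjectInputStream(new ByteArrayInputStream(bytes))
val a = in.readObject().asInstanceOf[String]
val b = in.readObject().asInstanceOf[Vector[Int]]
```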
I think I found a solution.
*isPersistent is false while receiving the SnapshotOffer, but true while
receiving other persisted events from the journal.*
So a solution is to add an isPersistent check beside every onTransition case
that I don't want to get executed on the initial recovery from snapshots.
I
Hi Luis,
On Tue, Mar 3, 2015 at 3:08 PM, Luis Ángel Vicente Sánchez
langel.gro...@gmail.com wrote:
Thank you Endre! Yes, I have seen that example. In my use case, workers
are also partial graphs
That does not matter. All Flows are partial graphs with exactly the shape
that you use
Ok, then I will use the Flow API to simplify my code :) Thanks!
2015-03-03 14:31 GMT+00:00 Endre Varga endre.va...@typesafe.com:
Hi Luis,
On Tue, Mar 3, 2015 at 3:08 PM, Luis Ángel Vicente Sánchez
langel.gro...@gmail.com wrote:
Thank you Endre! Yes, I have seen that example. In my use