Re: [akka-user] akka 2.4.2 [java] stopping http-server and flow handling IncomingConnection

2016-04-03 Thread paweł kamiński
yeah, I've just seen it was released. thanks guys - just in time ;]

On Friday, 1 April 2016 11:48:39 UTC+2, drewhk wrote:
>
> Or wait until 2.4.3 and use the KillSwitch feature, as it was mentioned 
> before:
>
> 1. unbind server
> 2. wait until grace period, then trigger KillSwitch.shutdown or abort
> 3. shut down ActorSystem
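That three-step sequence can be sketched with the 2.4.3 javadsl roughly as follows. This is a hedged sketch, not code from the thread: `binding`, `system`, and the 5-second grace period are assumptions, and `KillSwitches.shared` is the 2.4.3 API for a kill switch shared across materializations.

```java
import java.util.concurrent.TimeUnit;

import akka.actor.ActorSystem;
import akka.http.javadsl.ServerBinding;
import akka.stream.KillSwitches;
import akka.stream.SharedKillSwitch;

class GracefulShutdown {
    // One shared kill switch; every connection handler is materialized
    // through killSwitch.flow(), e.g.:
    //   connection.handleWith(handler.via(killSwitch.flow()), materializer);
    static final SharedKillSwitch killSwitch = KillSwitches.shared("http-shutdown");

    // 1. unbind, 2. wait out the grace period then trigger the kill switch,
    // 3. terminate the ActorSystem.
    static void shutdown(ServerBinding binding, ActorSystem system) {
        binding.unbind().thenRun(() -> {
            try {
                TimeUnit.SECONDS.sleep(5); // grace period for in-flight requests
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            killSwitch.shutdown();         // or killSwitch.abort(cause)
            system.terminate();
        });
    }
}
```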
>
> -Endre
>
> On Fri, Apr 1, 2016 at 11:33 AM, Viktor Klang <viktor...@gmail.com 
> > wrote:
>
>> Call 'shutdown' on the ActorSystem? Or only the Materializer?
>>
>> On Fri, Apr 1, 2016 at 11:29 AM, paweł kamiński <kam...@gmail.com 
>> > wrote:
>>
>>> I want to clean the state of the app, so I could either
>>>
>>> 1) terminate the actor system, which would also terminate the http-server, and
>>> start it all over again,
>>> OR
>>> 2) shut down just the http-server in reaction to some fault state of the
>>> actor hierarchy.
>>>
>>> Konrad's comment states that it is not enough just to unbind from the port,
>>> as active connections and their flows keep operating, so I need to terminate
>>> each flow after unbinding.
>>> At this point I have no other requirements.
>>>
>>> On Friday, 1 April 2016 11:05:57 UTC+2, √ wrote:
>>>>
>>>> When you say "cancel" do you mean "abrupt termination" or something 
>>>> with a code path attached to it?
>>>>
>>>> On Thu, Mar 31, 2016 at 11:53 PM, paweł kamiński <kam...@gmail.com> 
>>>> wrote:
>>>>
>>>>> don't take it personally - there is a lot to process and sometimes it is
>>>>> easy to overlook the most important part. I guess I didn't get the context
>>>>> of the last sentence.
>>>>>
>>>>> thanks
>>>>>
>>>>> On Thursday, 31 March 2016 23:45:03 UTC+2, Konrad Malawski wrote:
>>>>>>
>>>>>> In 2.4.2 you can do the same in a GraphStage (read up on custom 
>>>>>> stream processing – cancel() and friends).
>>>>>>
>>>>>> -- 
>>>>>> Cheers,
>>>>>> Konrad 'ktoso' Malawski
>>>>>> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>>>>>>
>>>>>> On 31 March 2016 at 23:38:30, paweł kamiński (kam...@gmail.com) 
>>>>>> wrote:
>>>>>>
>>>>>> hej Konrad, can you please point me to any documentation describing
>>>>>> how to manually cancel a live flow? the official documentation assumes that
>>>>>> flows are infinite or that the source will eventually be drained - so no problem.
>>>>>> I cannot find any section on the topic of cancelling a flow's
>>>>>> subscription.
>>>>>>
>>>>>> On Wednesday, 30 March 2016 10:39:35 UTC+2, paweł kamiński wrote: 
>>>>>>>
>>>>>>> I thought bold is the new black, coming back with a bang...
>>>>>>>
>>>>>>> ok, your explanation kind of makes sense ;] all I expected was a
>>>>>>> shutdown method with a timeout.
>>>>>>> and yeah, a section with examples would be nice, and I can write one
>>>>>>> once I understand how it works ;]
>>>>>>>
>>>>>>> I understand that I need to hold on to each connection and then iterate
>>>>>>> over them after unbind.
>>>>>>> but I need a hint on how to "cancel the flow's upstream subscription"; I
>>>>>>> can't see any useful method on connection or connection.flow().
>>>>>>>
>>>>>>> and one more thing - until now I thought that creating a flow from an
>>>>>>> actor would make such a flow a child of the actor, so stopping the actor would
>>>>>>> also stop the flow's actors. obviously it does not, and the flow keeps
>>>>>>> consuming incoming messages even though the actor is long gone.
>>>>>>>
>>>>>>> On Wednesday, 30 March 2016 00:32:24 UTC+2, Konrad Malawski wrote: 
>>>>>>>>
>>>>>>>> Helloł,
>>>>>>>>
>>>>>>>> I notice that after unbind I cannot create a new connection (ie. from a
>>>>>>>> browser) but old, alive connections can still send requests. Am I missing
>>>>>>>> something?

Re: [akka-user] ANNOUNCE: Akka 2.4.3 released!

2016-04-03 Thread paweł kamiński
+1

On Friday, 1 April 2016 18:38:32 UTC+2, drewhk wrote:
>
> Yay!
>
> On Fri, Apr 1, 2016 at 6:37 PM, Konrad Malawski wrote:
>
>> ANNOUNCE: Akka 2.4.3 Released
>>
>> Dear hakkers,
>>
>> we—the Akka committers—are proud to announce the third patch release of 
>> Akka 2.4.
>>
>> This release focused mostly on hardening and polishing of existing 
>> features. We fixed a total of 58 flaky tests or bugs, continuing towards 
>> our stretch goal of reaching a clean slate in terms of known defects. 
>>
>> The key features of the 2.4.3 release are:
>>
>>
>> - KillSwitch streams operator, which allows external completion of streams,
>> - the headerValueByType directive can now also handle ModeledCustomHeaders,
>> - the “no elements passed since 1 minute” error log is now a debug
>>   message, as it is a fully expected thing to happen (please remember that
>>   idle connections in Akka HTTP will be automatically closed after an
>>   idle-timeout; timeouts in Akka HTTP now have a documentation page),
>> - performance improvements in flatMapMerge,
>> - performance improvements in materialized value computation,
>> - fixed HTTPS support, which broke due to behaviour changes in the JDK
>>   with respect to SNI handling,
>> - new search engine (powered by Algolia) for our docs, which should make
>>   navigating the documentation much more pleasant.
>>
>> Akka HTTP JavaDSL status update
>>
>> We are intensely working on introducing the new Routing Java DSL for Akka 
>> HTTP, which aims to be as close as possible to the Directives-style API 
>> that the Scala DSL provides. The majority of this work was contributed by 
>> Jan Ypma, and we continue to work together to push it over the finish line 
>> very soon – at which point we will release Akka 2.4.4.
>>
>> If you have been using the old Routing Java DSL this change will mean 
>> that these routes will have to be implemented using the new DSL when you 
>> upgrade. We are certain that the new DSL will be to your liking – we 
>> listened to lots of feedback from the community when deciding to make this 
>> change. 
>>
>> As such, the entire old routing dsl will be removed and replaced with the 
>> new DSL in the upcoming 2.4.4 release.
>>
>> Transparency of core Akka team plans
>>
>> For the sake of improving transparency and inclusiveness of our community 
>> we decided to publish the core team’s sprint planning notes on the 
>> akka-meta repository, such that interested parties can have a look at what 
>> we’re focusing on in the next weeks. 
>>
>> We hope that this experiment will work out well and improve awareness of 
>> where our efforts are spent, or where it would be a good time to contribute 
>> pull requests. 
>>
>> The plan which included shipping Akka 2.4.3 is available in the akka-meta
>> repository.
>>
>> Binary Compatibility
>>
>> Akka 2.4.x is backwards binary compatible with previous 2.3.x versions 
>> (exceptions listed below). This means that the new JARs are a drop-in 
>> replacement for the old one (but not the other way around) as long as your 
>> build does not enable the inliner (Scala-only restriction). It should be 
>> noted that Scala 2.11.x is not binary compatible with Scala 2.10.x, 
>> which means that Akka’s binary compatibility property only holds between 
>> versions that were built for a given Scala version—
>> akka-actor_2.11-2.4.0.jar is compatible with akka-actor_2.11-2.3.14.jar 
>> but not with akka-actor_2.10-2.3.14.jar.
>>
>> Binary compatibility is not maintained for the following:
>>
>>
>> - testkits:
>>    - akka-testkit
>>    - akka-multi-node-testkit
>>    - akka-persistence-tck
>>    - akka-stream-testkit
>>    - akka-http-testkit
>> - experimental modules:
>>    - akka-persistence-query-experimental
>>    - akka-distributed-data-experimental
>>    - akka-typed-experimental
>>    - akka-http-experimental (please note that akka-http-core is stable)
>>    - akka-http-spray-json-experimental
>>    - akka-http-xml-experimental
>>    - akka-http-jackson-experimental
>> - everything marked as INTERNAL API in the JavaDoc / ScalaDoc.
>>
>>
>> Migration Guide
>>
>> When migrating a code base to 

Re: [akka-user] akka 2.4.2 [java] stopping http-server and flow handling IncomingConnection

2016-04-01 Thread paweł kamiński
I want to clean the state of the app, so I could either

1) terminate the actor system, which would also terminate the http-server, and
start it all over again,
OR
2) shut down just the http-server in reaction to some fault state of the
actor hierarchy.

Konrad's comment states that it is not enough just to unbind from the port, as
active connections and their flows keep operating, so I need to terminate
each flow after unbinding.
At this point I have no other requirements.
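Option 1 can be sketched in a few lines. This is a hedged sketch under the assumption that a `ServerBinding` and the `ActorSystem` are available from the server setup; unbinding first stops new connections, and terminating the system stops all remaining actors, including the stream actors.

```java
import akka.actor.ActorSystem;
import akka.http.javadsl.ServerBinding;

class RestartCycle {
    // Tear everything down: unbind so no new connections are accepted,
    // then terminate the ActorSystem, which also stops all stream actors.
    static void terminateAll(ServerBinding binding, ActorSystem system) {
        binding.unbind().thenRun(system::terminate);
    }
}
```

A fresh ActorSystem can then be created to start the server over again.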

On Friday, 1 April 2016 11:05:57 UTC+2, √ wrote:
>
> When you say "cancel" do you mean "abrupt termination" or something with a 
> code path attached to it?
>
> On Thu, Mar 31, 2016 at 11:53 PM, paweł kamiński <kam...@gmail.com 
> > wrote:
>
>> don't take it personally - there is a lot to process and sometimes it is
>> easy to overlook the most important part. I guess I didn't get the context
>> of the last sentence.
>>
>> thanks
>>
>> On Thursday, 31 March 2016 23:45:03 UTC+2, Konrad Malawski wrote:
>>>
>>> In 2.4.2 you can do the same in a GraphStage (read up on custom stream 
>>> processing – cancel() and friends).
>>>
>>> -- 
>>> Cheers,
>>> Konrad 'ktoso' Malawski
>>> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>>>
>>> On 31 March 2016 at 23:38:30, paweł kamiński (kam...@gmail.com) wrote:
>>>
>>> hej Konrad, can you please point me to any documentation describing how
>>> to manually cancel a live flow? the official documentation assumes that
>>> flows are infinite or that the source will eventually be drained - so no problem.
>>> I cannot find any section on the topic of cancelling a flow's subscription.
>>>
>>> On Wednesday, 30 March 2016 10:39:35 UTC+2, paweł kamiński wrote: 
>>>>
>>>> I thought bold is the new black, coming back with a bang...
>>>>
>>>> ok, your explanation kind of makes sense ;] all I expected was a
>>>> shutdown method with a timeout.
>>>> and yeah, a section with examples would be nice, and I can write one once
>>>> I understand how it works ;]
>>>>
>>>> I understand that I need to hold on to each connection and then iterate over
>>>> them after unbind.
>>>> but I need a hint on how to "cancel the flow's upstream subscription"; I can't
>>>> see any useful method on connection or connection.flow().
>>>>
>>>> and one more thing - until now I thought that creating a flow from an actor
>>>> would make such a flow a child of the actor, so stopping the actor would also
>>>> stop the flow's actors. obviously it does not, and the flow keeps consuming
>>>> incoming messages even though the actor is long gone.
>>>>
>>>> On Wednesday, 30 March 2016 00:32:24 UTC+2, Konrad Malawski wrote: 
>>>>>
>>>>> Helloł,
>>>>>
>>>>> I notice that after unbind I cannot create a new connection (ie. from a
>>>>> browser) but old, alive connections can still send requests. Am I missing
>>>>> something?
>>>>>
>>>>> Please no bold to highlight question, it looks scary :-)
>>>>>
>>>>> That's exactly how it works, by design. I did notice however that we 
>>>>> do not have a shutting down section for the low level API, we should add 
>>>>> that - thanks -> https://github.com/akka/akka/issues/20177
>>>>>
>>>>> Would be awesome if you'd perhaps step up and help contribute a
>>>>> beginning of that docs section :-)
>>>>>
>>>>> Just looking at 
>>>>> http://doc.akka.io/docs/akka/2.4.2/java/http/server-side/low-level-server-side-api.html#starting-and-stopping
>>>>>  
>>>>> I guess I do the right thing. 
>>>>> but I cannot understand a fragment from 
>>>>> http://doc.akka.io/docs/akka/2.4.2/java/http/server-side/low-level-server-side-api.html#Closing_a_connection,
>>>>>  
>>>>> "The HTTP connection will be closed when the handling Flow cancels 
>>>>> its upstream subscription". 
>>>>> Should I manually close connection after unbinding? that would be 
>>>>> strange, I just want to unbind and shutdown, closing all live connections.
>>>>>
>>>>> No, it's not strange, for two reasons: a) it's "unbind", which means
>>>>> "release that port, I won't be accepting anything new on it", and b) it is
>>>>> the behaviour one wants; it's called graceful shutdown – serve the people
>>>>> that already sent you requests, stop accepting new ones, then shut down.

Re: [akka-user] akka 2.4.2 [java] stopping http-server and flow handling IncomingConnection

2016-03-30 Thread paweł kamiński
I thought bold is the new black, coming back with a bang...

ok, your explanation kind of makes sense ;] all I expected was a shutdown
method with a timeout.
and yeah, a section with examples would be nice, and I can write one once I
understand how it works ;]

I understand that I need to hold on to each connection and then iterate over them
after unbind.
but I need a hint on how to "cancel the flow's upstream subscription"; I can't see
any useful method on connection or connection.flow().

and one more thing - until now I thought that creating a flow from an actor would
make such a flow a child of the actor, so stopping the actor would also stop the
flow's actors. obviously it does not, and the flow keeps consuming incoming
messages even though the actor is long gone.

On Wednesday, 30 March 2016 00:32:24 UTC+2, Konrad Malawski wrote:
>
> Helloł,
>
> I notice that after unbind I cannot create a new connection (ie. from a
> browser) but old, alive connections can still send requests. Am I missing
> something?
>
> Please no bold to highlight question, it looks scary :-)
>
> That's exactly how it works, by design. I did notice however that we do 
> not have a shutting down section for the low level API, we should add that 
> - thanks -> https://github.com/akka/akka/issues/20177
>
> Would be awesome if you'd perhaps step up and help contribute a
> beginning of that docs section :-)
>
> Just looking at 
> http://doc.akka.io/docs/akka/2.4.2/java/http/server-side/low-level-server-side-api.html#starting-and-stopping
>  
> I guess I am doing the right thing. 
> but I cannot understand a fragment from 
> http://doc.akka.io/docs/akka/2.4.2/java/http/server-side/low-level-server-side-api.html#Closing_a_connection,
>  
> "The HTTP connection will be closed when the handling Flow cancels its 
> upstream subscription". 
> Should I manually close each connection after unbinding? that would be strange;
> I just want to unbind and shut down, closing all live connections.
>
> No, it's not strange, for two reasons: a) it's "unbind", which means
> "release that port, I won't be accepting anything new on it", and b) it is
> the behaviour one wants; it's called graceful shutdown – serve the people
> that already sent you requests, stop accepting new ones, then shut down.
>
> The "shutdown now and drop everything on the floor" can already be
> achieved in a rather brutal way – killing the ActorSystem. A dedicated
> shutdownNow operation is on our radar, but it's a new feature: see the
> shutdownNow ticket on github.
>
>
> For injecting "external termination" into streams, we have a KillSwitch
> prepared, which will likely ship with 2.4.3.
>
>
> -- 
> Cheers,
> Konrad 'ktoso' Malawski
> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] akka 2.4.2 [java] stopping http-server and flow handling IncomingConnection

2016-03-29 Thread paweł kamiński
hi, I have a simple http service that binds to a given port and starts
accepting IncomingConnections.
the server is created by an actor in its preStart callback and stopped in
postStop.

everything works fine until I try to kill the actor and force the http-server
to stop.

public void start() {
    ActorSystem system = materializer.system();
    Source<IncomingConnection, CompletionStage<ServerBinding>> serverSource =
        Http.get(system)
            .bind(ConnectHttp.toHost("localhost", port), materializer);

    Flow<IncomingConnection, IncomingConnection, NotUsed> failureDetection =
        Flow.of(IncomingConnection.class)
            .watchTermination((notUsed, termination) -> {
                termination.whenComplete((done, cause) -> {
                    if (cause != null) {
                        log.info("Connection was closed.", cause);
                    }
                });
                return NotUsed.getInstance();
            });

    future = serverSource
        .via(failureDetection)
        // force parent to restart and recreate http service
        .to(Sink.actorRef(parentRef, Kill.getInstance()))
        .run(materializer);
    future
        .whenCompleteAsync((binding, failure) -> {
            if (failure != null) {
                log.error("Could not initialize connection.", failure);
                parentRef.tell(Kill.getInstance(), ActorRef.noSender());
            } else {
                log.info("Server listening on port {}.", port);
                parentRef.tell(STARTED, ActorRef.noSender());
            }
        }, system.dispatcher());
}

public void stop() {
    future
        .whenComplete((binding, failure) -> {
            log.info("Stopping Server on port {}.", port);
            if (failure == null && binding != null) {
                binding.unbind();
            }
        });
}
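The start/stop pair above hinges on chaining a callback onto the same binding future. Those CompletionStage mechanics can be shown with plain JDK types, no Akka required; here the `AtomicBoolean` stands in for a hypothetical binding whose "unbind" flips it to false:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

class BindingLifecycle {
    // Stand-in for Http.bind(): completes with a "binding"; true means bound.
    static CompletableFuture<AtomicBoolean> bind() {
        return CompletableFuture.completedFuture(new AtomicBoolean(true));
    }

    static boolean startThenStop() {
        CompletableFuture<AtomicBoolean> future = bind();
        // stop(): chain on the same future, like binding.unbind() in the post.
        // On an already-completed future the callback runs immediately.
        future.whenComplete((binding, failure) -> {
            if (failure == null && binding != null) {
                binding.set(false); // "unbind"
            }
        });
        return future.join().get(); // false once the callback has run
    }
}
```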


Here is part of the log:

[DEBUG] [03/29/2016 23:42:58.191] 
[akka-system-akka.actor.default-dispatcher-6] 
[akka://akka-system/user/bSupervisor] received AutoReceiveMessage 
Envelope(PoisonPill,Actor[akka://akka-system/deadLetters])
2016-03-29 23:42:58.192  INFO 1965 --- 
[akka-system-akka.actor.default-dispatcher-3] c.c.v.d.b.actors.Parent : 
Stopping http server.
[DEBUG] [03/29/2016 23:42:58.192] 
[akka-system-akka.actor.default-dispatcher-6] 
[akka://akka-system/user/bSupervisor] stopping
2016-03-29 23:42:58.194  INFO 1965 --- [ForkJoinPool.commonPool-worker-1] 
c.c.v.d.b.rest.BServer: Stopping Server on port 8900.
[DEBUG] [03/29/2016 23:42:58.197] 
[akka-system-akka.actor.default-dispatcher-19] 
[akka://akka-system/user/bSupervisor] stopped
[DEBUG] [03/29/2016 23:42:58.199] 
[akka-system-akka.actor.default-dispatcher-2] 
[akka://akka-system/system/IO-TCP/selectors/$a/0] no longer watched by 
Actor[akka://akka-system/user/StreamSupervisor-0/$$a#50876208]
[DEBUG] [03/29/2016 23:42:58.200] 
[akka-system-akka.actor.default-dispatcher-8] 
[akka://akka-system/user/StreamSupervisor-0/flow-0-1-actorRefSink] received 
AutoReceiveMessage 
Envelope(Terminated(Actor[akka://akka-system/user/bSupervisor/parent#655292920]),Actor[akka://akka-system/user/bSupervisor/parent#655292920])
[DEBUG] [03/29/2016 23:42:58.200] 
[akka-system-akka.actor.default-dispatcher-2] 
[akka://akka-system/system/IO-TCP/selectors/$a/0] Unbinding endpoint 
/127.0.0.1:8900
[DEBUG] [03/29/2016 23:42:58.201] 
[akka-system-akka.actor.default-dispatcher-3] 
[akka://akka-system/user/bSupervisor/parent] stopped
[DEBUG] [03/29/2016 23:42:58.205] 
[akka-system-akka.actor.default-dispatcher-2] 
[akka://akka-system/system/IO-TCP/selectors/$a/0] Unbound endpoint 
/127.0.0.1:8900, stopping listener
[DEBUG] [03/29/2016 23:42:58.206] 
[akka-system-akka.actor.default-dispatcher-8] 
[akka://akka-system/user/StreamSupervisor-0/flow-0-1-actorRefSink] stopped
[DEBUG] [03/29/2016 23:42:58.208] 
[akka-system-akka.actor.default-dispatcher-2] 
[akka://akka-system/system/IO-TCP/selectors/$a/0] stopped
[DEBUG] [03/29/2016 23:42:58.208] 
[akka-system-akka.actor.default-dispatcher-6] 
[akka://akka-system/system/IO-TCP/selectors/$a] received AutoReceiveMessage 
Envelope(Terminated(Actor[akka://akka-system/system/IO-TCP/selectors/$a/0#1629651596]),Actor[akka://akka-system/system/IO-TCP/selectors/$a/0#1629651596])
[DEBUG] [03/29/2016 23:42:58.217] 
[akka-system-akka.actor.default-dispatcher-10] 
[akka://akka-system/user/StreamSupervisor-0/flow-0-0-unknown-operation] 
stopped 

*I notice that after unbind I cannot create a new connection (ie. from a
browser) but old, alive connections can still send requests. Am I missing
something?*
Just looking at
http://doc.akka.io/docs/akka/2.4.2/java/http/server-side/low-level-server-side-api.html#starting-and-stopping
I guess I am doing the right thing.
but I cannot understand a fragment from
http://doc.akka.io/docs/akka/2.4.2/java/http/server-side/low-level-server-side-api.html#Closing_a_connection,
"The HTTP connection will be closed when the handling Flow cancels its
upstream subscription".

Re: [akka-user] akka 2.4.2 java - How to create custom ContentType

2016-03-09 Thread paweł kamiński
I know that this is maybe boring, but can you point me in any direction on how
to correctly construct akka.http.scaladsl.model.MediaType.
applicationWithFixedCharset("my-own", ?, ?) in java? the charset I can take
from the scala enums, but for the last parameters I have no idea.

and can you tell me what the real difference is between Binary and
WithFixedCharset? is it only to have a clear hierarchy, or does http-core use
it to control how the http response is processed? is it described anywhere in
the documentation?

On Wednesday, 9 March 2016 11:12:42 UTC+1, Konrad Malawski wrote:
>
> Thanks, will look into it soon!
>
> -- 
> Cheers,
> Konrad 'ktoso' Malawski
> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>
> On 8 March 2016 at 22:42:30, paweł kamiński (kam...@gmail.com 
> ) wrote:
>
> @Konrad https://github.com/akka/akka/issues/19976 created, hope it will 
> help 
>
> I need a hint on how to create an empty scala.collection.Seq in java; I cannot
> google anything like that
>
> On Tuesday, 8 March 2016 17:50:31 UTC+1, Konrad Malawski wrote: 
>>
>> All Scala DSL model types extend the Java ones,
>> so you can go over the ScalaDSL to create them, and pass them into any 
>> place that expects JavaDSL types.
>>
>> Thanks in advance for the ticket
>>
>> -- 
>> Cheers,
>> Konrad 'ktoso' Malawski
>> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>>
>> On 8 March 2016 at 17:48:44, paweł kamiński (kam...@gmail.com) wrote:
>>
>> OK, thanks, any hints on how to work around this?
>>
>> I'll create an issue.
>>
>> On Tuesday, 8 March 2016 17:37:48 UTC+1, Konrad Malawski wrote: 
>>>
>>> Seems we might be missing some factory methods on the MediaTypes trait, 
>>> the scaladsl has methods for returning more specific types, such as 
>>> `applicationWithCharset` etc.
>>>
>>> Would you please open a ticket about "feature parity of creating 
>>> MediaTypes for JavaDSL"?
>>> Thanks in advance!
>>>
>>> Please note that we're reworking the Java Routing DSL nowadays and 
>>> focusing on such missing bits is definitely something we'll do next up,
>>> please keep the feedback coming, thanks!
>>>
>>>
>>>
>>> -- 
>>> Cheers,
>>> Konrad 'ktoso' Malawski
>>> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>>>
>>> On 8 March 2016 at 17:14:17, paweł kamiński (kam...@gmail.com) wrote:
>>>
>>> I am trying to get my http server (using only http-core) to respond with
>>> the right headers.
>>>
>>> I get the Accept header from the request:
>>>
>>> String mime = request.getHeader(Accept.class)
>>> .map(HttpHeader::value)
>>> .orElse("application/json");
>>>
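The Optional chain in that snippet is plain JDK 8. A self-contained sketch of the same fallback logic (the `String` argument stands in for the result of `HttpHeader::value`; `negotiate` is a hypothetical helper, not an Akka API):

```java
import java.util.Optional;

class AcceptFallback {
    // Use the Accept header's value when present and non-blank,
    // otherwise fall back to application/json, as in the post.
    static String negotiate(Optional<String> acceptHeaderValue) {
        return acceptHeaderValue
            .map(String::trim)
            .filter(v -> !v.isEmpty())
            .orElse("application/json");
    }
}
```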
>>>
>>> but then it is not clear to me how to create a custom Content-Type
>>> header. I cannot use RawHeader as it is ignored (as the documentation
>>> suggests). using HttpEntity.create is just a pain in java:
>>> to convert my mime back to a ContentType I need to decide whether it is
>>> binary or not. I tried
>>>
>>> ContentTypes.create(MediaTypes.custom()) // but it creates MediaType rather 
>>> than Binary, WithFixedCharset ..
>>>
>>>
>>> then I tried
>>>
>>> ContentType.WithFixedCharset contentType = ContentTypes.create(
>>>     akka.http.scaladsl.model.MediaType.applicationWithFixedCharset("my-own", ?,
>>>     ?));
>>>
>>>
>>> but suddenly it is scaladsl and I need to pass some scala collection /
>>> charsets ... and there is no equivalent in javadsl (or maybe I am missing
>>> it).
>>>
>>> what I am looking for is a way to convert an Accept header into a ContentType.
>>> I don't know why this is so complicated. why doesn't ContentTypes.create just
>>> consume a string???
>>>
>>> no other http framework makes creating a response this complicated. Or maybe
>>> only the javadsl is immature.
>>>
>>> (I don't want to offend anyone, I am just tired)
>>>
>>> thanks for any help
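For the two concrete blockers in this thread - obtaining an empty `scala.collection.Seq` from Java and calling `applicationWithFixedCharset` - a hedged sketch. The `$.MODULE$` companion-object access and the javadsl-to-scaladsl cast rely on how scalac exposes Scala objects to Java and on scaladsl model types implementing the javadsl ones; verify against your akka-http-core version:

```java
import akka.http.javadsl.model.HttpCharsets;
import akka.http.scaladsl.model.HttpCharset;
import akka.http.scaladsl.model.MediaType;

class CustomMediaType {
    static MediaType.WithFixedCharset create() {
        // Empty Scala Seq<String> for the fileExtensions varargs parameter,
        // via the List companion object.
        scala.collection.immutable.List<String> noExtensions =
            scala.collection.immutable.List$.MODULE$.empty();

        // javadsl charset constants are backed by scaladsl instances,
        // so this downcast is the usual way to cross the DSL boundary.
        HttpCharset utf8 = (HttpCharset) HttpCharsets.UTF_8;

        // scaladsl model types extend the javadsl ones, so the result can be
        // passed wherever a javadsl MediaType is expected.
        return akka.http.scaladsl.model.MediaType$.MODULE$
            .applicationWithFixedCharset("my-own", utf8, noExtensions);
    }
}
```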
>>>

[akka-user] Re: [akka java 2.4.2] creating a flow sending elements of the stream to given actorRef and pushing results back to stream

2016-03-09 Thread paweł kamiński
beware of singletons in spring :] be lazy, be prototype ;]
the updater ref for the second connection was never created, and then ask got an
invalid ref or something like that, because eventually messages were sent to the
updater actor but from deadLetters :)

thanks for the help :)

On Wednesday, 9 March 2016 01:41:02 UTC+1, paweł kamiński wrote:
>
> thanks for all the help.
>
> it is running forever as I am testing concepts of updating a remote
> client asynchronously; in real time the Updater will get updates from other
> actors, and yes, I will add supervision strategies.
> I'm running this app from unit tests that create a spring context and also the
> http-server/actors along with it, so maybe there is something funny going on.
> I've never integrated akka with an existing spring-based app, but this is just a
> proof of concept and I have an almost identical app running on spring mvc
> with netty :) I'll dump the logs to a file; this way it should be easier to
> follow the flow.
>
> On Tuesday, 8 March 2016 23:33:38 UTC+1, Rafał Krzewski wrote:
>>
>> W dniu wtorek, 8 marca 2016 23:10:38 UTC+1 użytkownik paweł kamiński 
>> napisał:
>>>
>>> but this is impossible to change concurrently as I log it and then pass 
>>> to Pattern#ask. I just wonder why it is send from 
>>> *akka://akka-system/deadLetters* and why *ReceiveTimeout* is not sent 
>>> back to Updater...
>>>
>> Oh, that's right! A message sent without specifying a sender (like when
>> you are invoking ActorRef.tell from outside an actor) originates from
>> deadLetters, but a message from an ask should originate from /temp/$
>>  Something really weird is going on here.
>>
>> Regarding the ReceiveTimeouts, the log entries are 25 minutes later, and the
>> log format is different - I don't know what to make of that. Two different
>> program runs with a configuration change in between? That would explain why
>> the serial # of the B1 actor is different. Otherwise I don't see why it should
>> be restarted, as the Updater actor appears to run "forever", unless you are
>> terminating it somehow from the outside (in code not shown here). If
>> the actor crashed with an exception in receive, you'd notice that in the
>> log. Also you'd probably have to configure an appropriate supervisionStrategy
>> in the updater actor's parent to restart it.
>>
>>  
>>
>>> the application I am trying to put together is a proof of concept and there
>>> is no point in using scala at this stage.
>>>
>> Well, it's a matter of what you are comfortable using. It would be much
>> shorter in Scala, and easier to read for some people, but harder for others
>> :) Neither am I suggesting that rewriting it in Scala would fix the problem
>> at hand.
>>
>> Cheers,
>> Rafał
>>
>



[akka-user] Re: [akka java 2.4.2] creating a flow sending elements of the stream to given actorRef and pushing results back to stream

2016-03-08 Thread paweł kamiński
thanks for all the help.

it is running forever as I am testing concepts of updating a remote client
asynchronously; in real time the Updater will get updates from other actors, and
yes, I will add supervision strategies.
I'm running this app from unit tests that create a spring context and also the
http-server/actors along with it, so maybe there is something funny going on.
I've never integrated akka with an existing spring-based app, but this is just a
proof of concept and I have an almost identical app running on spring mvc
with netty :) I'll dump the logs to a file; this way it should be easier to
follow the flow.

On Tuesday, 8 March 2016 23:33:38 UTC+1, Rafał Krzewski wrote:
>
> W dniu wtorek, 8 marca 2016 23:10:38 UTC+1 użytkownik paweł kamiński 
> napisał:
>>
>> but this is impossible to change concurrently as I log it and then pass 
>> to Pattern#ask. I just wonder why it is send from 
>> *akka://akka-system/deadLetters* and why *ReceiveTimeout* is not sent 
>> back to Updater...
>>
> Oh, that's right! A message sent without specifying a sender (like when
> you are invoking ActorRef.tell from outside an actor) originates from
> deadLetters, but a message from an ask should originate from /temp/$
>  Something really weird is going on here.
>
> Regarding the ReceiveTimeouts, the log entries are 25 minutes later, and the
> log format is different - I don't know what to make of that. Two different
> program runs with a configuration change in between? That would explain why
> the serial # of the B1 actor is different. Otherwise I don't see why it should
> be restarted, as the Updater actor appears to run "forever", unless you are
> terminating it somehow from the outside (in code not shown here). If
> the actor crashed with an exception in receive, you'd notice that in the
> log. Also you'd probably have to configure an appropriate supervisionStrategy
> in the updater actor's parent to restart it.
>
>  
>
>> the application I am trying to put together is a proof of concept and there
>> is no point in using scala at this stage.
>>
> Well, it's a matter of what you are comfortable using. It would be much
> shorter in Scala, and easier to read for some people, but harder for others
> :) Neither am I suggesting that rewriting it in Scala would fix the problem
> at hand.
>
> Cheers,
> Rafał
>



[akka-user] Re: [akka java 2.4.2] creating a flow sending elements of the stream to given actorRef and pushing results back to stream

2016-03-08 Thread paweł kamiński
but this is impossible to change concurrently, as I log it and then pass it to
Pattern#ask. I just wonder why it is sent from
*akka://akka-system/deadLetters* and why *ReceiveTimeout* is not sent back
to the Updater...

the application I am trying to put together is a proof of concept and there is
no point in using scala at this stage.

On Tuesday, 8 March 2016 22:57:04 UTC+1, Rafał Krzewski wrote:
>
> First, let me say that even though I program in Java for a living, I have
> only ever used Akka with Scala. I'm actually surprised by the amount of
> syntactic noise in the Java variant of Akka!
>
> I doubt the "*ask* sends messages to only one updater even though I use 
> right actorRef" part:
>
> 2016-03-08 18:10:33.949  INFO 69545 --- 
> [akka-system-akka.actor.default-dispatcher-4] actors.RequestRouter : 
> Using updater for *B12* - Actor[akka://akka-system/user/*B12*
> #-1049418234].
> 2016-03-08 18:10:33.949  INFO 69545 --- 
> [akka-system-akka.actor.default-dispatcher-20] actors.Updater: 
> Actor[akka://akka-system/user/*B1*#857515287] : Received from 
> Actor[akka://akka-system/deadLetters] new update request *'B12*'
>
> Apparently the message that you intended to send to updater B12 ended up 
> in updater B1's mailbox! It looks like the value of the updater variable 
> changed between the log.info and ask invocations. It looks like you were 
> closing over a mutable variable that gets modified concurrently by 
> different threads, but since updater is a local variable in the closure 
> passed to mapAsync, this really shouldn't be happening... Weird.
>
> I guess that's all I can say without digging in with a debugger :) Good 
> luck!
> Rafał
>
> W dniu wtorek, 8 marca 2016 18:41:27 UTC+1 użytkownik paweł kamiński 
> napisał:
>>
>> one more thing that is not working for me right now
>>
>> this is a code handling incoming connection and requests
>>
>> receive(ReceiveBuilder
>> .match(IncomingConnection.class, connection -> {
>> log.info("Established new connection from {}.", 
>> connection.remoteAddress());
>> Flow<HttpRequest, HttpResponse, NotUsed> handler = 
>> handleUpdateRequest(materializer);
>> connection.handleWith(handler, materializer);
>> })
>>
>>
>> and the handleUpdateRequest method
>>
>>
>> private Flow<HttpRequest, HttpResponse, NotUsed> 
>> handleUpdateRequest(ActorMaterializer materializer) {
>> return Flow
>> .of(HttpRequest.class)
>> .mapAsync(1, request -> {
>> log.debug("Handling request. {}", request.getUri());
>>
>> String mime = request.getHeader(Accept.class)
>> .map(HttpHeader::value)
>> .orElse("application/json");
>>
>> Query query = request.getUri().query();
>> Optional<String> idOp = query.get("id");
>> if (!idOp.isPresent()) {
>> return createResponse(StatusCodes.BAD_REQUEST);
>> }
>>
>> String id = idOp.get();
>>
>> ActorRef updater = getUpdater(id, materializer.system());
>>
>> log.info("Using updater for {} - {}.", id, updater);
>>
>>
>>  return asResponseFromScalaFuture(ask(updater, id, new Timeout(
>> RESPONSE_WAIT_DURATION)));
>>
>> });
>> }
>>
>>
>> public Updater() {
>> HashSet<ActorRef> refs = Sets.newHashSet();
>> receive(ReceiveBuilder
>> .match(String.class, request -> {
>> log.info("{} : Received from {} new update request {}.", 
>> self(), sender(), request);
>>context().setReceiveTimeout(Duration.apply(3, 
>> TimeUnit.SECONDS));
>> refs.add(sender());
>> })
>> .match(ReceiveTimeout.class, timeout -> {
>> log.info("{} : Sending response.", self());
>> refs.stream()
>> .forEach(ref -> {
>> log.info("{} : Sending response to {}.", self(), 
>> ref);
>> ref.tell(createFakeResponse(), self());
>> });
>> refs.clear();
>> })
>> .build());
>> }
>>
>>
>> everything works when there is only one connection. request is sent to

[akka-user] Re: [akka java 2.4.2] creating a flow sending elements of the stream to given actorRef and pushing results back to stream

2016-03-08 Thread paweł kamiński
one more thing that is not working for me right now

this is a code handling incoming connection and requests

receive(ReceiveBuilder
.match(IncomingConnection.class, connection -> {
log.info("Established new connection from {}.", 
connection.remoteAddress());
Flow<HttpRequest, HttpResponse, NotUsed> handler = 
handleUpdateRequest(materializer);
connection.handleWith(handler, materializer);
})


and the handleUpdateRequest method


private Flow<HttpRequest, HttpResponse, NotUsed> 
handleUpdateRequest(ActorMaterializer materializer) {
return Flow
.of(HttpRequest.class)
.mapAsync(1, request -> {
log.debug("Handling request. {}", request.getUri());

String mime = request.getHeader(Accept.class)
.map(HttpHeader::value)
.orElse("application/json");

Query query = request.getUri().query();
Optional<String> idOp = query.get("id");
if (!idOp.isPresent()) {
return createResponse(StatusCodes.BAD_REQUEST);
}

String id = idOp.get();

ActorRef updater = getUpdater(id, materializer.system());

log.info("Using updater for {} - {}.", id, updater);


 return asResponseFromScalaFuture(ask(updater, id, new Timeout(
RESPONSE_WAIT_DURATION)));

});
}


public Updater() {
HashSet<ActorRef> refs = Sets.newHashSet();
receive(ReceiveBuilder
.match(String.class, request -> {
log.info("{} : Received from {} new update request {}.", 
self(), sender(), request);
   context().setReceiveTimeout(Duration.apply(3, TimeUnit.SECONDS));
refs.add(sender());
})
.match(ReceiveTimeout.class, timeout -> {
log.info("{} : Sending response.", self());
refs.stream()
.forEach(ref -> {
log.info("{} : Sending response to {}.", self(), 
ref);
ref.tell(createFakeResponse(), self());
});
refs.clear();
})
.build());
}


everything works when there is only one connection. request is sent to 
updater (actorRef) and response is converted from scala future back to java 
future and httpResponse is passed back to flow.

but when there is more than one connection I can see in the logs that only one 
actor (updater) is responding. while investigating I noticed that *ask* 
sends messages to only one updater even though I use the right actorRef and 
there is a unique updater for each connection.

2016-03-08 18:10:33.949 DEBUG 69545 --- 
[akka-system-akka.actor.default-dispatcher-9] actors.RequestRouter : 
Handling request. http://localhost/budgets?id=B1
2016-03-08 18:10:33.949  INFO 69545 --- 
[akka-system-akka.actor.default-dispatcher-9] actors.RequestRouter : 
Using updater for B1 - Actor[akka://akka-system/user/B1#857515287].
2016-03-08 18:10:33.949 DEBUG 69545 --- 
[akka-system-akka.actor.default-dispatcher-4] actors.RequestRouter : 
Handling request. http://localhost/budgets?id=B12
2016-03-08 18:10:33.949  INFO 69545 --- 
[akka-system-akka.actor.default-dispatcher-14] actors.Updater: 
Actor[akka://akka-system/user/B1#857515287] : Received from 
Actor[akka://akka-system/temp/$r0] new update request 'B1'
2016-03-08 18:10:33.949  INFO 69545 --- 
[akka-system-akka.actor.default-dispatcher-4] actors.RequestRouter : 
Using updater for B12 - Actor[akka://akka-system/user/B12#-1049418234].
2016-03-08 18:10:33.949  INFO 69545 --- 
[akka-system-akka.actor.default-dispatcher-20] actors.Updater: 
Actor[akka://akka-system/user/B1#857515287] : Received from 
Actor[akka://akka-system/*deadLetters*] new update request 'B12'

[INFO] [03/08/2016 18:34:28.709] 
[akka-system-akka.actor.default-dispatcher-5] [akka://akka-system/user/B1] 
Message [akka.actor.ReceiveTimeout$] from 
Actor[akka://akka-system/deadLetters] to 
Actor[akka://akka-system/user/B1#385306581] was not delivered. [2] dead 
letters encountered. 
[INFO] [03/08/2016 18:34:28.709] 
[akka-system-akka.actor.default-dispatcher-5] [akka://akka-system/user/B1] 
Message [akka.actor.ReceiveTimeout$] from 
Actor[akka://akka-system/deadLetters] to 
Actor[akka://akka-system/user/B1#385306581] was not delivered. [3] dead 
letters encountered. 

I wonder why I get a message from *deadLetters* and why it is sent to the *B1* actor. 
the result is that computed messages for B12 will never be sent back. also the 
requested timeout messages are not delivered :/ I've messed this up :)
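[Editorial note] The suspected cause discussed in this thread — a lambda closing over mutable state — is easy to reproduce in plain Java: a lambda that captures a *field* (rather than an effectively-final local) reads the field's value at invocation time, not at capture time. A dependency-free sketch; the `updater` field and the `ask`-style supplier are stand-ins for illustration, not the poster's actual code:

```java
public class CapturedStateDemo {
    // mutable shared state, standing in for a field holding the current updater
    static String updater = "B1";

    // returns { value at "log time", value the deferred lambda sees }
    static String[] run() {
        updater = "B1";                                            // reset for repeatability
        String logged = updater;                                   // read once, like log.info(...)
        java.util.function.Supplier<String> ask = () -> updater;   // captures the field, not its value
        updater = "B12";                                           // concurrent change before the ask runs
        return new String[] { logged, ask.get() };
    }

    public static void main(String[] args) {
        String[] r = run();
        System.out.println(r[0] + " vs " + r[1]); // prints "B1 vs B12"
    }
}
```

Note this mechanism only applies to fields or captured objects; Java forbids capturing a mutable local, which is why Rafał finds the log mismatch weird.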

On Tuesday, 8 March 2016 14:20:16 UTC+1, paweł kamiński wrote: 

> ok, I was playing around with this yest. there is no way to do th

Re: [akka-user] akka 2.4.2 java - How to create custom ContentType

2016-03-08 Thread paweł kamiński
OK, thanks. any hints on how to work around this?

I'll create an issue.

On Tuesday, 8 March 2016 17:37:48 UTC+1, Konrad Malawski wrote:
>
> Seems we might be missing some factory methods on the MediaTypes trait, 
> the scaladsl has methods for returning more specific types, such as 
> `applicationWithCharset` etc.
>
> Would you please open a ticket about "feature parity of creating 
> MediaTypes for JavaDSL"?
> Thanks in advance!
>
> Please note that we're reworking the Java Routing DSL nowadays and 
> focusing on such missing bits is definitely something we'll do next up,
> please keep the feedback coming, thanks!
>
>
>
> -- 
> Cheers,
> Konrad 'ktoso’ Malawski
> <http://akka.io>Akka <http://akka.io> @ Lightbend <http://typesafe.com>
> <http://lightbend.com>
>
> On 8 March 2016 at 17:14:17, paweł kamiński (kam...@gmail.com 
> ) wrote:
>
> I try to force my http server (using only http-core) to respond with 
> headers. 
>
> I get from request Accept header 
>
> String mime = request.getHeader(Accept.class)
> .map(HttpHeader::value)
> .orElse("application/json");
>
>
>  but then it is not clear to me how to create custom Content-Type header. 
> I cannot use RawHeader as it is ignored (as documentation suggested). using 
> HttpEntity.create is just pain in java
> to convert my mime back to ContentType I need to decide whether it is 
> binary or not. I tried
>
> ContentTypes.create(MediaTypes.custom()) // but it creates MediaType rather 
> than Binary, WithFixedCharset ..
>
>
>  then I tried 
>
>
>  ContentType.WithFixedCharset contentType = ContentTypes.create(
> 
> akka.http.scaladsl.model.MediaType.applicationWithFixedCharset("my-own", ?, 
> ?));
>
>
> but it suddenly is a scaladsl and I need to pass some scala collection / 
> charsets ... and there is no equivalent in javadsl (or maybe I am missing it).
>
>
> what I am looking for is a way to convert Accept header into ContentType. I 
> dont know why this is so complicated. why ContentTypes.create isn't just 
> consume string???
>
> none http frameworks are making creating response so complicated. Or maybe 
> only javadsl is immature.
>
>
>  (I dont want to offend anyone, I am just tired)
>
>
>  thanks for any help
>



[akka-user] akka 2.4.2 java - How to create custom ContentType

2016-03-08 Thread paweł kamiński
I try to force my http server (using only http-core) to respond with 
headers.

I get from request Accept header 

String mime = request.getHeader(Accept.class)
.map(HttpHeader::value)
.orElse("application/json");


but then it is not clear to me how to create a custom Content-Type header. I 
cannot use RawHeader as it is ignored (as the documentation suggests). using 
HttpEntity.create is just a pain in java:
to convert my mime back to a ContentType I need to decide whether it is 
binary or not. I tried

ContentTypes.create(MediaTypes.custom()) // but it creates MediaType rather 
than Binary, WithFixedCharset ..


then I tried 


ContentType.WithFixedCharset contentType = ContentTypes.create(

akka.http.scaladsl.model.MediaType.applicationWithFixedCharset("my-own", ?, ?));


but it suddenly is scaladsl and I need to pass some scala collections / 
charsets ... and there is no equivalent in the javadsl (or maybe I am missing 
it).


what I am looking for is a way to convert an Accept header into a ContentType. I 
don't know why this is so complicated. why doesn't ContentTypes.create just 
consume a string???

no other http framework makes creating a response this complicated. or maybe only 
the javadsl is immature.


(I don't want to offend anyone, I am just tired)


thanks for any help
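[Editorial note] The core complaint above is the binary-vs-charset decision, which can at least be isolated in one helper. A dependency-free sketch with a crude heuristic; the method name and the set of text-like types are assumptions for illustration, not an Akka API:

```java
public class MimeKind {
    // crude heuristic: text-like media types need a charset, everything else is binary
    static boolean needsCharset(String mime) {
        String essence = mime.split(";")[0].trim().toLowerCase();
        return essence.startsWith("text/")
                || essence.equals("application/json") || essence.endsWith("+json")
                || essence.equals("application/xml")  || essence.endsWith("+xml");
    }

    public static void main(String[] args) {
        System.out.println(needsCharset("application/json"));         // true
        System.out.println(needsCharset("application/octet-stream")); // false
    }
}
```

With the decision centralized, the two branches can each call the matching ContentType factory in one place instead of scattering the choice through the flow.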



[akka-user] Re: [akka java 2.4.2] creating a flow sending elements of the stream to given actorRef and pushing results back to stream

2016-03-08 Thread paweł kamiński
ok, I was playing around with this yesterday. there is no way to do that. 

mapAsync is the only way to create a flow that pushes elements of the stream 
to 3rd-party components such as actors. 
this is a bit confusing, as there is an ActorPublisher that would be a good fit 
here if only Flow had a construct like Source.actorPublisher() / Sink.
actorRef(). that way we could pass the Props of a publisher which would transform 
incoming messages into something else.
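[Editorial note] The mapAsync-with-ask pattern settled on here can be approximated without Akka using CompletableFuture; `ask` below is a hypothetical stand-in for `akka.pattern.ask`, and the loop mimics `mapAsync(1, ...)` (at most one outstanding request, replies emitted in order):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class MapAsyncSketch {
    // stand-in for ask(actorRef, msg, timeout): the reply arrives later as a future
    static CompletableFuture<String> ask(String msg) {
        return CompletableFuture.supplyAsync(() -> "response:" + msg);
    }

    // mapAsync(1, ...) analogue: at most one outstanding ask, replies kept in order
    static List<String> mapAsync1(List<String> requests) {
        List<String> out = new ArrayList<>();
        for (String r : requests) {
            out.add(ask(r).join()); // wait for this reply before issuing the next ask
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(mapAsync1(Arrays.asList("B1", "B12")));
    }
}
```

Unlike a real stream stage, `join()` blocks the calling thread; mapAsync suspends only the stage, which is why it is the right tool inside a materialized flow.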

On Monday, 7 March 2016 11:28:03 UTC+1, paweł kamiński wrote:
>
> yep, i'm now thinking about duplex flow so I can push incoming messages to 
> actorRef sink and then such actor would be a source. 
>
> I must say that constructing right flow is not an easy task if you do it 
> occasionally! :)
> maybe I ll share my findings once I get it working
>
> On Monday, 7 March 2016 10:43:26 UTC+1, Rafał Krzewski wrote:
>>
>> W dniu poniedziałek, 7 marca 2016 10:08:23 UTC+1 użytkownik paweł 
>> kamiński napisał:
>>
>>> thanks for response.
>>>
>>> well ask pattern is a way to go but I thought I could avoid it and use 
>>> only flow's connection.
>>>
>>> Akka gives you a rich choice of communication primitives and it's OK to 
>> mix and match them in a way that solves the problem at hand in most 
>> convenient way :)
>>  
>> Happy hacking!
>> Rafał
>>
>



[akka-user] Re: [akka java 2.4.2] creating a flow sending elements of the stream to given actorRef and pushing results back to stream

2016-03-07 Thread paweł kamiński
yep, i'm now thinking about a duplex flow so I can push incoming messages to 
an actorRef sink and then such an actor would be a source. 

I must say that constructing the right flow is not an easy task if you only do it 
occasionally! :)
maybe I'll share my findings once I get it working

On Monday, 7 March 2016 10:43:26 UTC+1, Rafał Krzewski wrote:
>
> W dniu poniedziałek, 7 marca 2016 10:08:23 UTC+1 użytkownik paweł kamiński 
> napisał:
>
>> thanks for response.
>>
>> well ask pattern is a way to go but I thought I could avoid it and use 
>> only flow's connection.
>>
>> Akka gives you a rich choice of communication primitives and it's OK to 
> mix and match them in a way that solves the problem at hand in most 
> convenient way :)
>  
> Happy hacking!
> Rafał
>



[akka-user] Re: [akka java 2.4.2] creating a flow sending elements of the stream to given actorRef and pushing results back to stream

2016-03-07 Thread paweł kamiński
thanks for the response.

well, the ask pattern is a way to go, but I thought I could avoid it and use only 
the flow's connections.


On Monday, 7 March 2016 01:02:08 UTC+1, Rafał Krzewski wrote:
>
> Hej Paweł,
>
> if I understand your use case correctly, mapAsync with an ask [1] inside 
> should work just fine. You might want to introduce a coordinator actor 
> that would deal with worker management, i.e. the HTTP flow sends a message 
> (using the ask pattern) to the coordinator, the coordinator performs worker lookup / 
> creation and forwards the message to the worker, and the worker replies directly to the 
> temporary actor handling the ask, thus completing the Future. In the next 
> stage, just marshal the data to an HttpEntity and you're done.
>
> Cheers,
> Rafał
>
> [1] 
> http://doc.akka.io/japi/akka/2.4.2/akka/pattern/AskSupport.html#ask-akka.actor.ActorRef-java.lang.Object-akka.util.Timeout-
>
>
> W dniu niedziela, 6 marca 2016 23:25:07 UTC+1 użytkownik paweł kamiński 
> napisał:
>>
>> hi,
>> I have a simple HTTP service that accepts connections and keeps them 
>> alive. 
>> Once client sends something I look-up actorRef (WORKER) based on 
>> path/query of request and I would like to wait for response from such actor 
>> so I can respond back to client. 
>>
>> this is server to server communication so there should be one connection 
>> per machine but there will be situation where same path/query is sent by 
>> more then one connection.
>> (in other words one worker should be able to update more than one stream 
>> created by accepted connection - if this is even possible).
>> A worker can only produce result if other actor sent it data in first 
>> place, worker cannot produce results by itself. I want a worker to 
>> represent client in akka system. 
>>
>> The problem I am facing is that I cannot figure out how to create such 
>> flow that gets HttpRequest and produces HttpResponse and in the middle 
>> sends incoming request to actor and waits for response.
>> so far I came with such code
>>
>> public void start() {
>> Source<IncomingConnection, CompletionStage> serverSource 
>> = Http
>> .get(system)
>> .bind(ConnectHttp.toHost("localhost", port), materializer);
>>
>> Flow<IncomingConnection, IncomingConnection, NotUsed> failureDetection = 
>> Flow
>> .of(IncomingConnection.class)
>> .watchTermination((_unused, termination) -> {
>> termination.whenComplete((done, cause) -> {
>> if (cause != null) {
>> log.error("Connection was closed.", cause);
>> }
>> });
>> return NotUsed.getInstance();
>> });
>>
>> serverSource
>> .via(failureDetection)
>>
>> // <--- send each new connection to actorRef
>> .to(Sink.actorRef(connectionRouter, closeConnectionMsg)) 
>> .run(materializer)
>> .whenCompleteAsync((binding, failure) -> {
>> if (failure != null) {
>> log.error("Could not initialize connection.", failure);
>> }
>> }, system.dispatcher());
>> }
>>
>>
>> // ConnectionRouter receive definition
>>
>> receive(ReceiveBuilder
>> .match(IncomingConnection.class, connection -> {
>> Flow<HttpRequest, HttpResponse, NotUsed> updateRequestFlow = Flow
>>
>> .of(HttpRequest.class)
>>
>> .map(request -> {
>> String mime = request.getHeader(Accept.class)
>>
>> .map(HttpHeader::value)
>>
>> .orElse("application/json");
>>
>>
>> if (!isAcceptable(mime)) {
>>
>> return HttpResponse.create()
>>
>> .withStatus(StatusCodes.NOT_ACCEPTABLE)
>>
>> .addHeader(RawHeader.create("Connection", 
>> "close"))
>>
>> .addHeader(RawHeader.create("Content-Length", 
>> "0"));
>>
>> }
>>
>>
>> Query query = request.getUri().query();
>>
>> Optional idOp = query.get("id");
>>
>> if (!idOp.isPresent()) {
>>
>> return HttpResponse.create()

[akka-user] [akka java 2.4.2] creating a flow sending elements of the stream to given actorRef and pushing results back to stream

2016-03-06 Thread paweł kamiński
hi,
I have a simple HTTP service that accepts connections and keeps them alive. 
Once a client sends something I look up an actorRef (WORKER) based on the 
path/query of the request, and I would like to wait for a response from that actor 
so I can respond back to the client. 

this is server-to-server communication so there should be one connection 
per machine, but there will be situations where the same path/query is sent by 
more than one connection.
(in other words one worker should be able to update more than one stream 
created by an accepted connection - if this is even possible).
A worker can only produce a result if another actor sent it data in the first 
place; a worker cannot produce results by itself. I want a worker to 
represent the client in the akka system. 

The problem I am facing is that I cannot figure out how to create a flow 
that gets an HttpRequest and produces an HttpResponse, and in the middle sends 
the incoming request to an actor and waits for a response.
so far I came up with this code

public void start() {
Source<IncomingConnection, CompletionStage> serverSource = 
Http
.get(system)
.bind(ConnectHttp.toHost("localhost", port), materializer);

Flow<IncomingConnection, IncomingConnection, NotUsed> failureDetection = 
Flow
.of(IncomingConnection.class)
.watchTermination((_unused, termination) -> {
termination.whenComplete((done, cause) -> {
if (cause != null) {
log.error("Connection was closed.", cause);
}
});
return NotUsed.getInstance();
});

serverSource
.via(failureDetection)

// <--- send each new connection to actorRef
.to(Sink.actorRef(connectionRouter, closeConnectionMsg)) 
.run(materializer)
.whenCompleteAsync((binding, failure) -> {
if (failure != null) {
log.error("Could not initialize connection.", failure);
}
}, system.dispatcher());
}


// ConnectionRouter receive definition

receive(ReceiveBuilder
.match(IncomingConnection.class, connection -> {
Flow<HttpRequest, HttpResponse, NotUsed> updateRequestFlow = Flow

.of(HttpRequest.class)

.map(request -> {
String mime = request.getHeader(Accept.class)

.map(HttpHeader::value)

.orElse("application/json");


if (!isAcceptable(mime)) {

return HttpResponse.create()

.withStatus(StatusCodes.NOT_ACCEPTABLE)

.addHeader(RawHeader.create("Connection", "close"))

.addHeader(RawHeader.create("Content-Length", "0"));

}


Query query = request.getUri().query();

Optional<String> idOp = query.get("id");

if (!idOp.isPresent()) {

return HttpResponse.create()

.withStatus(StatusCodes.BAD_REQUEST)

.addHeader(RawHeader.create("Connection", "close"))

.addHeader(RawHeader.create("Content-Length", "0"));

}


String id = idOp.get();

// <--- retrieve or create a new worker based on ID (it is a 
limited set of ids)

ActorRef worker = actorsMap.get(id);


// NOW worker.tell(READY_TO_GET_DATA_MSG) should eventually 
create some result that should be mapped to response


byte[] bytes = toBytes(mime, RESULT_PRODUCED_BY_WORKER);

String length = Integer.toString(bytes.length);

return HttpResponse.create()

.withStatus(StatusCodes.OK)

.withEntity(HttpEntities.create(bytes))

.addHeader(RawHeader.create("Connection", "keep-alive"))

.addHeader(RawHeader.create("Content-Length", length))

.addHeader(RawHeader.create("Content-Type", mime));

});

connection.handleWith(updateRequestFlow, materializer);
})
.build());


is this even possible with the current akka-http / streams? I have been looking 
into http://doc.akka.io/docs/akka/2.4.2/java/stream/stream-integrations.html 
but mapAsync is rather not my use-case, and ActorPublisher would maybe help but I 
cannot make it fit into the described flow.

thanks for any ideas.
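[Editorial note] The `actorsMap.get(id)` lookup marked "retrieve or create a new worker" needs an atomic get-or-create to be safe when two connections ask for the same id at once. A plain-Java sketch with strings standing in for ActorRefs; the names are illustrative, and the real code would create the actor (e.g. via `system.actorOf`) inside the mapping function:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkerRegistry {
    // id -> worker; in the real code the value would be an ActorRef
    final Map<String, String> workers = new ConcurrentHashMap<>();
    final AtomicInteger created = new AtomicInteger();

    // atomic get-or-create: two connections asking for the same id share one worker
    String workerFor(String id) {
        return workers.computeIfAbsent(id, key -> {
            created.incrementAndGet();    // runs at most once per id
            return "worker-" + key;       // real code: create the actor here
        });
    }

    public static void main(String[] args) {
        WorkerRegistry registry = new WorkerRegistry();
        System.out.println(registry.workerFor("B1")); // worker-B1
        System.out.println(registry.workerFor("B1")); // same worker, not recreated
        System.out.println(registry.created.get());   // 1
    }
}
```

Sharing one worker per id is also what lets a single worker serve streams from more than one connection, as the post asks.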




[akka-user] Re: Akka streams 1.0 with java dsl - exposing BidiFlow as Flow

2015-09-18 Thread paweł kamiński
I guess the answer can be found on the same page I referred to.


   1. final Flow<Message, Message, BoxedUnit> flow = stack.atop(stack.reversed()).join(pingpong);

I need to close the BidiFlow from one end to get a Flow at the other end

On Wednesday, 16 September 2015 23:09:46 UTC+2, paweł kamiński wrote:
>
> hi,
>
> I followed 
> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-graphs.html#Bidirectional_Flows
>  
> and its section describing BidiFlow, but my brain just fried when I 
> tried to figure out how to expose the left side of such a BidiFlow as a Flow. 
>
> in other words how to hide the fact that we are connecting to BidiFlow.
>
> lets stick with example from the page above
>
>
>1. BidiFlow<Message, ByteString, ByteString, Message, BoxedUnit> 
>codecVerbose = BidiFlow.factory().create(b -> {
>2. final FlowShape<Message, ByteString> top = b.graph(Flow. 
>empty().map(BidiFlowDocTest::toBytes));
>3. final FlowShape<ByteString, Message> bottom = b.graph(Flow.<
>ByteString> empty().map(BidiFlowDocTest::fromBytes));
>4. return new BidiShape<>(top, bottom);
>5. });
>6. +--+
>7. Message ~>| |~> ByteString
>8. | bidi |
>9. Message <~| |<~ ByteString
>10. +--+
>
> And I would like to expose to the world a flow like
>
>
>1. Flow<Message, Message, BoxedUnit> codecVerboseAsFlow = Flow.factory
>().create(b -> {
>2. b.graph(...);
>3. return new Pair<>(??.inlet(), ??.out());
>4. });
>
> again thanks for help
>



Re: [akka-user] Re: Akka streams 1.0 with java dsl - exposing BidiFlow as Flow

2015-09-18 Thread paweł kamiński
exactly, I want to pass my BidiFlow as a Flow to some component that knows 
how to connect it. so it doesn't really matter if the flow is a simple 
mapping/codec or some composite structure - which is exactly what you are 
describing, and I am just passing something more sophisticated.

I just cannot find a simple way (apart from my prev comment - where I 
simply close the BidiFlow from one end and get a flow as a result) to expose 
a BidiFlow's ends as a Flow, which would make life simpler.

On Friday, 18 September 2015 12:09:39 UTC+2, drewhk wrote:
>
> Hi Pawel,
>
> The alternative is that it is not the BidiFlow that is provided to the 
> part which has the Flow, but the Flow is provided to the part that has the 
> BidiFlow. That way the part that has the Flow does not need to know how it 
> will be connected in a potentially larger thing.
>
> -Endre
>
> On Fri, Sep 18, 2015 at 12:00 PM, paweł kamiński <kam...@gmail.com 
> > wrote:
>
>> I guess the answer can be found on the same page I referred.
>>
>>
>>1. final Flow<Message, Message, BoxedUnit> flow = stack.atop(stack.
>>reversed()).join(pingpong);
>>
>> I need to close BidiFlow from one end to get Flow of the other end
>>
>> On Wednesday, 16 September 2015 23:09:46 UTC+2, paweł kamiński wrote:
>>>
>>> hi,
>>>
>>> I folowed 
>>> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-graphs.html#Bidirectional_Flows
>>>  
>>> and its section describing BidiFlow but my brain has just fried when I 
>>> tried to figure out how to expose left side of such BidiFlow as Flow. 
>>>
>>> in other words how to hide the fact that we are connecting to BidiFlow.
>>>
>>> lets stick with example from the page above
>>>
>>>
>>>1. BidiFlow<Message, ByteString, ByteString, Message, BoxedUnit> 
>>>codecVerbose = BidiFlow.factory().create(b -> {
>>>2. final FlowShape<Message, ByteString> top = b.graph(Flow. 
>>>empty().map(BidiFlowDocTest::toBytes));
>>>3. final FlowShape<ByteString, Message> bottom = b.graph(Flow.<
>>>ByteString> empty().map(BidiFlowDocTest::fromBytes));
>>>4. return new BidiShape<>(top, bottom);
>>>5. });
>>>6. +--+
>>>7. Message ~>| |~> ByteString
>>>8. | bidi |
>>>9. Message <~| |<~ ByteString
>>>10. +--+
>>>
>>> And I would like to expose to the world a flow like
>>>
>>>
>>>1. Flow<Message, Message, BoxedUnit> codecVerboseAsFlow = Flow.
>>>factory().create(b -> {
>>>2. b.graph(...);
>>>3. return new Pair<>(??.inlet(), ??.out());
>>>4. });
>>>
>>> again thanks for help
>>>
>
>



Re: [akka-user] Re: Akka streams 1.0 with java dsl - exposing BidiFlow as Flow

2015-09-18 Thread paweł kamiński
thanks for the detailed answer, and yes, now I understand it. it just wasn't 
explained in detail in the docs, and I got the impression it should be doable and 
useful :)

On Friday, 18 September 2015 13:07:40 UTC+2, drewhk wrote:
>
> Hi Pawel,
>
> On Fri, Sep 18, 2015 at 12:19 PM, paweł kamiński <kam...@gmail.com 
> > wrote:
>
>> exactly, I want to pass my BidiFlow as Flow to some component that knows 
>> how to connect it. so it doesn't really matter if the flow is simple 
>> mapping/codec or some composite structure - which is exactly what you are 
>> describing and I am just passing something more sophisticated.
>>
>> I just cannot find a simple way (apart from my prev comment - where I 
>> simply close BidiFlow from one end and I get a flow as result) to expose 
>> BidiFlow's ends as Flow which would make life more simple.
>>
>
> You cannot do that unfortunately, since Akka needs to enforce that all 
> ports are properly connected. If you would pass it around a BidiFlow as a 
> Flow, then you would be able to materialize it: 
>
> src.via(bidiFlowAsFlow).runWith(sink)
>
> However, it would not work, since two ports of the BidiFlow would be 
> unconnected. 
>
> You can materialize the BidiFlow with Publishers/Subscribers on all ports 
> though, then re-wrap them in separate Flows. Remember though that in this 
> case those Flows are not reusable.
>
> Example (have not compiled it):
>
>  val publisherSubscriberFlow = Flow.wrap(Sink.publisher, 
> Source.subscriber)(Keep.both)
>  val closedBidi = 
> publisherSubscriberFlow.join(bidi).join(publisherSubscriberFlow)
>  val ((topPub, topSub), (bottomPub, bottomSub)) = closedBidi.run()
>
>  val flow1 = Flow.wrap(Sink(topSub), Source(topPub))
>  val flow2 = Flow.wrap(Sink(bottomSub), Source(bottomPub))
>
> Remember though that those flows are no longer reusable since they wrap an 
> already running entity. I personally recommend against this route though, 
> but if there is really no other way, it should work.
>
> -Endre
>  
>
>>
>>
>> On Friday, 18 September 2015 12:09:39 UTC+2, drewhk wrote:
>>>
>>> Hi Pawel,
>>>
>>> The alternative is that it is not the BidiFlow that is provided to the 
>>> part which has the Flow, but the Flow is provided to the part that has the 
>>> BidiFlow. That way the part that has the Flow does not need to know how it 
>>> will be connected in a potentially larger thing.
>>>
>>> -Endre
>>>
>>> On Fri, Sep 18, 2015 at 12:00 PM, paweł kamiński <kam...@gmail.com> 
>>> wrote:
>>>
>>>> I guess the answer can be found on the same page I referred to.
>>>>
>>>> final Flow<Message, Message, BoxedUnit> flow =
>>>>     stack.atop(stack.reversed()).join(pingpong);
>>>>
>>>> I need to close the BidiFlow from one end to get a Flow of the other end.
>>>>
>>>> On Wednesday, 16 September 2015 23:09:46 UTC+2, paweł kamiński wrote:
>>>>>
>>>>> hi,
>>>>>
>>>>> I followed 
>>>>> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-graphs.html#Bidirectional_Flows
>>>>> and its section describing BidiFlow, but my brain just fried when I 
>>>>> tried to figure out how to expose the left side of such a BidiFlow as a 
>>>>> Flow; in other words, how to hide the fact that we are connecting to a 
>>>>> BidiFlow.
>>>>>
>>>>> Let's stick with the example from the page above:
>>>>>
>>>>> BidiFlow<Message, ByteString, ByteString, Message, BoxedUnit> codecVerbose =
>>>>>     BidiFlow.factory().create(b -> {
>>>>>         final FlowShape<Message, ByteString> top =
>>>>>             b.graph(Flow.<Message>empty().map(BidiFlowDocTest::toBytes));
>>>>>         final FlowShape<ByteString, Message> bottom =
>>>>>             b.graph(Flow.<ByteString>empty().map(BidiFlowDocTest::fromBytes));
>>>>>         return new BidiShape<>(top, bottom);
>>>>>     });
>>>>>
>>>>>           +------+
>>>>> Message ~>|      |~> ByteString
>>>>>           | bidi |
>>>>> Message <~|      |<~ ByteString
>>>>>           +------+
>>>>>
>>>>> And I would like to expose to the world a flow like
>>>>>
>>>>> Flow<Message, Message, BoxedUnit> codecVerboseAsFlow = Flo

[akka-user] Akka streams 1.0 with java sdl - exposing BidiFlow as Flow

2015-09-16 Thread paweł kamiński
hi,

I followed 
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-graphs.html#Bidirectional_Flows
and its section describing BidiFlow, but my brain just fried when I tried to 
figure out how to expose the left side of such a BidiFlow as a Flow; in other 
words, how to hide the fact that we are connecting to a BidiFlow.

Let's stick with the example from the page above:


BidiFlow<Message, ByteString, ByteString, Message, BoxedUnit> codecVerbose =
    BidiFlow.factory().create(b -> {
        final FlowShape<Message, ByteString> top =
            b.graph(Flow.<Message>empty().map(BidiFlowDocTest::toBytes));
        final FlowShape<ByteString, Message> bottom =
            b.graph(Flow.<ByteString>empty().map(BidiFlowDocTest::fromBytes));
        return new BidiShape<>(top, bottom);
    });

          +------+
Message ~>|      |~> ByteString
          | bidi |
Message <~|      |<~ ByteString
          +------+

And I would like to expose to the world a flow like

Flow<Message, Message, BoxedUnit> codecVerboseAsFlow = Flow.factory().create(b -> {
    b.graph(...);
    return new Pair<>(??.inlet(), ??.out());
});

again thanks for help
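[Editor's note] Conceptually, a codec-style BidiFlow is a pair of transformations (outbound and inbound), and joining it with a Flow on one side leaves a plain Flow visible on the other side. A plain-Java sketch of that idea (the `Bidi` class and names below are hypothetical stand-ins, not the Akka API):

```java
import java.util.function.Function;

// Simplified model of a codec-style BidiFlow: an outbound and an inbound function.
final class Bidi<I1, O1, I2, O2> {
    final Function<I1, O1> outbound; // top line:    I1 ~> O1
    final Function<I2, O2> inbound;  // bottom line: O2 <~ I2

    Bidi(Function<I1, O1> outbound, Function<I2, O2> inbound) {
        this.outbound = outbound;
        this.inbound = inbound;
    }

    // join closes the right side with a "flow" (here just a function O1 -> I2),
    // leaving a plain I1 -> O2 transformation exposed on the left.
    Function<I1, O2> join(Function<O1, I2> rightSide) {
        return outbound.andThen(rightSide).andThen(inbound);
    }
}

public class BidiJoinSketch {
    public static void main(String[] args) {
        // toy codec: Message = String, wire format = byte[]
        Bidi<String, byte[], byte[], String> codec =
            new Bidi<>(String::getBytes, String::new);
        // "pingpong" stand-in: echo the bytes back unchanged
        Function<String, String> flow = codec.join(bytes -> bytes);
        System.out.println(flow.apply("hello")); // prints "hello"
    }
}
```

This mirrors why `stack.atop(stack.reversed()).join(pingpong)` yields a `Flow<Message, Message, ...>`: only after one side is closed do the remaining two ports form a Flow shape.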



Re: [akka-user] Akka Streams handling tcp request with dynamic response

2015-09-08 Thread paweł kamiński
thanks, I'll check this out! I've somehow overlooked it in the documentation

On Tuesday, 8 September 2015 13:27:11 UTC+2, Akka Team wrote:
>
> Hi Pawel,
>
> Instead of Iterable and mapConcat you can use map, and 
> flatten(FlattenStrategy.concat).
>
> For example:
>
>   inputs.map { in => sourceFor(in) }.flatten(FlattenStrategy.concat)
>
> I have written some TCP examples for an older ML thread, they might help 
> to give you some inspiration: 
> https://gist.github.com/drewhk/25bf7472db04b5699b80
>
> (try to run them to see what they do)
>
> -Endre
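[Editor's note] The map-then-flatten pattern Endre suggests is, shape-wise, the streams equivalent of flatMap: each input element expands to a sub-sequence, and the sub-sequences are concatenated in order. A plain-Java sketch of that shape (not the Akka API; `sourceFor` is a hypothetical stand-in):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FlattenConcatSketch {
    // Stand-in for "sourceFor(in)": each input expands to a variable-length
    // sequence of response elements.
    static List<String> sourceFor(String in) {
        return Arrays.asList(in + "-part1", in + "-part2");
    }

    public static void main(String[] args) {
        // map each element to a sub-sequence, then concatenate the sub-sequences
        // in order -- the same shape as map(...).flatten(FlattenStrategy.concat).
        List<String> out = Stream.of("a", "b")
            .map(FlattenConcatSketch::sourceFor)
            .flatMap(List::stream)
            .collect(Collectors.toList());
        System.out.println(out); // [a-part1, a-part2, b-part1, b-part2]
    }
}
```

The streams version additionally backpressures each inner source, which `Stream.flatMap` has no notion of.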
>
> On Sun, Sep 6, 2015 at 7:36 PM, paweł kamiński <kam...@gmail.com> wrote:
>
>> hi,
>> I have a problem with defining a flow that reacts to incoming data and 
>> producing dynamic response.
>>
>> lets say I have a tcp server
>>
>> Tcp.get(system)
>>    .bind(host, port)
>>    .to(foreach(con -> {
>>        con.handleWith(handlerFlow, materializer);
>>    }))
>>    .run(materializer);
>>
>>
>> now the handler flow is something like
>>
>>
>> Flow<ByteString, ByteString, BoxedUnit> handler = Flow.of(ByteString.class)
>>     .via(Framing.delimiter(delimiter, maximumFrameLength, false))
>>     .map(incoming -> {
>>         // decode: read the data and start producing response data based on it
>>     })
>>     .map(outgoing -> outgoing.concat(delimiter));
>>
>>
>> Now I would like to produce a variable number of events as a response. 
>>
>> mapConcat would be a good choice, but it is required to return an Iterable 
>> rather than some kind of source, producer or stream. 
>>
>> I tried to work with the flow factories and use Zip or Concat, but they 
>> become immutable once constructed.
>>
>> The scenario would be to respond with different binary files based on 
>> incoming data over a single connection.
>>
>> Thanks for the help and for clarifying.
>>
>>
>
>
>
> -- 
> Akka Team
> Typesafe - Reactive apps on the JVM
> Blog: letitcrash.com
> Twitter: @akkateam
>



[akka-user] Akka Streams handling tcp request with dynamic response

2015-09-06 Thread paweł kamiński
hi,
I have a problem with defining a flow that reacts to incoming data and 
produces a dynamic response.

let's say I have a TCP server:

Tcp.get(system)
   .bind(host, port)
   .to(foreach(con -> {
       con.handleWith(handlerFlow, materializer);
   }))
   .run(materializer);


now the handler flow is something like


Flow<ByteString, ByteString, BoxedUnit> handler = Flow.of(ByteString.class)
    .via(Framing.delimiter(delimiter, maximumFrameLength, false))
    .map(incoming -> {
        // decode: read the data and start producing response data based on it
    })
    .map(outgoing -> outgoing.concat(delimiter));


Now I would like to produce a variable number of events as a response. 

mapConcat would be a good choice, but it is required to return an Iterable 
rather than some kind of source, producer or stream. 

I tried to work with the flow factories and use Zip or Concat, but they 
become immutable once constructed.

The scenario would be to respond with different binary files based on 
incoming data over a single connection.

Thanks for the help and for clarifying.
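[Editor's note] `Framing.delimiter` re-chunks the raw byte stream into delimiter-terminated frames before the `map` stages see it. A minimal plain-Java model of that framing step (a sketch over a String buffer, not the actual ByteString implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class DelimiterFramingSketch {
    // Split the buffered input on a single-character delimiter, dropping the
    // delimiter itself -- the same job Framing.delimiter does on ByteStrings.
    static List<String> frames(String buffered, char delimiter) {
        List<String> frames = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : buffered.toCharArray()) {
            if (c == delimiter) {          // frame boundary reached
                frames.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        return frames;                     // a trailing partial frame stays buffered
    }

    public static void main(String[] args) {
        System.out.println(frames("GET a\nGET b\npartial", '\n')); // [GET a, GET b]
    }
}
```

The real stage also enforces `maximumFrameLength` and keeps the trailing partial frame for the next chunk; both are omitted here.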



Re: [akka-user] Re: akka streams 1.0 and http routing - in java

2015-08-26 Thread paweł kamiński
that worked! you've saved my life again.

On Wednesday, 26 August 2015 10:27:44 UTC+2, Johannes Rudolph wrote:

 Hi paweł, 

 yes, that's an unfortunate detail of the Java API. Whatever you put in 
 there as the Object value will be ignored. So, to convert your flow 
 you should be able to use `.<Object>mapMaterializedValue(m -> m)` to 
 convert the type. 

 Johannes 


 On Tue, Aug 25, 2015 at 11:19 PM, paweł kamiński kam...@gmail.com wrote: 



 -- 
 Johannes 

 --- 
 Johannes Rudolph 
 http://virtual-void.net 




[akka-user] Re: akka streams 1.0 and http routing - in java

2015-08-25 Thread paweł kamiński
OK I give up again :)

so if I have 

Source<Integer, BoxedUnit> source = producer.produce(10, 100);


and an encoder that transforms integers to ByteStrings


final Flow<Integer, ByteString, BoxedUnit> encoder = Flow
    .of(Integer.class)
    .map(number -> {
        final ByteStringBuilder builder = new ByteStringBuilder();
        builder.putByte((byte) 4);
        builder.putInt(number, ByteOrder.BIG_ENDIAN);

        return builder.result();
    })
    .named("encoder");


I apply the transformation as Source<ByteString, BoxedUnit> data = 
source.via(encoder), but how do I get Source<ByteString, *Object*> data?


Flow.of(Integer.class) creates a Flow that materializes to BoxedUnit.


What does Object stand for?
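[Editor's note] The tag-byte-plus-big-endian-int framing this encoder produces can be reproduced in plain Java with a ByteBuffer (a sketch to make the wire layout concrete, independent of ByteStringBuilder):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class EncoderFramingSketch {
    // Same layout as the ByteStringBuilder above: one tag byte (4), then the
    // integer in big-endian byte order -- 5 bytes per element.
    static byte[] encode(int number) {
        return ByteBuffer.allocate(5)
            .order(ByteOrder.BIG_ENDIAN) // network byte order (ByteBuffer's default)
            .put((byte) 4)
            .putInt(number)
            .array();
    }

    public static void main(String[] args) {
        // 258 = 0x00000102, so the frame is tag 4 followed by 00 00 01 02
        System.out.println(Arrays.toString(encode(258))); // [4, 0, 0, 1, 2]
    }
}
```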


On Tuesday, 11 August 2015 09:44:15 UTC+2, Johannes Rudolph wrote:

 Hi paweł,

 you are on the right track. You need to provide the data as a 
 `Source<ByteString, Object>`. Then, you can create a response like this:

 Source<ByteString, Object> data = ...
 HttpResponse response = 
 HttpResponse.create().withEntity(HttpEntities.createChunked(contentType, 
 data));

 If you know the length of the data (i.e. the aggregated number of octets 
 in all the ByteStrings the Source provides), you can also use

 HttpEntities.create(contentType, length, data)

 to create a (non-chunked) streamed default entity.

 HTH
 Johannes
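[Editor's note] On the wire, a chunked entity such as the one `HttpEntities.createChunked` builds is rendered with HTTP/1.1 chunked transfer encoding: each body part is prefixed by its length in hex and the body ends with a zero-length chunk. A minimal plain-Java sketch of that rendering (independent of Akka; shown only to make "chunked" concrete):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class ChunkedEncodingSketch {
    // Render a sequence of body parts as an HTTP/1.1 chunked message body:
    // each chunk is "<hex length>\r\n<data>\r\n", terminated by "0\r\n\r\n".
    static String chunked(List<String> parts) {
        StringBuilder body = new StringBuilder();
        for (String part : parts) {
            byte[] bytes = part.getBytes(StandardCharsets.UTF_8);
            body.append(Integer.toHexString(bytes.length)).append("\r\n")
                .append(part).append("\r\n");
        }
        return body.append("0\r\n\r\n").toString();
    }

    public static void main(String[] args) {
        // "Hello" is 5 bytes, ", world" is 7 bytes
        System.out.print(chunked(Arrays.asList("Hello", ", world")));
    }
}
```

Because each ByteString from the Source can be emitted as soon as it arrives, the response streams without a known Content-Length, which is exactly why this fits a `Source<ByteString, Object>`.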



 On Monday, August 10, 2015 at 10:38:40 PM UTC+2, paweł kamiński wrote:
 hi,
 probably again I've overlooked it in the documentation, but I cannot find a 
 way to respond to an HTTP request with a streamed response. Probably 
 HttpEntityChunked should be used, but again I cannot figure out how to set 
 up the entities and flows correctly.


 this is basic example


 private RouteResult getEvents(RequestContext ctx)
 {
     final Source<Integer, BoxedUnit> source = producer.produce(10, 100);
     return ctx.complete(HttpResponse.create());
 }


 thanks for any hint on that




[akka-user] Re: akka streams 1.0 and http routing - in java

2015-08-11 Thread paweł kamiński
ah :]

thanks, I guess my bad was that I tried to use Source<T, BoxedUnit> and it 
wasn't accepted... and I was blind :))

On Tuesday, 11 August 2015 09:44:15 UTC+2, Johannes Rudolph wrote:





Re: [akka-user] akka streams 1.0 - actor publisher do not receive demand

2015-08-07 Thread paweł kamiński
totally agree, the second solution would work perfectly. BUT then I don't 
need a custom publisher. 

I look at publishers as a way to integrate with legacy systems that push 
events, and of course maybe I need to tune the buffer limit a bit. 
I thought the consumer would signal greater demand at some point, would have 
some logic to detect that it can consume more, or something. 

but yeah, my producer thread is filling up the publisher's mailbox and the 
consumer's demand event is somewhere at the end, so most of the messages are 
rejected. I thought that maybe the demand event would have some sort of 
higher priority. 

On Friday, 7 August 2015 08:23:55 UTC+2, Patrik Nordwall wrote:

 You don't have any backpressure when sending the messages to the actor. 
 That is done outside of Akka streams. It will happily send all 100 messages 
 immediately (this is extremely fast) and fill up the buffer.
 If you used an iterator-based Source to emit the 100 elements instead 
 of sending them via the ActorPublisher actor, you would have backpressure 
 all the way.

 What is it that you are trying to build?

 On Thu, Aug 6, 2015 at 9:42 PM, paweł kamiński kam...@gmail.com wrote:

 thanks for your answer, 

 I am more curious whether this is my configuration, in an environment where 
 3 threads compete for resources (which probably causes the problem I've 
 described), or a general problem with notifying the publisher about demand. 
 In other words, if I have a fast consumer (and adding to a list is quite 
 fast), what is the throughput of the publisher? I cannot imagine dropping 
 almost all messages in a real-life scenario. 

 I hope the above makes sense :)

 thanks for the link for the second question! I've somehow overlooked it!

 On Wednesday, 5 August 2015 11:25:41 UTC+2, Patrik Nordwall wrote:



 On Sun, Aug 2, 2015 at 1:42 PM, paweł kamiński kam...@gmail.com wrote:


[akka-user] akka streams 1.0 - actor publisher do not receive demand

2015-08-02 Thread paweł kamiński
hi,
I have a simple actor publisher based on 
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-integrations.html#ActorPublisher

public class MessageProducer extends AbstractActorPublisher<Message>
{
    private final static Logger logger =
            LoggerFactory.getLogger(MessageProducer.class);
    private final ArrayDeque<Message> buf;

    public MessageProducer(int maxBufferSize)
    {
        buf = new ArrayDeque<>(maxBufferSize);
        receive(ReceiveBuilder
                .match(Message.class, msg -> buf.size() == maxBufferSize,
                       msg -> {
                           if (logger.isTraceEnabled())
                           {
                               logger.trace("Denying {}. buffer size is {}.",
                                            msg, buf.size());
                           }
                           sender().tell(new Fail(msg.getMDC(), "denied"), self());
                       })
                .match(Message.class,
                       msg -> {
                           buf.addLast(msg);
                           drain(totalDemand());
                       })
                .match(ActorPublisherMessage.Request.class,
                       request -> drain(totalDemand()))
                .match(ActorPublisherMessage.Cancel.class, cancel -> stop())
                .match(ActorPublisherMessage.SubscriptionTimeoutExceeded.class,
                       cancel -> stop())
                .match(Status.Success.class, cancel -> stop())
                .match(PoisonPill.class, cancel -> stop())
                .matchAny(this::unhandled)
                .build()
        );
    }

    @Override
    public Duration subscriptionTimeout()
    {
        return Duration.create(1, TimeUnit.SECONDS);
    }

    private void drain(long demand)
    {
        final int bufferSize = buf.size();

        logger.debug("Stream is active {}. {}", isActive(), bufferSize);
        if (!isActive())
        {
            return;
        }

        long maxItems = min(demand, bufferSize);

        logger.trace("Draining buffer with {} items. demand is {}.",
                     maxItems, demand);
        Stream
            .iterate(0, i -> i + 1)
            .limit(maxItems)
            .forEach(i -> {
                Message msg = buf.poll();
                logger.trace("Sending message {}.", msg);
                onNext(msg);
            });
    }

    private void stop() {
        context().stop(self());
    }
}


and I am creating the stream 


final Source<Message, ActorRef> stringSource = Source.actorPublisher(producerProps);
// I construct the producer with a 5-element buffer, but actually it is irrelevant.
final ActorRef producerRef = stringSource
    .map(msg -> msg.toString().toLowerCase())
    .to(Sink.foreach(item -> {
        logger.info("got message {}", item);
        messages.add(item);
    }))
    .run(materializer);

final int requestedElementsCount = 100;
Thread thread = new Thread(() -> {
    Stream.iterate(0, i -> i + 1)
          .limit(requestedElementsCount)
          .forEach(i -> {
              producerRef.tell(new Result("Index " + i), noSender());

              //sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
          });
});
thread.start();


after starting the thread I await until *messages* has *requestedElementsCount* 
elements, but it never happens unless I add a sleep to the thread above.


1) I cannot figure out why that is. First of all, MessageProducer is active, but 
I can see in the logs that demand is 0; then the buffer fills up and further 
messages are denied. Is this my system/JVM/etc.?

I thought that the producer, the message publisher and the consumer run on 
different threads, and that there should be no problem with consuming 100 
messages, mapping them and putting the items into the *messages* list.


here is a sample output

13:24:42.813 [PRODUCER_AKKA_SYSTEM-akka.actor.default-dispatcher-7] TRACE 
c.f.d.a.s.p.actor.MessageProducer - Stream is active true. 1

13:21:04.029 [PRODUCER_AKKA_SYSTEM-akka.actor.default-dispatcher-7] TRACE 
c.f.d.a.s.p.actor.MessageProducer - Draining buffer with 0 items. demand is 0. 
// <- even though there are elements in the buffer, demand is 0

13:24:42.813 [PRODUCER_AKKA_SYSTEM-akka.actor.default-dispatcher-7] TRACE 
c.f.d.a.s.p.actor.MessageProducer - Stream is active true. 2

13:21:04.034 [PRODUCER_AKKA_SYSTEM-akka.actor.default-dispatcher-7] TRACE 
c.f.d.a.s.p.actor.MessageProducer - Draining buffer with 0 items. demand is 
0.

13:24:42.813 [PRODUCER_AKKA_SYSTEM-akka.actor.default-dispatcher-7] TRACE 
c.f.d.a.s.p.actor.MessageProducer - Stream is active true. 3

13:21:04.034 [PRODUCER_AKKA_SYSTEM-akka.actor.default-dispatcher-7] TRACE 
c.f.d.a.s.p.actor.MessageProducer - Draining buffer with 0 items. demand is 
0.

13:24:42.813 [PRODUCER_AKKA_SYSTEM-akka.actor.default-dispatcher-7] TRACE 
c.f.d.a.s.p.actor.MessageProducer - Stream is active true. 4

13:21:04.035 [PRODUCER_AKKA_SYSTEM-akka.actor.default-dispatcher-7] TRACE 
c.f.d.a.s.p.actor.MessageProducer - Draining buffer with 0 items. demand is 
0.
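[Editor's note] Patrik's explanation can be modelled without Akka: the producer thread wins the race, so all 100 data messages sit in the publisher's mailbox before the subscriber's first Request message, and everything beyond the buffer capacity is denied. A plain-Java sketch of that mailbox ordering (a deliberately simplified model, not the ActorPublisher API):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class MailboxOrderingSketch {
    public static void main(String[] args) {
        final int maxBufferSize = 5;
        Queue<String> mailbox = new ArrayDeque<>();

        // The producer thread wins the race: all data messages are enqueued
        // before the subscriber's Request message arrives.
        for (int i = 0; i < 100; i++) mailbox.add("data");
        mailbox.add("request:100"); // demand shows up at the end of the mailbox

        Queue<String> buf = new ArrayDeque<>();
        long demand = 0, delivered = 0, denied = 0;
        for (String msg : mailbox) {
            if (msg.startsWith("request:")) {
                demand += Long.parseLong(msg.substring(8));
            } else if (buf.size() == maxBufferSize) {
                denied++;                  // buffer full and demand still 0: denied
                continue;
            } else {
                buf.add(msg);
            }
            while (demand > 0 && !buf.isEmpty()) { // drain(min(demand, buf.size()))
                buf.poll();
                demand--;
                delivered++;
            }
        }
        System.out.println("delivered=" + delivered + " denied=" + denied);
        // delivered=5 denied=95: only the buffered elements survive
    }
}
```

Messages are processed strictly in mailbox order, with no priority for Request, which is why inserting a sleep in the producer thread (letting Request messages interleave) "fixes" the test.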