[akka-user] Different response codes in complete?

2015-07-16 Thread Jason Martens
Using akka-http, say I have a route like this:

path("login") {
  post {
    entity(as[Credentials]) { credentials =>
      complete(authenticateCredentials(credentials))
    }
  }
}


Is there a way to complete the request with an Unauthorized response if the 
credentials fail? For instance, I look up the user in the database and get 
a None. How can I turn that into an Unauthorized HTTP response?
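A hedged sketch of one way this could look (assuming a hypothetical `authenticateCredentials` that returns an `Option[User]`, with a marshaller for `User` in scope):

```scala
path("login") {
  post {
    entity(as[Credentials]) { credentials =>
      authenticateCredentials(credentials) match {
        case Some(user) => complete(user)                     // 200 OK
        case None       => complete(StatusCodes.Unauthorized) // 401
      }
    }
  }
}
```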

Thanks,

Jason

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Re: ANNOUNCE: Akka Streams HTTP 1.0

2015-07-16 Thread Filippo De Luca
Awesome, guys, very good job!

On 16 July 2015 at 00:12, Viktor Klang viktor.kl...@gmail.com wrote:

 Awesome, Andrey!

 --
 Cheers,
 √
 On 15 Jul 2015 18:10, Andrey Kuznetsov f...@loathing.in wrote:

 Awesome, it's a historic day!

 I can't imagine how our project would look without Akka and, especially,
 Akka Streams.
 We were early adopters of Streams and used them to implement a fully-featured
 messaging platform (read: a self-hosted Layer.com). Just upgraded to Streams
 1.0 too! https://github.com/actorapp/actor-platform.
 We have been using them since the first milestone version and ran
 production on RC3. We used the Akka project's SBT template. It helped us to
 provide modularity from the start of the project. We even used Akka's
 scalariform settings to make our code clearer.
 We ported Actors to Android, iOS and Web.

 We are very thankful to the Akka Team for your product and for your help here
 in akka-user. In return for your contribution we will be glad to make our
 customers your customers too. Looking forward to trying the Typesafe
 Subscription with our production systems!

 On Wednesday, July 15, 2015 at 3:40:25 PM UTC+3, Konrad Malawski wrote:

 Dear hakkers,

 we—the Akka committers—are very pleased to announce the final release of
 Akka Streams & HTTP 1.0. After countless hours and many months of work we
 now consider Streams & HTTP good enough for evaluation and production use,
 subject to the caveat on performance below. We will continue to improve the
 implementation as well as to add features over the coming months, which
 will be marked as 1.x releases—in particular concerning HTTPS support
 (exposing certificate information per request and allowing session
 renegotiation) and websocket client features—before we finally add these
 new modules to the 2.4 development branch. In the meantime both Streams and
 HTTP can be used with Akka 2.4 artifacts since these are binary backwards
 compatible with Akka 2.3.
 A Note on Performance

 Version 1.0 is fully functional but not yet optimized for performance.
 To make it very clear: Spray currently is a lot faster at serving HTTP
 responses than Akka HTTP is. We are aware of this and we know that a lot of
 you are waiting to use it in anger for high-performance applications, but
 we follow a “correctness first” approach. After 1.0 is released we will
 start working on performance benchmarking and optimization; the focus of
 the 1.1 release will be on closing the gap to Spray.
 What Changed since 1.0-RC4

- Plenty of documentation improvements on advanced stages
  (https://github.com/akka/akka/pull/17966), modularity
  (https://github.com/akka/akka/issues/17337) and Http javadsl
  (https://github.com/akka/akka/pull/17965),
- Improvements to Http stability under high load
  (https://github.com/akka/akka/issues/17854),
- The streams cookbook translated to Java
  (https://github.com/akka/akka/issues/16787),
- A number of new stream operators: recover
  (https://github.com/akka/akka/pull/17998) and generalized UnzipWith
  (https://github.com/akka/akka/pull/17998), contributed by Alexander Golubev,
- The javadsl for Akka Http (https://github.com/akka/akka/pull/17988)
  is now nicer to use from Java 8 and when returning Futures,
- Also, Akka Streams and Http should now be properly packaged for OSGi
  (https://github.com/akka/akka/pull/17979), thanks to Rafał Krzewski.

 The complete list of closed tickets can be found in the 1.0 milestones
 of streams
 https://github.com/akka/akka/issues?q=milestone%3Astreams-1.0 and http
 https://github.com/akka/akka/issues?q=milestone%3Ahttp-1.0 on github.
 Release Statistics

 Since the RC4 release:

- 32 tickets closed,
- 252 files changed, 16861 insertions(+), 1834 deletions(-),
- … and a total of 9 contributors!

 commits  added  removed
      26   2342      335  Johannes Rudolph
      11  10112       97  Endre Sándor Varga
       9    757      173  Martynas Mickevičius
       8   2821      487  Konrad Malawski
       3     28       49  2beaucoup
       3    701      636  Viktor Klang
       2     43        7  Rafał Krzewski
       2    801       42  Alexander Golubev
       1      8        8  Heiko Seeberger

 --

 Cheers,

 Konrad 'ktoso' Malawski

 Akka http://akka.io/ @ Typesafe http://typesafe.com/


[akka-user] Re: Newbie Questions About PersistentView and Populating Read Datastores

2015-07-16 Thread haghard
-That looks like a convenient syntax that does the same thing as a 
persistent view does, tailored for users of scalaz streams, or am I missing 
something? 
Yes

-Can you take multiple eventlogs and combine them into one reproducible 
ordered stream. So that if you make decisions (validate preconditions 
against that read model) based on that stream, can you guarantee that the 
next replay up would recreate the same results? I think that is where it 
becomes very tricky.

The order in which you emit events is entirely up to you; scalaz-streams 
provides an API for this purpose. 
You can build your read order based on some metadata in the events, or just 
read the first available.
For example: 
https://github.com/haghard/sport-center/blob/b35ccec59c65c674334ff7a1836bb54e829a1ed5/query-side-results/src/main/scala/view/ResultsViewRouter.scala#L43
  

On Wednesday, 15 July 2015 at 19:33:19 UTC+3, Magnus Andersson wrote:

 Hi

 Hagard: That looks like a convenient syntax that does the same thing as a 
 persistent view does, tailored for users of scalaz streams, or am I missing 
 something? 

 Can you take multiple eventlogs and combine them into one reproducible 
 ordered stream. So that if you make decisions (validate preconditions 
 against that read model) based on that stream, can you guarantee that the 
 next replay up would recreate the same results? I think that is where it 
 becomes very tricky.

 /Magnus

 On Tuesday, 14 July 2015 at 13:51:25 UTC+2, haghard wrote:


 Hi,

 Please take a look at https://github.com/krasserm/streamz
 You can read a concrete journal with ```replay(processorA)``` and write it 
 anywhere manually, without interacting with PersistentView.

 Hope it helps

 On Tuesday, 14 July 2015 at 10:20:29 UTC+3, Amiri Barksdale wrote:

 I've been reading up here on PersistentActor, and I think I get how that 
 works to perform commands and write the result to an event store. I also 
 think I understand that PersistentViews can subscribe to a PersistentActor 
 and receive notification of each event stored for that PersistentActor 
 type. I want to take a PersistentView and use it to update a separate Read 
 datastore.

 I don't want to treat the PersistentView itself as a read store, but I 
 want to make it trigger the creation or updating or saving of a 
 projection of the event in some other store, like, e.g., Elasticsearch or 
 Postgresql. Are there any guidelines, best practices, or examples of how to 
 do this?

 One thread (
 https://groups.google.com/forum/#!searchin/akka-user/persistentview/akka-user/rMHjwBZpocQ/SmfAGMg7G68J)
  
 from June 2014 seemed to indicate that this sort of writing to another 
 store would require PersistentViews to be able to read from multiple 
 PersistentActors for this to be feasible. Is this still true?

 What is the path to take here, to get the actor system populating my 
 read stores? Are people instead creating projections directly from the 
 event store itself, like Greg Young's EventStore allows?

 Any insight is welcome!

 Amiri





[akka-user] Re: Newbie Questions About PersistentView and Populating Read Datastores

2015-07-16 Thread Giovanni Alberto Caporaletti
Ok, thanks. I guess in this simple case I don't need a persistent view; I 
could directly save the events (asynchronously) to the read store so that I 
don't have to wait the 5 seconds.

Persistent views are not of much use if I don't keep a modified (aggregated or 
not) in-memory view of my data, or at least that's what I understood. 
If I only need to save the event to a view store I could just send a 
message to a regular actor. It would create coupling (push), but not having 
the delay can be important sometimes...

Has anyone tested the overhead of setting auto-update-interval to something 
like 200-300 ms?
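For reference, the interval being discussed is a plain setting in the Akka 2.3 persistence reference config; a sketch with a hypothetical value:

```
akka.persistence.view.auto-update-interval = 250ms
```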

Cheers
G

On Thursday, 16 July 2015 02:43:25 UTC+2, Magnus Andersson wrote:

 Hi

 Giovanni: Yes, I suppose you could write directly to a read store in this 
 case. As I wrote in my case I wanted the projection in memory as well to 
 validate conditions before creating output (like allowing the user to 
 download a file).

 Amiri: Yes that is correct. The difference between 1 and 2 in the pull 
 model is that order does matter in the second scenario. I tried to describe 
 the second scenario in contrast to the first scenario which is the simple 
 case.

 Let's say you have a read model where a certain condition might change 
 depending on the order events were read, and this particular data depends on 
 multiple events, each stored in a separate event log. How do you guarantee 
 that the read model will look the same, and that all the decisions you took 
 based on that model previously are still valid, the next time you create a 
 projection and write a new read model (i.e. replay your views from the 
 beginning)? 

 You would need some way to remember the order the messages were retrieved 
 and processed the first time around in the aggregated view when you create 
 your projection. For Greg Young's Event Store my understanding is that they 
 can guarantee ordering across multiple event logs for their aggregated view 
 (a shared global event counter?), and I believe Kafka also has this 
 guarantee for its topics. 

 In the case of multiple Akka PersistentActors, each with their own event 
 counter and event log, you don't have this guarantee. If you wish to work 
 around this limitation and have consistent projections, where you create a 
 snapshot across multiple event logs that is also reproducible, then you 
 need to record the order in which your aggregator received the events. 

 I had to give this a good think over again, so thank you for your replies. 
 Not saying the scenarios I described are perfect in any way, just trying to 
 figure out how to do aggregation over event logs today, when support from 
 the akka libraries is not there yet. :)

 /Magnus

 On Wednesday, 15 July 2015 at 20:03:46 UTC+2, Amiri Barksdale wrote:


 On Wednesday, July 15, 2015 at 10:43:55 AM UTC-7, Giovanni Alberto 
 Caporaletti wrote:

 Hi Magnus.
 a question concerning your pull model: if the updates are idempotent, 
 why would I need a parent aggregate? Can't the views directly populate the 
 read store?  



 This is a great question, and I would like clarification on this as well, 
 because I don't understand how part 1 and 2 hang together after all. It 
 seems like part 2 of this pull model, with the Parent aggregate 
 PersistentActor, is actually meant to *impose* an order across the 
 children. Magnus, is this the case?

 Amiri





Re: [akka-user] Re: Connecting over Https using Akka-Http client

2015-07-16 Thread Jeroen Rosenberg
Thanks, that did the trick. For the record, I was doing:

 def gunzip(bytes: Array[Byte]) = {
   val output = new ByteArrayOutputStream()
   FileUtils.copyAll(new GZIPInputStream(new ByteArrayInputStream(bytes)),
     output)
   output.toString
 }

 ... // further in the code as part of my Flow graph
 .map(byteString => gunzip(byteString.toArray))


I replaced it with 

 .via(Gzip.decoderFlow)


Now it works :)

Thanks so much for your help!
On Thursday, July 16, 2015 at 4:24:11 PM UTC+2, Johannes Rudolph wrote:

 Hi Jeroen, 

 On Thu, Jul 16, 2015 at 4:05 PM, Jeroen Rosenberg 
 jeroen.r...@gmail.com javascript: wrote: 
  Anyways, it's working as expected. I do have one small unrelated issue. 
 The 
  stream I am consuming is in Gzip format, so I'm unzipping the 
 ByteStrings as 
  they come in as part of my FlowGraph. However, when I try to unzip the 
  ByteString coming out of the response.entity.dataBytes source I'm 
 getting 
  errors on the Gzip format. As if the Chunks I'm getting are incomplete. 

 Can you show some code? Are you using 
 `akka.http.scaladsl.coding.Gzip`? If not, can you try if that works? 
 You should be able to either use `Gzip.decode(response)`, 
 `Gzip.decode(response.entity)`, or use `Gzip.decoderFlow` manually. 

 Johannes 




Re: [akka-user] Re: Connecting over Https using Akka-Http client

2015-07-16 Thread Jeroen Rosenberg
Hi Johannes, 

I found out that I made quite a stupid mistake. In my setup of the client 
I was already putting in host & port, and in the creation of the HttpRequest I 
was creating a fully qualified URL instead of a Uri without host and port. 
Apparently, my mock service didn't care about this :/ It was also difficult 
to see the problem, because in the logs it appeared as if I was calling the 
correct URL.

Anyways, it's working as expected. I do have one small unrelated issue. The 
stream I am consuming is in Gzip format, so I'm unzipping the ByteStrings 
as they come in as part of my FlowGraph. However, when I try to unzip the 
ByteString coming out of the response.entity.dataBytes source I'm getting 
errors on the Gzip format. As if the Chunks I'm getting are incomplete. If 
I use java.net stuff combined with InputStreamSource:

val url = new URL(endpoint.toString)

val connection = url.openConnection().asInstanceOf[HttpURLConnection]

connection.setRequestProperty("Authorization", 
 req.getHeader("Authorization").get.value)

connection.setRequestProperty("Accept-Encoding", 
 req.getHeader("Accept-Encoding").get.value)

InputStreamSource(() => new GZIPInputStream(connection.getInputStream)).map 
 { processor ! _ }.runWith(Sink.ignore)

 


It works fine and ensures I'm getting elements line by line. Is 
response.entity.dataBytes chunking it differently by default, or do you 
have any other idea what I'm doing wrong?

Jeroen


On Wednesday, July 15, 2015 at 11:59:47 PM UTC+2, Johannes Rudolph wrote:

 Hi Jeroen, 

 it would be very helpful if you could somehow come up with a 
 reproducer against some publicly accessible endpoint which would show 
 the issue. It seemed to work for all the URLs I tested. 

 Johannes 

 On Wed, Jul 15, 2015 at 6:25 PM, Jeroen Rosenberg 
 jeroen.r...@gmail.com javascript: wrote: 
  Btw, 
  
  I tried connecting to the stream using plain old 
 java.net.HttpUrlConnection 
  
  val connection = new java.net.URL(...).openConnection() 
  
  // set headers 
 connection.getInputStream 
 connection.getResponseCode 
  
  this way it just works and I get status code 200. So it seems something 
 goes 
  wrong in akka http. 
  
  Jeroen 
  
  On Wednesday, July 15, 2015 at 5:01:32 PM UTC+2, Jeroen Rosenberg wrote: 
  
  Thnx Johannes for the swift reply :) 
  
  I'm using JDK 7. I strongly suspect the host I connect to 
  (stream.gnip.com) to be a virtual host (as they also provide other 
 endpoints 
  such as api.gnip.com). I just tried with 1.0 and it gives me the same 
  result. 
  
  Jeroen 
  
  On Wednesday, July 15, 2015 at 4:47:10 PM UTC+2, Johannes Rudolph 
 wrote: 
  
  Hi Jeroen, 
  
  is this a virtual host you are connecting against? This may hint 
 towards 
  the client not sending the TLS SNI extension correctly which could be 
 a bug 
  in akka-http or due to an old JDK version on your client. Which JDK 
 version 
  do you use? 
  
  https://en.wikipedia.org/wiki/Server_Name_Indication says that you 
 would 
  need at least JDK 7 to make SNI work (only relevant if the host you 
 connect 
  against is an HTTPS virtual host). 
  
  Also, you could try the just released 1.0 version (though I cannot 
 think 
  of a reason why that should fix it). 
  
  Johannes 
  
  On Wednesday, July 15, 2015 at 4:13:14 PM UTC+2, Jeroen Rosenberg 
 wrote: 
  
  I'm trying to connect to a third party streaming API over HTTPS using 
  akka-stream-experimental % 1.0-RC4 and akka-http-experimental % 
  1.0-RC4 
  
  My code looks like this: 
  
  class GnipStreamHttpClient(host: String, account: String, processor: 
  ActorRef) extends Actor with ActorLogging { 
  
    this: Authorization => 
  
    private val system = context.system 
  
    private val endpoint = Uri(s"https://$host/somepath") 
  
    private implicit val executionContext = system.dispatcher 
  
    private implicit val flowMaterializer: Materializer = 
      ActorMaterializer(ActorMaterializerSettings(system)) 
  
    val client = Http(system).outgoingConnectionTls(host, port, 
      settings = ClientConnectionSettings(system)) 
  
    override def receive: Receive = { 
  
      case response: HttpResponse if response.status.intValue / 100 == 2 => 
        response.entity.dataBytes.map(processor ! _).runWith(Sink.ignore) 
  
      case response: HttpResponse => 
        log.info(s"Got unsuccessful response $response") 
  
      case _ => 
        val req = HttpRequest(GET, endpoint) 
          .withHeaders(`Accept-Encoding`(gzip), Connection("Keep-Alive")) ~> 
          authorize 
  
        log.info(s"Making request: $req") 
  
        Source.single(req) 
          .via(client) 
          .runWith(Sink.head) 
          .pipeTo(self) 
    } 
  } 
  
  
  As a result I'm getting an HTTP 404 response. This doesn't make much 
  sense to me, as when I copy the full url to curl it just works: 
  
  curl --compressed -v -uuser:pass 
  https://my.streaming.api.com/somepath 
  
  
  Also when I connect to a mock 

Re: [akka-user] Re: Connecting over Https using Akka-Http client

2015-07-16 Thread 'Johannes Rudolph' via Akka User List
Hi Jeroen,

On Thu, Jul 16, 2015 at 4:05 PM, Jeroen Rosenberg
jeroen.rosenb...@gmail.com wrote:
 Anyways, it's working as expected. I do have one small unrelated issue. The
 stream I am consuming is in Gzip format, so I'm unzipping the ByteStrings as
 they come in as part of my FlowGraph. However, when I try to unzip the
 ByteString coming out of the response.entity.dataBytes source I'm getting
 errors on the Gzip format. As if the Chunks I'm getting are incomplete.

Can you show some code? Are you using
`akka.http.scaladsl.coding.Gzip`? If not, can you try if that works?
You should be able to either use `Gzip.decode(response)`,
`Gzip.decode(response.entity)`, or use `Gzip.decoderFlow` manually.

Johannes



[akka-user] Creating Source from external service - Redis, SQS, etc...

2015-07-16 Thread Alexander Zafirov
Hello, 

Before I explain my question I want to say I tried looking for a previously 
explored problem which relates to mine, but didn't find anything. If I am 
mistaken please excuse me and, if possible, provide me with a link to the 
right thread :)

I'm new to Akka Streams and am currently exploring the possibilities of 
integrating its great capabilities when interacting with a queuing service 
such as the ones mentioned in the title. My use case with Akka Streams is to 
be able to construct a source from an incoming message from the queue that I 
am subscribed to/am polling. Having gone through the Akka docs, on the 
*Integration* page I found that the sending-emails-to-tweet-authors example 
comes closest to my problem. 

We start with the tweet stream of authors:


   val authors: Source[Author, Unit] =
     tweets
       .filter(_.hashtags.contains(akka))
       .map(_.author)

It is unclear to me how these *tweets* come about.
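In the docs that snippet comes from, `tweets` is simply some pre-existing `Source[Tweet, _]`; where it comes from is left open. A hypothetical stand-in (the types and values below are mine, not verified against the docs):

```scala
import akka.stream.scaladsl.Source

// Hypothetical stand-ins for the doc's model:
final case class Hashtag(name: String)
final case class Author(handle: String)
final case class Tweet(author: Author, hashtags: Set[Hashtag])

val akka = Hashtag("#akka")

// Any Source[Tweet, Unit] would do here: a static list, an
// ActorPublisher-based publisher, or a stream fed from a queue consumer.
val tweets: Source[Tweet, Unit] =
  Source(List(
    Tweet(Author("@jane"), Set(akka)),
    Tweet(Author("@bob"), Set(Hashtag("#scala")))
  ))
```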

Furthermore, after examining the Source object's apply methods, I want to 
know:

   1. Should a Source be created every time a new object is deserialized 
   from a particular queue?
   2. A Kafka consumer that I looked at extends ActorPublisher. Does that 
   mean that I have to wrap an actor around the client for the specific 
   queue in order to be able to create a source from it?

Thank you,
Alex

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] ConsistentHash Router - Determine destination node for message based on current cluster configuration

2015-07-16 Thread Henning Els
I make use of a cluster configuration with consistent hashing routing.

One of my actors represents a physical resource, and messages from the 
resource to the actor are routed using consistent hashing.  Thanks to this 
configuration, the actor is highly available.

The question that now arises is when this actor needs to listen for events 
in the cluster using the DistributedPubSubMediator.  Given that this actor 
might've traveled between nodes (based on cluster changes) I may have one 
of these running on multiple machines.  I'd like only the active actor, 
i.e. the actor to which messages would currently be forwarded, to react 
to the events.

Is there a way that this actor can check if it is the consistent hash 
target, without sending a message to the router and allowing the router to 
route the message?  I.e. I'm hoping to find a library call that I can pass 
the message to, and have it return the node to which it would be 
forwarded.
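One hedged possibility (a sketch, not a verified recipe): akka-actor exposes the ring implementation the routers use as `akka.routing.ConsistentHash`, so you could build a ring from your current routee addresses and query it. `currentRouteeAddresses` and `hashKeyOf` below are hypothetical stand-ins, and keeping this ring in sync with the router's actual view during membership changes is exactly the hard part:

```scala
import akka.actor.Address
import akka.routing.ConsistentHash

// Hypothetical: the addresses currently hosting routees (derived from
// cluster membership) and the same virtualNodesFactor the router uses.
def activeTarget(currentRouteeAddresses: Iterable[Address],
                 virtualNodesFactor: Int,
                 hashKeyOf: Any => String,
                 message: Any): Address = {
  val ring = ConsistentHash(currentRouteeAddresses, virtualNodesFactor)
  // nodeFor returns the ring node the key maps to; compare it with
  // Cluster(system).selfAddress to decide if this instance is "active".
  ring.nodeFor(hashKeyOf(message))
}
```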

This is currently done in Java, although any advice will be helpful.

Thanks in advance.



Re: [akka-user] Re: Connecting over Https using Akka-Http client

2015-07-16 Thread 'Johannes Rudolph' via Akka User List
Hi Jeroen,

On Thu, Jul 16, 2015 at 4:35 PM, Jeroen Rosenberg
jeroen.rosenb...@gmail.com wrote:
 def gunzip(bytes: Array[Byte]) = {
   val output = new ByteArrayOutputStream()
   FileUtils.copyAll(new GZIPInputStream(new ByteArrayInputStream(bytes)),
     output)
   output.toString
 }

This creates a new GZIPInputStream for every chunk (incidentally: how
the chunks are cut is not under your control). However, gzip
compression is stateful, and creating a new GZIPInputStream per chunk
discards that state every time. Therefore, it cannot work as simply as
this.

The `Gzip.decoderFlow`, in contrast, keeps this state between chunks.
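This point can be reproduced with the JDK's own gzip classes, no Akka required; a small self-contained sketch (the names are mine):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import java.util.zip.{GZIPInputStream, GZIPOutputStream}

object GzipChunkDemo {
  // Returns (freshStreamPerChunkFails, singleStreamSucceeds).
  def run(): (Boolean, Boolean) = {
    // Compress one payload, then cut the compressed bytes into two chunks,
    // the way a network stream might deliver them.
    val payload = Array.fill(1000)('a'.toByte)
    val buf = new ByteArrayOutputStream()
    val gz = new GZIPOutputStream(buf)
    gz.write(payload)
    gz.close()
    val (chunk1, chunk2) = buf.toByteArray.splitAt(buf.size() / 2)

    // A fresh GZIPInputStream per chunk cannot work: the second chunk
    // starts mid-stream and carries no gzip header.
    val freshStreamPerChunkFails =
      try {
        new GZIPInputStream(new ByteArrayInputStream(chunk2)).read()
        false
      } catch { case _: java.io.IOException => true }

    // A single stream over the concatenated chunks (decompression state kept
    // between chunks, which is the decoderFlow approach) succeeds.
    val whole = new GZIPInputStream(new ByteArrayInputStream(chunk1 ++ chunk2))
    val out = new ByteArrayOutputStream()
    val tmp = new Array[Byte](256)
    Iterator.continually(whole.read(tmp)).takeWhile(_ != -1)
      .foreach(n => out.write(tmp, 0, n))

    (freshStreamPerChunkFails, out.toByteArray.sameElements(payload))
  }

  def main(args: Array[String]): Unit = println(run()) // (true,true)
}
```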

-- 
Johannes

---
Johannes Rudolph
http://virtual-void.net



Re: [akka-user] Akka HTTP (Scala) 1.0-RC[12]: Processor actor terminated abruptly

2015-07-16 Thread Jakub Liska
Samuel, how did you manage to enable this logging:

[DEBUG] [04/30/2015 22:36:01.921] [default-akka.actor.default-dispatcher-8] [akka://default/system/deadLetterListener] stopped
[DEBUG] [04/30/2015 22:36:01.922] [default-akka.actor.default-dispatcher-5] [akka://default/user/$a/flow-2-3-publisherSource-processor-mapConcat] stopped
[DEBUG] [04/30/2015 22:36:01.922] [default-akka.actor.default-dispatcher-6] [akka://default/user/$a/flow-2-9-publisherSource-PoolConductor.retryMerge-flexiMerge-PoolConductor.retryMerge-flexiMerge] stopped
[DEBUG] [04/30/2015 22:36:01.922] [default-akka.actor.default-dispatcher-5] [akka://default/user/$a/flow-2-2-publisherSource-processor-PoolSlot.SlotEventSplit-flexiRoute] stopped
[DEBUG] [04/30/2015 22:36:01.923] [default-akka.actor.default-dispatcher-12] [akka://default/user/$a/flow-2-4-publisherSource-Merge] stopped
[DEBUG] [04/30/2015 22:36:01.923] [default-akka.actor.default-dispatcher-14] [akka://default/user/$a/flow-2-1-publisherSource-Merge] stopped
[DEBUG] [04/30/2015 22:36:01.923] [default-akka.actor.default-dispatcher-10] [akka://default/user/$a/flow-2-11-publisherSource-PoolConductor.retryMerge-flexiMerge-PoolConductor.RetrySplit-flexiRoute] stopped
[DEBUG] [04/30/2015 22:36:01.923] [default-akka.actor.default-dispatcher-12] [akka://default/user/$a/flow-2-12-publisherSource-processor-PoolSlot.SlotEventSplit-flexiRoute] stopped

I have the akka loglevel set to DEBUG, and logback too, but these messages 
are not logged. Thanks!



[akka-user] Akka Cluster ⇒ AssociationError Error [Invalid address]

2015-07-16 Thread Eugene Dzhurinsky
Hello!

Recently I updated Akka from 2.3.9 to 2.3.11, and for some reason my 
cluster started to fall apart. From time to time I'm getting errors like 
this:

INFO   | jvm 1 | 2015/07/16 11:45:39 | 2015-07-16 16:45:39,369 ERROR [EndpointWriter] AssociationError [akka.tcp://HttpCluster@192.168.0.203:2551] -> [akka.tcp://HttpCluster@192.168.0.200:2551]: Error [Invalid address: akka.tcp://HttpCluster@192.168.0.200:2551]
INFO   | jvm 1 | 2015/07/16 11:45:39 | akka.remote.InvalidAssociation: Invalid address: akka.tcp://HttpCluster@192.168.0.200:2551
INFO   | jvm 1 | 2015/07/16 11:45:39 | Caused by: akka.remote.transport.Transport$InvalidAssociationException: The remote system has quarantined this system. No further associations to the remote system are possible until this system is restarted.
INFO   | jvm 1 | 2015/07/16 11:45:40 | 2015-07-16 16:45:40,526 WARN [ReliableDeliverySupervisor] Association with remote system [akka.tcp://HttpCluster@192.168.0.202:2551] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
INFO   | jvm 1 | 2015/07/16 11:45:40 | 2015-07-16 16:45:40,543 WARN [EndpointWriter] AssociationError [akka.tcp://HttpCluster@192.168.0.203:2551] -> [akka.tcp://HttpCluster@192.168.0.204:2551]: Error [Invalid address: akka.tcp://HttpCluster@192.168.0.204:2551]

I don't see any suspicious activities in logs, like connection reset or 
some other network errors, it just fails. The cluster-specific 
configuration looks like below:

cluster {
auto-down-unreachable-after = 10s

failure-detector {
  threshold = 10
  heartbeat-interval = 10s
  acceptable-heartbeat-pause = 30 s
}

role {
  scheduler.min-nr-of-members = 1
  chunk.min-nr-of-members = 1
  http.min-nr-of-members = 1
}

}



Can somebody please advise how I can troubleshoot this problem? Or at least, 
how can I intercept that cluster error and *restart* the cluster node that 
failed?

Thank you!
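Not an answer to the root cause, but for the restart part, one hedged option (a sketch against Akka 2.3 APIs, untested here) is to watch cluster membership events and shut the ActorSystem down once this node has been removed, letting an external process supervisor restart the JVM:

```scala
import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.MemberRemoved

class ClusterWatcher extends Actor {
  val cluster = Cluster(context.system)

  override def preStart(): Unit =
    cluster.subscribe(self, classOf[MemberRemoved])

  override def postStop(): Unit =
    cluster.unsubscribe(self)

  def receive = {
    case MemberRemoved(member, _) if member.address == cluster.selfAddress =>
      // We were downed/quarantined: shut down and let the process
      // supervisor (e.g. the java wrapper in the logs above) restart us.
      context.system.shutdown()
  }
}
```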



Re: [akka-user] Promises and message passing

2015-07-16 Thread Jeff
If everything is local and no serialization is required, would it be worth 
it? The documentation on the website indicates that Ask has a 
performance cost associated with it.
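Stripped of Akka entirely, the idea under discussion is just "complete a Promise from another thread", which is what makes handing one to a *local* actor work. A self-contained sketch of the mechanism using plain scala.concurrent (no actors; the thread stands in for the actor that would complete it):

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

// A Promise can be completed from any thread; the sender simply
// awaits (or maps over) promise.future.
val promise = Promise[Int]()

val worker = new Thread(new Runnable {
  // Stand-in for the actor calling success/failure on the promise.
  def run(): Unit = promise.success(42)
})
worker.start()

val result = Await.result(promise.future, 1.second)
println(result) // prints 42
```

The caveat from the thread still applies: this only composes if the promise never needs to cross a JVM boundary, since a Promise is not serializable.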

On Wednesday, July 15, 2015 at 8:02:57 PM UTC-7, √ wrote:

 Because you can't serialize a promise (if the actor decides to send the 
 message to another node)

 On Thu, Jul 16, 2015 at 4:51 AM, Jeff jknigh...@gmail.com javascript: 
 wrote:

 Is there any reason why I wouldn't want to pass a Promise to a (local) 
 ActorRef to be resolved? I know the Ask pattern exists, but I would like to 
 avoid the cost of having to allocate PromiseActorRefs.

 Thanks
 Jeff





 -- 
 Cheers,
 √
  



[akka-user] Akka Cluster Performance

2015-07-16 Thread Jim Hazen
Akka clustering is built on top of akka remoting. The default serialization 
used by remoting is terribly slow. Look into swapping it out with a third party 
serializer. There are some linked/mentioned in the remoting docs. 

Please post any results if a serializer change helps you. I know in my project 
a 200k/sec spray http server gets 5k/s out of the default remoting layer.  
Fortunately 5k is good enough for that service, so I haven't messed with the 
serializer. Others have had good luck and Kryo seems popular. 
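Swapping the serializer is a configuration change. One possible shape, assuming the akka-kryo-serialization (romix) library is on the classpath; the serializer class name is that library's and `com.example.MyMessage` is a made-up placeholder for your own message type, so check the library's docs before copying:

```hocon
akka {
  actor {
    serializers {
      # Assumes akka-kryo-serialization is a dependency; adjust the
      # class name to whichever serializer you choose.
      kryo = "com.romix.akka.serialization.kryo.KryoSerializer"
    }
    serialization-bindings {
      # Route your own message types through it instead of the
      # default Java serialization.
      "com.example.MyMessage" = kryo
    }
  }
}
```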



Re: [akka-user] Promises and message passing

2015-07-16 Thread Viktor Klang
Jeff,

On Thu, Jul 16, 2015 at 8:38 PM, Jeff jknight12...@gmail.com wrote:

 If everything is local and no serialization is required, would it be worth
 it? The documentation on the website indicates that Ask has a
 performance cost associated with it.


Regrettably, my crystal ball is still in the repair shop so you'd have to
measure according to your requirements. :)
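In that spirit, a crude, Akka-free harness for that kind of measurement might look like the sketch below (a real benchmark would use something like JMH; thread-per-round-trip here is deliberately simplistic and only illustrates the shape of the measurement):

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

// Times n promise round-trips, each completed from a freshly spawned
// thread, and returns the elapsed wall-clock time in nanoseconds.
def timeRoundTrips(n: Int): Long = {
  val start = System.nanoTime()
  var i = 0
  while (i < n) {
    val p = Promise[Unit]()
    val t = new Thread(new Runnable { def run(): Unit = p.success(()) })
    t.start()
    Await.result(p.future, 10.seconds)
    i += 1
  }
  System.nanoTime() - start
}

val elapsedNanos = timeRoundTrips(100)
println(elapsedNanos > 0) // prints true
```

Comparing this number against the same loop using `ask` on a real actor, on your hardware and message sizes, is the only way to know whether the PromiseActorRef allocation matters for you.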




 On Wednesday, July 15, 2015 at 8:02:57 PM UTC-7, √ wrote:

 Because you can't serialize a promise (if the actor decides to send the
 message to another node)

 On Thu, Jul 16, 2015 at 4:51 AM, Jeff jknigh...@gmail.com wrote:

 Is there any reason why I wouldn't want to pass a Promise to a (local)
 ActorRef to be resolved? I know the Ask pattern exists, but I would like to
 avoid the cost of having to allocate PromiseActorRefs.

 Thanks
 Jeff





 --
 Cheers,
 √





-- 
Cheers,
√



[akka-user] Removing persistence events and snapshots

2015-07-16 Thread Christopher Oman
I am struggling somewhat with the documentation on sequence numbers and 
snapshot sequence numbers. My use case is that I have persistent actors that 
represent user sessions. When a user logs out (or the session times out), 
there is no longer a need to keep the persistent events and snapshots 
around. What pattern would I use to clean up all of the cruft?
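One common pattern is for the actor to delete its own journal entries and snapshots when the session ends, then stop itself. A hedged sketch against the Akka 2.3.x persistence API; `SessionEnded` and `SessionActor` are made-up names for illustration:

```scala
import akka.persistence.{PersistentActor, SnapshotSelectionCriteria}

// Hypothetical command signalling the end of a user session.
case object SessionEnded

class SessionActor extends PersistentActor {
  override def persistenceId: String = "session-" + self.path.name

  override def receiveCommand: Receive = {
    case SessionEnded =>
      // Remove all journaled events up to the latest sequence number...
      deleteMessages(lastSequenceNr)
      // ...and every snapshot taken so far.
      deleteSnapshots(SnapshotSelectionCriteria(maxSequenceNr = lastSequenceNr))
      context.stop(self)
    case _ => // handle session commands, persist events, etc.
  }

  override def receiveRecover: Receive = {
    case _ => // replayed events would be applied here
  }
}
```

Note that deletion support varies by journal plugin, so verify that the backend you use actually honors `deleteMessages`/`deleteSnapshots`.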

Thanks,
Chris



Re: [akka-user] Akka HTTP (Scala) 1.0-RC[12]: Processor actor terminated abruptly

2015-07-16 Thread Chad Retz
Probably via withDebugLogging on the materializer settings, e.g. 
ActorMaterializer(ActorMaterializerSettings(system).withDebugLogging(true))

On Thursday, July 16, 2015 at 11:28:28 AM UTC-5, Jakub Liska wrote:

 Samuel how did you manage to enable this logging : 

 [DEBUG] [04/30/2015 22:36:01.921] [default-akka.actor.default-dispatcher-8] [akka://default/system/deadLetterListener] stopped
 [DEBUG] [04/30/2015 22:36:01.922] [default-akka.actor.default-dispatcher-5] [akka://default/user/$a/flow-2-3-publisherSource-processor-mapConcat] stopped
 [DEBUG] [04/30/2015 22:36:01.922] [default-akka.actor.default-dispatcher-6] [akka://default/user/$a/flow-2-9-publisherSource-PoolConductor.retryMerge-flexiMerge-PoolConductor.retryMerge-flexiMerge] stopped
 [DEBUG] [04/30/2015 22:36:01.922] [default-akka.actor.default-dispatcher-5] [akka://default/user/$a/flow-2-2-publisherSource-processor-PoolSlot.SlotEventSplit-flexiRoute] stopped
 [DEBUG] [04/30/2015 22:36:01.923] [default-akka.actor.default-dispatcher-12] [akka://default/user/$a/flow-2-4-publisherSource-Merge] stopped
 [DEBUG] [04/30/2015 22:36:01.923] [default-akka.actor.default-dispatcher-14] [akka://default/user/$a/flow-2-1-publisherSource-Merge] stopped
 [DEBUG] [04/30/2015 22:36:01.923] [default-akka.actor.default-dispatcher-10] [akka://default/user/$a/flow-2-11-publisherSource-PoolConductor.retryMerge-flexiMerge-PoolConductor.RetrySplit-flexiRoute] stopped
 [DEBUG] [04/30/2015 22:36:01.923] [default-akka.actor.default-dispatcher-12] [akka://default/user/$a/flow-2-12-publisherSource-processor-PoolSlot.SlotEventSplit-flexiRoute] stopped

 I have the akka loglevel set to DEBUG, and logback too, but these messages are 
 not logged. Thanks!
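Those "... stopped" lines are standard actor lifecycle debug messages, so besides the materializer setting, the lifecycle debug switch has to be on as well. An illustrative application.conf fragment (assuming DEBUG is also enabled on the logback side):

```hocon
akka {
  loglevel = "DEBUG"
  actor {
    debug {
      # Log actor lifecycle changes (started/stopped/restarted),
      # which is where the "... stopped" lines come from.
      lifecycle = on
    }
  }
}
```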

