Re: [akka-user][deprecated] Re: [akka-user] Announcing discuss.akka.io!

2018-03-16 Thread Eric Swenson
Konrad,

Will new topics continue to be pushed to the mailing list as they are 
published on the discuss.akka.io web site? I have avidly followed the 
mailing list and use it to learn of new developments and read up on topics 
of interest to me. I relied on the "push" nature of the mailing list to let 
me know about topics of interest. Having to check a web site (and remembering 
to do so) means, for me, that I would no longer be able to keep up to date 
with Akka developments. It just requires too much effort to "follow" 
changes on a web site.

If the mailing list will remain as a means of pushing notifications of all 
the threads on discuss.akka.io, and the only change is that we cannot reply 
to the mailing list, I'm perfectly happy with that. As long as I can click 
on a link on a mailing list digest entry and end up at the right place on 
discuss.akka.io so that I can contribute and/or read the whole discussion, 
that's great.

Thanks. -- Eric

-- 
*
** New discussion forum: https://discuss.akka.io/ replacing akka-user 
google-group soon.
** This group will soon be put into read-only mode, and replaced by 
discuss.akka.io
** More details: https://akka.io/blog/news/2018/03/13/discuss.akka.io-announced
*
>> 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user][deprecated] Re: [akka-user] Announcing discuss.akka.io!

2018-03-16 Thread Eric Swenson
Thanks, Patrik. Exactly what I needed. -- Eric

 

From:  on behalf of Patrik Nordwall 

Reply-To: 
Date: Friday, March 16, 2018 at 14:57
To: 
Subject: Re: [akka-user][deprecated] Re: [akka-user] Announcing discuss.akka.io!

 

Hi Eric,

You can set up email notifications as described in 
https://discuss.lightbend.com/t/how-to-use-the-discuss-forum-as-a-mailing-list/61

/Patrik






[akka-user][deprecated] Re: Memory Leak : akka.actor.RepointableActorRef

2019-01-02 Thread Eric Swenson
Hi Andrzej,

Yes, it was fixed in a later release of akka-http.  Johannes (from 
Lightbend) asked me to upload a memory dump, which I did, and he (or 
someone else at Lightbend) made a fix to akka-http.  They entered an issue 
for the bug:  https://github.com/akka/akka-http/issues/851.  And I believe 
it was merged in this PR:  https://github.com/akka/akka-http/pull/852.  
10.0.4 of akka-http included a fix for the issue.  -- Eric

Yes, I believe Johannes or someone else from the Lightbend team found an 
issue, made a fix, and recommended I retest my scenario with a fixed 
version of akka-http.  

On Wednesday, January 2, 2019 at 7:11:11 AM UTC-8, and...@tipser.com wrote:
>
> Hi Eric,
> we are having the same issue. Did you finally debug/resolve it? 
>
> Many thanks,
> Andrzej
>  
>
> On Monday, February 6, 2017 at 9:15:11 PM UTC+1, Eric Swenson wrote:
>>
>> Hi Johannes,
>>
>> Yes, it appears reproducible. I'm trying to narrow down whether this 
>> started happening when I upgraded to 10.0.2 or 10.0.3 of akka-http. When I 
>> learn this, I'll let you know.  
>>
>> I can share the memory dump with you. Could you write me via private 
>> email with the best way to get it to you?  -- Eric
>>
>> On Friday, February 3, 2017 at 3:02:19 PM UTC-8, Johannes Rudolph wrote:
>>>
>>> Hi Eric,
>>>
>>> we'd like to look into that. It looks as if a streams materializer is 
>>> holding on to some memory, but we need more info to see what exactly it 
>>> is holding on to.
>>>
>>> Is this a reproducible scenario? Could you share the memory dump (in 
>>> private) with us? Otherwise, could you send the list of top consumers (by 
>>> numbers and / or bytes) as seen in MAT?
>>>
>>> Thanks,
>>> Johannes
>>>
>>> On Friday, February 3, 2017 at 2:56:04 PM UTC-7, Eric Swenson wrote:
>>>>
>>>> I have an akka-http/akka-streams application that I’ve recently 
>>>> upgraded to 10.0.3.  After handling many requests, it runs out of memory. 
>>>>  Using the Eclipse MAT, I see this message:
>>>>
>>>> One instance of "akka.actor.RepointableActorRef" loaded by 
>>>> "sun.misc.Launcher$AppClassLoader @ 0x8b58" occupies 1,815,795,896 (98.53%) 
>>>> bytes. The memory is accumulated in one instance of 
>>>> "scala.collection.immutable.RedBlackTree$BlackTree" loaded by 
>>>> "sun.misc.Launcher$AppClassLoader @ 0x8b58".
>>>>
>>>> Keywords
>>>> akka.actor.RepointableActorRef
>>>> scala.collection.immutable.RedBlackTree$BlackTree
>>>> sun.misc.Launcher$AppClassLoader @ 0x8b58
>>>>
>>>> Does this ring any bells?  How might I track down what is causing this? 
>>>>  
>>>>
>>>> I *believe* that I’ve stressed this service before (in a similar way) 
>>>> and NOT seen this failure. I think I was running 10.0.1 and upgraded to 
>>>> 10.0.2 and then 10.0.3 before running the test again.
>>>>
>>>> I don’t really know much about MAT (first time user), but I believe 
>>>> what the “Shortest Paths To The Accumulation Point” report is telling me 
>>>> that akka.stream.impl.ActorMaterializerImpl is what is creating the 
>>>> RepointableActorRef.  I am using akka-streams and I am passing blocks of 
>>>> data of 1MB size through the streams.  But as far as I know, I shouldn’t 
>>>> be 
>>>> accumulating them.  Also, I have successfully run a test of this magnitude 
>>>> before without running out of memory.  
>>>>
>>>> Any suggestions?
>>>>
>>>> — Eric
>>>>
>>>>
>>>>
>>>>



[akka-user] Marshalling FormData to multi-part mime with one field per part

2015-10-11 Thread Eric Swenson
An AWS/S3 HTTP POST with a policy requires a multipart-MIME request -- 
one part for the file and one part each for the various form parameters. When 
I use akka.http and attempt to marshal FormData to an Entity and use that 
to create a multi-part MIME message, all the form data goes into one 
MIME part using the query-string format (foo=bar&baz=quux). AWS rejects 
such a request as it doesn't want all form parameters in a single 
www-url-encoded-form-data part, but rather a separate MIME part for each 
parameter. How do I cause a Form to be marshalled in this way using 
akka.http?



Re: [akka-user] Marshalling FormData to multi-part mime with one field per part

2015-10-11 Thread Eric Swenson
Hey Scott!

When you get a chance, if you can point me to the right one, I'd appreciate 
it. I've been looking at the sources, tests, and examples, and I can't find 
how to do it.  What I'm after is a way to take a Form, and emit a 
multi-part MIME message with one part for each field in the form and one 
part for the single file I need to upload. Using code like this:

val formBodyPartFuture = for {
  httpEntity   <- Marshal(form).to[HttpEntity]
  strictEntity <- httpEntity.toStrict(timeout)
} yield Multipart.FormData.BodyPart("foo", strictEntity)


gives me one body part with Content-Type x-www-url-encoded, and the set of 
all fields is encoded as in a query string. If you can point me to the 
"other" way of marshalling a form to a set of MIME body parts, I'd 
appreciate it. I had pretty much given up (googling and searching Stack 
Overflow have turned up nothing) and was going to generate the parts 
manually. I'm sure there must be a correct way to do this. Thanks. -- Eric




On Sunday, October 11, 2015 at 9:43:20 PM UTC-7, Scott Maher wrote:
>
> I can't test this as I am not at home, but there are multiple form 
> marshallers: one for URL encoding and one for multipart called, I think, 
> Multipart.FormData. Sorry if you already know this.
>
> Hi Eric! :P



Re: [akka-user] Marshalling FormData to multi-part mime with one field per part

2015-10-13 Thread Eric Swenson
I have not been able to find any way to marshal a set of HTTP POST form 
parameters AND a file using akka-http, where the form data parameters are 
NOT www-url-encoded, but rather each placed in a separate MIME part, with 
no Content-Type header. This appears to be the format required in order to 
do an HTTP POST to AWS S3 with a policy. Furthermore, it appears AWS/S3 is 
not happy with chunked HTTP POST requests, and therefore I've had to use 
toStrict on all my entities. I've resorted to some hacky code, which 
works, but I find it surprising that there is no built-in way to generate a 
multi-part MIME message where POST parameters are not combined into one 
www-url-encoded query string.  

Here is my (admittedly ugly) code:

val m = Map(
  "key" -> "upload/quux.txt",
  "acl" -> acl,
  "policy" -> policy,
  "X-amz-algorithm" -> "AWS4-HMAC-SHA256",
  "X-amz-credential" -> s"$awsAccessKeyId/$shortDate/$region/$service/aws4_request",
  "X-amz-date" -> date,
  "X-amz-expires" -> expires.toString,
  "X-amz-signature" -> signature
)

implicit val timeout: FiniteDuration = 10.seconds

val uri = s"http://$bucket.$service-$region.amazonaws.com"

// create request entities for each of the POST parameters. Note that generating
// Strict entities is an asynchronous operation (returns a future). Strict entities
// are NOT chunked. It appears AWS is not happy with chunked MIME body parts.
val strictFormDataEntityFuture = m.map { case (k, v) =>
  (k, HttpEntity(ContentTypes.NoContentType, v).toStrict(timeout))
}

val responseFuture = for {
  strictEntityMap <- Future.sequence(strictFormDataEntityFuture.map(entry =>
    entry._2.map(i => (entry._1, i)))).map(_.toMap) recoverWith { case t: Throwable => throw t }

  formDataBodyPartList = strictEntityMap.map { e =>
    Multipart.FormData.BodyPart.Strict(e._1, e._2)
  }.toList

  fileEntity <- HttpEntity(ContentTypes.`application/octet-stream`, 3,
    Source.single(ByteString("foo"))).toStrict(timeout) recoverWith { case t: Throwable => throw t }

  multipartForm = Multipart.FormData(Source(formDataBodyPartList ++
    List(Multipart.FormData.BodyPart("file", fileEntity, Map("filename" -> "foo.bin")))))

  requestEntity <- Marshal(multipartForm).to[RequestEntity] recoverWith { case t: Throwable => throw t }

  strictRequestEntity <- requestEntity.toStrict(timeout) recoverWith { case t: Throwable => throw t }

  response <- Http().singleRequest(HttpRequest(method = HttpMethods.POST,
    uri = uri, entity = strictRequestEntity)) recoverWith { case t: Throwable => throw t }

  responseEntity <- Unmarshal(response.entity).to[String] recoverWith { case t: Throwable => throw t }
} yield (responseEntity, response)

If anyone can suggest a better approach, I'd appreciate it.  Thanks. 
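
For comparison, here is a minimal sketch of a somewhat tidier variant of the 
same idea, assuming akka-http's Multipart.FormData.Strict and HttpEntity.Strict 
APIs; the field values, bucket URI and file contents below are placeholders, 
not taken from the original post:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.stream.ActorMaterializer
import akka.util.ByteString
import scala.concurrent.Future

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

// Placeholder policy fields; the real values come from the signing step.
val fields = Map(
  "key"             -> "upload/quux.txt",
  "policy"          -> "...",
  "X-amz-signature" -> "..."
)

// One strict, Content-Type-less part per form field, so AWS does not treat
// the fields as uploaded files.
val fieldParts = fields.map { case (name, value) =>
  Multipart.FormData.BodyPart.Strict(
    name, HttpEntity.Strict(ContentTypes.NoContentType, ByteString(value)))
}.toVector

// The file part carries a content type and a filename disposition parameter.
val filePart = Multipart.FormData.BodyPart.Strict(
  "file",
  HttpEntity.Strict(ContentTypes.`application/octet-stream`, ByteString("foo")),
  Map("filename" -> "foo.bin"))

// Building from Strict parts yields a Strict (un-chunked) multipart entity,
// so no toStrict futures are needed for the form fields.
val entity = Multipart.FormData(fieldParts :+ filePart: _*).toEntity()

val response: Future[HttpResponse] =
  Http().singleRequest(
    HttpRequest(HttpMethods.POST, uri = "http://my-bucket.s3.amazonaws.com/", entity = entity))

Constructing the parts directly from HttpEntity.Strict avoids the futures for 
the individual form fields entirely; only the request itself remains asynchronous.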


Re: [akka-user] Marshalling FormData to multi-part mime with one field per part

2015-10-14 Thread Eric Swenson
Hi Mathias,

I think you are suggesting pretty much exactly what I did, but I did it in such 
a way that I didn’t have to make each body part (corresponding to a form 
parameter) manually (I used mapping).  

However, AWS/S3 will not accept the Content-Type: text/plain in the body parts. 
It thinks the request is trying to “upload” multiple files.  I had to use 
ContentTypes.NoContentType. 

See below for my approach (where m is a map of the parameter/value pairs):

val strictFormDataEntityFuture = m.map { case (k, v) =>
  (k, HttpEntity(ContentTypes.NoContentType, v).toStrict(timeout))
}

That produces a Map[String, Future[HttpEntity.Strict]], which is then converted 
to a Future[Map[String, HttpEntity.Strict]] with:

for {
  strictEntityMap <- Future.sequence(strictFormDataEntityFuture.map(entry =>
    entry._2.map(i => (entry._1, i)))).map(_.toMap)
…

AWS/S3 doesn’t want the form-data parts, nor the whole HTTP POST, to be chunked, 
so those entities have to be made “strict”. I had to make each MIME part strict, 
as well as the final Multipart.FormData entity.

The point of my original post was that it seems the akka-http library has made 
a policy decision that the only way to convert a FormData to an entity (in a 
multi-part MIME body) is to create a single part, using www-url-encoding, and 
include all fields in query-string syntax. This doesn’t work for AWS and 
therefore it seems a limiting policy decision. It should be equally 
possible/easy to convert a FormData to a collection of parts. The requirement 
that there be a Content-Type header in these parts also should not be 
dictated by the framework.

— Eric


> On Oct 14, 2015, at 5:21 AM, Mathias  wrote:
> 
> Eric,
> 
> you can create a multipart/formdata request like this, for example:
> 
> val formData = {
>   def utf8TextEntity(content: String) = {
>     val bytes = ByteString(content)
>     HttpEntity.Default(ContentTypes.`text/plain(UTF-8)`, bytes.length, Source.single(bytes))
>   }
>
>   import Multipart._
>   FormData(Source(
>     FormData.BodyPart("foo", utf8TextEntity("FOO")) ::
>     FormData.BodyPart("bar", utf8TextEntity("BAR")) :: Nil))
> }
> 
> val request = HttpRequest(POST, entity = formData.toEntity())
> 
> This will render a chunked request that looks something like this:
> 
> POST / HTTP/1.1
> Host: example.com
> User-Agent: akka-http/test
> Transfer-Encoding: chunked
> Content-Type: multipart/form-data; boundary=o5bjIygPrPrCCYUp8FyfvsQe
> 
> 71
> --o5bjIygPrPrCCYUp8FyfvsQe
> Content-Type: text/plain; charset=UTF-8
> Content-Disposition: form-data; name=foo
> 
> 
> 3
> FOO
> 73
> 
> --o5bjIygPrPrCCYUp8FyfvsQe
> Content-Type: text/plain; charset=UTF-8
> Content-Disposition: form-data; name=bar
> 
> 
> 3
> BAR
> 1e
> 
> --o5bjIygPrPrCCYUp8FyfvsQe--
> 0
> 
> If you want to force the rendering of an unchunked request you can call 
> `toStrict` on the `formData` first.
> This would create a request looking something like this:
> 
> POST / HTTP/1.1
> Host: example.com
> User-Agent: akka-http/test
> Content-Type: multipart/form-data; boundary=1QipXPrd9L25eJwcQYgiNbfA
> Content-Length: 264
> 
> --1QipXPrd9L25eJwcQYgiNbfA
> Content-Type: text/plain; charset=UTF-8
> Content-Disposition: form-data; name=foo
> 
> FOO
> --1QipXPrd9L25eJwcQYgiNbfA
> Content-Type: text/plain; charset=UTF-8
> Content-Disposition: form-data; name=bar
> 
> BAR
> --1QipXPrd9L25eJwcQYgiNbfA--
> 
> There is no www-url-encoding happening anywhere.
> 
> Cheers,
> Mathias
> 

Re: [akka-user] Marshalling FormData to multi-part mime with one field per part

2015-10-15 Thread Eric Swenson
On Thursday, October 15, 2015 at 12:24:48 AM UTC-7, Johannes Rudolph wrote:
>
> Have you seen/tried 
> http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html? It 
> seems to suggest that chunked transfer-encoding is supported, though in a 
> bit complicated format where on top of chunked transfer-encoding AWS seems 
> to introduce their own chunked content-encoding layer where each chunk is 
> signed separately.
>

Yes, but I don't know if this works for S3 HTTP POSTs, and I don't need 
each chunk separately signed.  

>  
>
>> The point of my original post was that it seems the akka-http library has 
>> made a policy decision
>>
>
> WDYM? akka-http supports both `FormData` = www-url-encoding and 
> `multipart/formdata` encodings, so, I wonder which "policy decision" you 
> mean?
>

Perhaps I just couldn't find out how to do it (except very manually, as you 
saw in my code sample). You can easily create a single HTTP entity via 
Marshal(formData).to[HttpEntity], where that entity will encode all form 
parameters/values as a query string. How do you, using existing 
marshallers, create a List[HttpEntity] given a FormData object? The bias is 
that there is no direct way to do this. I can clearly map over all the 
fields in the FormData myself and marshal each name/value pair to an 
HttpEntity, but by default doing so uses ContentType.`text/plain`. You 
need your own custom logic to force the content type to NoContentType, 
which is what we need in this case (see the sketch below). 
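
As an illustration of that mapping, a minimal sketch assuming akka-http's 
scaladsl model; the helper name is made up for this example:

import akka.http.scaladsl.model.{ ContentTypes, FormData, HttpEntity, Multipart }
import akka.util.ByteString

// Hypothetical helper: turn a url-encoded-style FormData into one strict,
// Content-Type-less body part per field, as AWS expects.
def toBodyParts(form: FormData): List[Multipart.FormData.BodyPart.Strict] =
  form.fields.map { case (name, value) =>
    Multipart.FormData.BodyPart.Strict(
      name, HttpEntity.Strict(ContentTypes.NoContentType, ByteString(value)))
  }.toList
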

>  
>
>> that the only way to convert a FormData to an entity (in a multi-part 
>> MIME body) is to create a single part, using www-url-encoding, and include 
>> all fields in query string syntax. This doesn’t work for AWS and therefore 
>> it seems a limiting policy decision.
>>
>
> Yes, because `scaladsl.model.FormData` models www-url-encoding and not 
> multipart/formdata. You seem to suggest that there should be a more general 
> encoding that would allow marshalling to either of both variants. In any 
> case, there's no policy decision but, as explained above, two different 
> models for two different kinds of modelling things.
>

That is the bias I'm talking about. FormData is biased toward 
www-url-encoding rather than multipart/formdata. Both should be provided. 

>
> It should be equally possible/easy to convert a FormData to a collection 
>> of parts. 
>>
>
> Yes, that could be useful, or otherwise a more general representation of 
> form data that unifies both kinds.
>  
>
>> The requirement that there should be a Content-Type header in these 
>> parts, also should not be dictated by the framework.
>>
>
>  I see that you have to fight a bit to get akka-http to render exactly 
> what you want, but the reason seems to be mostly that AWS has very specific 
> rules and interpretations of how to use HTTP. 
>

Actually, AWS is expecting an entity that is exactly what all browsers 
would generate when you include one or more files in an HTTP form.  All the 
form fields including the files go in separate multi-part mime parts. The 
files have a Content-Type header and the non-files don't. Now I don't have 
any issue with the non-strict bias in akka-http. That is the more general, 
and more async-friendly approach. So it doesn't bother me that you have to 
do something special to get the various entities to be strict. Here, an 
expression that maps over the body parts converting each from non-strict to 
strict and returning a Future[Multipart.FormData] (or whatever it would be) 
would be fine.

akka-http is a general purpose HTTP library and cannot foresee all possible 
> deviations or additional constraints HTTP APIs try to enforce. So, in the 
> end, akka-http should make it possible to connect to these APIs (as long as 
> they support HTTP to a large enough degree) but it may mean that the extra 
> complexity the API enforces lies in your code. You could see that as a 
> feature.
>

I don't disagree with anything you've said there. But there is a bias 
toward www-url-encoded form data, and it would be helpful if akka-http made 
it as easy to create Multipart.FormData entities. 

>
> That said, I'm pretty sure that there's some sugar missing in the APIs 
> that would help you build those requests more easily. If you can distill 
> the generic helpers that are missing from your particular use case we could 
> discuss how to improve things.
>
> Here are some things I can see:
>
>  * you can directly build a `Multipart.FormData.Strict` which can be 
> directly converted into a Strict entity, I guess one problem that we could 
> solve is that there's only a non-strict FormData.BodyPart.fromFile` which 
> builds a streamed part to prevent loading the complete file into memory. 
> There's no `FormData.BodyPart.fromFile` that would actually load the file 
> and create a strict version of it. We could add that (even if it wouldn't 
> be recommended to load files into memory...)
>  * having to run marshalling and unmarshalling manually could be

[akka-user] akka-http, websockets, and cluster sharding

2015-10-30 Thread Eric Swenson
I have a web service that, up until my addition of server-side web socket 
support, worked fine.

I use akka-http to handle HTTP requests, and a cluster sharding region to 
route messages (sourced from the HTTP request) to actors that handle a 
particular "job" (the job id is the id used for sharding).  The actors are 
using akka persistence to maintain state.  This all works fine.

I recently updated the http handling to accept web socket connections from 
clients. In the initial ws:// request, the above-mentioned "job id" is 
included in the URI. The web socket Flow (modeled after the one found in 
this article: 
 http://blog.scalac.io/2015/07/30/websockets-server-with-akka-http.html) 
routes incoming web socket messages to the "correct" actor (based on the 
akka cluster sharding, by using the sharding region as the target actor). 
 This also works fine.  When the web socket is connected, a message is sent 
to the appropriate actor providing the actor source (to respond to).   My 
cluster sharded actor updates its state to make note of that actor, so it 
can send messages to the websocket.  That is what doesn't work.  When the 
cluster sharded actor tries to send a message to the actorRef it received 
during websocket connection, I get akka errors:



background log: info: route: ws-connect

background log: info: route: 
experimentInstanceId=187785cd-0276-4f04-a1a4-12e3d4487b81

background log: info: created new connection: 
org.genecloud.eim.WebSocketConnection@4b496e66

background log: info: [WARN] [10/30/2015 11:23:48.830] 
[ClusterSystem-akka.remote.default-remote-dispatcher-6] 
[akka.cluster.ClusterActorRefProvider] Error while resolving address 
[akka://HttpSystem] due to [No transport is loaded for protocol: [akka], 
available protocols: [akka.tcp]]

background log: info: [INFO] [10/30/2015 11:23:48.831] 
[ClusterSystem-akka.actor.default-dispatcher-20] 
[ExperimentInstance(akka://ClusterSystem)] Web socket connected for 
experiment instance 187785cd-0276-4f04-a1a4-12e3d4487b81

background log: info: [INFO] [10/30/2015 11:23:48.842] 
[ClusterSystem-akka.actor.default-dispatcher-15] 
[akka://HttpSystem/user/$a/flow-28-7-publisherSource-stageFactory-stageFactory-bypassRouter-flexiRoute-actorRefSource-actorRefSource]
 
Message [org.genecloud.eim.WsMessage] from 
Actor[akka://ClusterSystem/system/sharding/ExperimentInstance/86/187785cd-0276-4f04-a1a4-12e3d4487b81#-126303652]
 
to 
Actor[akka://HttpSystem/user/$a/flow-28-7-publisherSource-stageFactory-stageFactory-bypassRouter-flexiRoute-actorRefSource-actorRefSource#1354295246]
 
was not delivered. [2] dead letters encountered. This logging can be turned 
off or adjusted with configuration settings 'akka.log-dead-letters' and 
'akka.log-dead-letters-during-shutdown'.

background log: info: txt=[187785cd-0276-4f04-a1a4-12e3d4487b81] message #1

background log: info: WsIncomingMessage: 
experimentInstanceId=187785cd-0276-4f04-a1a4-12e3d4487b81

background log: info: txt=[187785cd-0276-4f04-a1a4-12e3d4487b81] message #2

background log: info: WsIncomingMessage: 
experimentInstanceId=187785cd-0276-4f04-a1a4-12e3d4487b81

---


The first three log messages show the HTTP route accepting the web socket 
connection with path /ws-connect/:JobId. The WebSocketConnection object 
holds the flow. The fifth log message shows the cluster sharded actor 
logging a message that the web socket has been connected. The fourth log 
message is a mystery to me and is probably what causes the issue when the 
cluster sharded actor attempts to send a message back to the actor handling 
the web socket. The code that sends the message looks like this:

case WsConnected(experimentInstanceId: String, websocketActor: ActorRef) =>
  logger.info(s"Web socket connected for experiment instance $experimentInstanceId")
  persist(WebSocketConnectedEvent(websocketActor)) { evt =>
    state = state.updated(evt)
    websocketActor ! WsMessage("foo", "bar")
  }

The websocketActor that is supplied in the message came from the Flow, 
where the relevant portion is here in the WebSocketConnection class:

  // Materialized value of Actor
  val actorAsSource = builder.materializedValue.map(actor => WsConnected(experimentInstanceId, actor))


I'm not entirely sure where that actor comes from, but it appears that you 
can't send messages back to that actor from a potentially remote, clustered 
actor.  Is there some limitation here that I'm fighting with?


Note: I fully realize that I'm expecting a lot from akka here. In the 
system I'm building, I intend to have multiple akka-http nodes behind a 
load balancer. A user will therefore send HTTP requests to one of the many 
akka-http nodes, but he will end up establishing a web socket connection 
with exactly one of these nodes (at a time). Both HTTP and web socket 
messages will get routed to an appropriate worker actor based on a sharded 
id. That actor may be on some other node (not the one handl

Re: [akka-user] akka-http, websockets, and cluster sharding

2015-11-01 Thread Eric Swenson
Thank you, Patrik. That was, indeed, the problem. I had had earlier problems 
trying to get things working with a single actor system and that prompted me to 
move to two. I had no problems with the two actor systems until I added web 
sockets, and particularly when I tried to send messages from actors in the 
ClusterSystem to the HttpSystem actor Source for the web socket stream. That 
issue was caused by the local actor system. 

However, despite changing the HttpSystem to use RemoteActorRefProvider, and 
making sure that each actor system used distinct ports, things still didn't work. 
I ended up going back to a single actor system. I ran back into a non-functioning 
akka-http -- the same issue that had prompted me to use two. It turned out that 
that issue was due to not waiting until the cluster actor system was "ready" 
before calling Http().bindAndHandle(...).
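
For reference, a minimal sketch of that ordering, assuming a single 
cluster-enabled ActorSystem; the route and port here are placeholders:

import akka.actor.ActorSystem
import akka.cluster.Cluster
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer

implicit val system = ActorSystem("ClusterSystem")
implicit val materializer = ActorMaterializer()

val route = path("health") { complete("ok") } // placeholder route

// Defer binding the HTTP endpoint until this node has actually joined the
// cluster, so remoting and sharding are ready before requests arrive.
Cluster(system).registerOnMemberUp {
  Http().bindAndHandle(route, "0.0.0.0", 8080)
}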

In any case, all is working now, with the single cluster actor system.

-- Eric


> On Nov 1, 2015, at 01:16, Patrik Nordwall  wrote:
> 
> This is the interesting log entry:
> 
> ClusterSystem-akka.remote.default-remote-dispatcher-6] 
> [akka.cluster.ClusterActorRefProvider] Error while resolving address 
> [akka://HttpSystem] due to [No transport is loaded for protocol: [akka], 
> available protocols: [akka.tcp]]
> 
> Looks like you have one ActorSystem named HttpSystem that is using 
> LocalActorRefProvider. This can also happen if you share actor refs between 
> two actor systems in the same jvm in a wrong way.
> 
> I think it would be easiest if you don't use a separate HttpSystem, and 
> instead run those things in the ActorSystem named ClusterSystem.
> 
> /Patrik
>> fre 30 okt. 2015 kl. 19:57 skrev Eric Swenson :
>> I have a web service that up until my addition of server-side web socket 
>> support, worked fine.
>> 
>> I use akka-http to handle HTTP requests, and a cluster sharding region to 
>> route messages (sourced from the HTTP request) to actors that handle a 
>> particular "job" (the job id is the id used for sharding).  The actors are 
>> using akka persistence to maintain state.  This all works fine.
>> 
>> I recently updated the http handling to accept web socket connections from 
>> clients. In the initial ws:// request, the above-mentioned "job id" is 
>> included in the URI. The web socket Flow (modeled after the one found in 
>> this article:  
>> http://blog.scalac.io/2015/07/30/websockets-server-with-akka-http.html) 
>> routes incoming web socket messages to the "correct" actor (based on the 
>> akka cluster sharding, by using the sharding region as the target actor).  
>> This also works fine.  When the web socket is connected, a message is sent 
>> to the appropriate actor providing the actor source (to respond to).   My 
>> cluster sharded actor updates its state to make note of that actor, so it 
>> can send messages to the websocket.  That is what doesn't work.  When the 
>> cluster sharded actor tries to send a message to the actorRef it received 
>> during websocket connection, I get akka errors:
>> 
>> 
>> 
>> background log: info: route: ws-connect
>> 
>> background log: info: route: 
>> experimentInstanceId=187785cd-0276-4f04-a1a4-12e3d4487b81
>> 
>> background log: info: created new connection: 
>> org.genecloud.eim.WebSocketConnection@4b496e66
>> 
>> background log: info: [WARN] [10/30/2015 11:23:48.830] 
>> [ClusterSystem-akka.remote.default-remote-dispatcher-6] 
>> [akka.cluster.ClusterActorRefProvider] Error while resolving address 
>> [akka://HttpSystem] due to [No transport is loaded for protocol: [akka], 
>> available protocols: [akka.tcp]]
>> 
>> background log: info: [INFO] [10/30/2015 11:23:48.831] 
>> [ClusterSystem-akka.actor.default-dispatcher-20] 
>> [ExperimentInstance(akka://ClusterSystem)] Web socket connected for 
>> experiment instance 187785cd-0276-4f04-a1a4-12e3d4487b81
>> 
>> background log: info: [INFO] [10/30/2015 11:23:48.842] 
>> [ClusterSystem-akka.actor.default-dispatcher-15] 
>> [akka://HttpSystem/user/$a/flow-28-7-publisherSource-stageFactory-stageFactory-bypassRouter-flexiRoute-actorRefSource-actorRefSource]
>>  Message [org.genecloud.eim.WsMessage] from 
>> Actor[akka://ClusterSystem/system/sharding/ExperimentInstance/86/187785cd-0276-4f04-a1a4-12e3d4487b81#-126303652]
>>  to 
>> Actor[akka://HttpSystem/user/$a/flow-28-7-publisherSource-stageFactory-stageFactory-bypassRouter-flexiRoute-actorRefSource-actorRefSource#1354295246]
>>  was not delivered. [2] dead letters encountered. This logging can be turned 
>> off or adjusted with configuration settings 'akka.

[akka-user] akka clusters on AWS ECS

2016-02-01 Thread Eric Swenson
Are there any good solutions for deploying akka clustered applications in 
docker containers running under AWS/ECS?  While there are proposed 
solutions for AWS/EC2 with autoscaling, none of them targets AWS/ECS.  In a 
nutshell, the issue is this:

If your cluster-nodes are implemented as docker containers, and formalized 
as ECS tasks that are members of an ECS service (in an ECS cluster), then 
when they are started and stopped (by ECS) in response to load, their IP 
addresses are impossible to predict. It is not possible to configure a 
fixed set of seed nodes at known IP addresses unless these seed nodes are 
managed independently of ECS automatic management. And, of course, having 
fixed, known seed nodes becomes problematic if/when these nodes fail.  

I could roll my own solution using a key/value store like redis, where 
whenever a node comes up, it first publishes its IP addresses (and port, if 
not common) to the key/value store, and then queries that store to 
configure its seed nodes. Care would have to be taken to clean up old 
entries when nodes fail, of course. But it seems like I shouldn't be the 
first person to run into akka-cluster issues on AWS/ECS and that other 
solutions might be out there. Unfortunately, I've found none.

Any ideas?

-- Eric



Re: [akka-user] akka clusters on AWS ECS

2016-02-01 Thread Eric Swenson
Probably a better roll-my-own solution would involve querying AWS/ECS 
dynamically to discover the IP addresses of the containers running in the 
cluster, excluding the IP address of the node making the query, and setting the 
seed nodes to the remaining nodes. This way, the likelihood of getting the list 
of nodes out of sync with reality is minimal.  — Eric
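
A rough sketch of the joining side, assuming Akka's Cluster extension; 
discoverPeerIps is a hypothetical helper that would wrap the ECS/EC2 API calls 
(e.g. ListTasks/DescribeTasks in the AWS SDK) and is left unimplemented here:

import akka.actor.{ ActorSystem, Address }
import akka.cluster.Cluster

val system = ActorSystem("ClusterSystem")

// Hypothetical helper: query ECS for the private IPs of the other containers
// in this service, excluding our own address.
def discoverPeerIps(): Seq[String] = ???

val seedNodes = discoverPeerIps().map(ip => Address("akka.tcp", "ClusterSystem", ip, 2552))

if (seedNodes.isEmpty)
  Cluster(system).join(Cluster(system).selfAddress)   // first node bootstraps itself
else
  Cluster(system).joinSeedNodes(seedNodes.toList)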




Re: [akka-user] akka clusters on AWS ECS

2016-02-02 Thread Eric Swenson
Hi Odd,

Thanks. I've looked at constructr and see that it would work fine and is a 
general solution not tied to AWS/ECS. However, it does add more moving 
parts to a deployment and has the potential to get out of sync with the 
source of truth of the cluster definition -- ECS itself. I will, however, 
consider it, as there is some beauty in a general solution.  -- Eric

On Tuesday, February 2, 2016 at 1:15:17 AM UTC-8, Odd Möller wrote:
>
> Hi Eric, have you looked at https://github.com/hseeberger/constructr?
>
> Greetings
> Odd
>
> On Tue, Feb 2, 2016 at 9:26 AM, Roland Kuhn  > wrote:
>
>> Yes, this sounds like a good strategy to me.
>>
>> Regards,
>>
>> Roland
>>
>> *Dr. Roland Kuhn*
>> *Akka Tech Lead*
>> Typesafe <http://typesafe.com/> – Reactive apps on the JVM.
>> twitter: @rolandkuhn
>> <http://twitter.com/#!/rolandkuhn>
>>

Re: [akka-user] akka clusters on AWS ECS

2016-02-02 Thread Eric Swenson
Hello Odd,

Thanks. That article is what motivated me to write an AWS/ECS solution, since 
the details of cluster node discovery are different under AWS/ECS than under 
AWS/EC2/Autoscaling. 

Now mind you, it isn’t that I’m really looking for an AWS-specific solution — 
in fact the disadvantage of such a system is that it ties us to AWS. However, 
it does seem less error-prone to consult the source of truth for cluster node 
addresses — AWS/ECS in this case — rather than try to duplicate that state in 
another store/cache.  

— Eric

> On Feb 2, 2016, at 10:30 AM, Odd Möller  wrote:
> 
> Hi Eric
> 
> For an AWS specific solution perhaps Chris Loy's blog post on that subject 
> can be of interest: 
> http://chrisloy.net/2014/05/11/akka-cluster-ec2-autoscaling.html 
> <http://chrisloy.net/2014/05/11/akka-cluster-ec2-autoscaling.html>.
> 
> Greetings
> Odd
> 

[akka-user] akka-persistence and serialization

2016-02-04 Thread Eric Swenson
Since upgrading to akka 2.4.2-RC2, I’m seeing the following warning on startup:

background log: info: [WARN] [02/04/2016 15:21:46.414] 
[ClusterSystem-akka.actor.default-dispatcher-25] 
[akka.serialization.Serialization(akka://ClusterSystem)] Using the default Java 
serializer for class 
[org.xxx.eim.ExperimentInstance$RegisterProxySucceededEvent] which is not 
recommended because of performance implications. Use another serializer or 
disable this warning using the setting 
'akka.actor.warn-about-java-serializer-usage'

I’ve looked at the documentation here: 
http://doc.akka.io/docs/akka/snapshot/java/serialization.html 
but can find no documentation on what serialization implementations are 
recommended or what factors to consider when choosing a serialization 
implementation. I’m not really keen on having to write my own serializer, nor 
on adding serialization methods to each of the objects used in akka-persistence. 
What are the recommendations here?  

— Eric
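
One common route is a custom serializer bound to the event classes in 
configuration. Below is a minimal sketch assuming akka 2.4's 
SerializerWithStringManifest; the event type, byte layout and package name are 
illustrative only (a real application would more likely use protobuf, Kryo, or 
similar):

import akka.serialization.SerializerWithStringManifest
import java.nio.charset.StandardCharsets.UTF_8

// Illustrative event; the real events live in the application.
final case class RegisterProxySucceededEvent(proxyId: String)

class EventSerializer extends SerializerWithStringManifest {
  override def identifier: Int = 42001 // must be unique among serializers
  private val RegisterProxySucceededManifest = "rps"

  override def manifest(o: AnyRef): String = o match {
    case _: RegisterProxySucceededEvent => RegisterProxySucceededManifest
  }

  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case RegisterProxySucceededEvent(proxyId) => proxyId.getBytes(UTF_8)
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
    case RegisterProxySucceededManifest => RegisterProxySucceededEvent(new String(bytes, UTF_8))
    case other => throw new IllegalArgumentException(s"Unknown manifest: $other")
  }
}

// application.conf (illustrative):
//   akka.actor {
//     serializers { event = "com.example.EventSerializer" }
//     serialization-bindings { "com.example.RegisterProxySucceededEvent" = event }
//   }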



[akka-user] Re: Can't get akka clustering to work in docker

2016-03-18 Thread Eric Swenson
Well, I may be able to answer my own question. It absolutely does matter that 
the new remote cluster system is using the same akka-persistence (cassandra 
keyspace) store as the old (local) one.  When I changed the cassandra keyspace 
to a new one, everything started working.

So the question now is this: Do I have to have a separate akka-persistence 
journal and snapshot store for every node in the cluster?  This is very 
inconvenient, as it means I have to make up keyspace names that are somehow 
tied to each individual node.  I guess I can add the ip address of the Docker 
host to the keyspace name, but this isn’t terribly resilient.  Why does 
akka-persistence care? The journal should reflect events that apply to all 
nodes. If a node goes down (getting a new address) and a new one takes its 
place, it should be able to recover all the events from the old node.  There 
must be something else at play here.

Help!  

— Eric
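
For what it's worth, a minimal sketch of pointing a deployment at its own 
keyspaces, assuming the classic akka-persistence-cassandra plugin settings; the 
keyspace names below are placeholders:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Point this deployment at its own keyspaces rather than reusing one that
// still holds sharding-coordinator state from an older (127.0.0.1-bound) cluster.
val config = ConfigFactory.parseString(
  """
    |cassandra-journal.keyspace        = "eim_prod"
    |cassandra-snapshot-store.keyspace = "eim_prod_snapshot"
  """.stripMargin).withFallback(ConfigFactory.load())

val system = ActorSystem("ClusterSystem", config)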

> On Mar 18, 2016, at 7:17 PM, Eric Swenson  wrote:
> 
> One more thing to add to this, in case it is relevant.  I see multiple of 
> these messages in the log:
> 
> [akka://ClusterSystem/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator
>  
> ]
>  stopped
> 
> 
> First, why is it stopping (or why does it stop, in general), and secondly, is 
> it significant that the url prefix is akka://ClusterSystem/ 
>  rather than akka.tcp://ClusterSystem@10.0.3.170:2552/ 
> 
> 
> And second, I assume it makes no difference that I’m using the same 
> akka-persistence journal/snapshot store as I used when the app was binding to 
> 127.0.0.1:2552.  I get tons of log messages indicating that receiveRecover is 
> not happy trying to recover shards associated with 
> akka.tcp://ClusterSystem@127.0.0.1:2552/system/sharing/ExperimentInstance#-396422686
>  
> .
>   I’m assuming this is expected and that akka persistence should be able to 
> deal with this case. It should fail to recover these and then carry on with 
> new persistence events that are targeted to the new ClusterSystem on the new 
> IP address.
> 
> Examples of the rejected receiveRecover messages that I’m seeing are:
> 
> [akka.tcp://ClusterSystem@10.0.3.170:2552/system/cassandra-journal] Starting 
> message scan from 1
> [DEBUG] [03/19/2016 02:09:02.712] 
> [ClusterSystem-akka.actor.default-dispatcher-18] 
> [akka.tcp://ClusterSystem@10.0.3.170:2552/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator]
>  receiveRecover 
> ShardRegionRegistered(Actor[akka.tcp://ClusterSystem@127.0.0.1:2552/system/sharding/ExperimentInstance#-396422686])
> 
> — Eric
> 
>> On Mar 18, 2016, at 6:54 PM, Eric Swenson <e...@swenson.org> wrote:
>> 
>> I’ve been unsuccessful in trying to get an akka-cluster application that 
>> works fine with one instance to work when there are multiple members of the 
>> clusters.  A bit of background is in order:
>> 
>> 1) the application is an akka-cluster-sharing application
>> 2) it runs in a docker container
>> 3) the cluster is comprised of multiple docker hosts, each running the akka 
>> application
>> 4) the error I’m getting is this:
>> 
>> [WARN] [03/19/2016 01:39:18.086] 
>> [ClusterSystem-akka.actor.default-dispatcher-3] 
>> [akka.tcp://ClusterSystem@10.0.3.170:2552/system/sharding/ExperimentInstance] 
>> Trying to register to coordinator at 
>> [Some(ActorSelection[Anchor(akka://ClusterSystem/), 
>> Path(/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator)])], 
>> but no acknowledgement. Total [1] buffered messages.
>> [WARN] [03/19/2016 01:39:20.086] 
>> [ClusterSystem-akka.actor.default-dispatcher-3] 
>> [akka.tcp://ClusterSystem@10.0.3.170:2552/system/sharding/ExperimentInstance] 
>> Trying to register to coordinator at 
>> [Some(ActorSelection[Anchor(akka://ClusterSystem/), 
>> Path(/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator)])], 
>> but no acknowledgement. Total [1] buffered messages.
>> 
>> 5) the warning message is logged repeatedly and the cluster never 
>> initializes.
>> 6) I’ve set the following config parameters:
>>  akka.remote.netty.tcp.hostname: to the actual host ip address (the one 
>> accessible from all the other docker hosts)
>>  akka.remote.netty.tcp.bind-hostname: to 0.0.0.0 (so that it binds on 
>> the docker0 interface, on the ip address of the container)
>>  akka.remote.netty.tcp.port: 2552
>>  akka.remote.netty.tcp.bind-port:2552
>> 7) when I start the container, I map port 2552 on the host to port 2552 on 
>> the container.
>> 8) from the host, I’m able to do a “telnet  2

[akka-user] Re: Can't get akka clustering to work in docker

2016-03-18 Thread Eric Swenson
One more thing to add to this, in case it is relevant.  I see multiple of these 
messages in the log:

[akka://ClusterSystem/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator]
 stopped


First, why is it stopping (or why does it stop, in general), and secondly, is 
it significant that the url prefix is akka://ClusterSystem/ rather than 
akka.tcp://ClusterSystem@10.0.3.170:2552/

And second, I assume it makes no difference that I’m using the same 
akka-persistence journal/snapshot store as I used when the app was binding to 
127.0.0.1:2552.  I get tons of log messages indicating that receiveRecover is 
not happy trying to recover shards associated with 
akka.tcp://ClusterSystem@127.0.0.1:2552/system/sharing/ExperimentInstance#-396422686.
  I’m assuming this is expected and that akka persistence should be able to 
deal with this case. It should fail to recover these and then carry on with new 
persistence events that are targeted to the new ClusterSystem on the new IP 
address.

Examples of the rejected receiveRecover messages that I’m seeing are:

[akka.tcp://ClusterSystem@10.0.3.170:2552/system/cassandra-journal] Starting 
message scan from 1
[DEBUG] [03/19/2016 02:09:02.712] 
[ClusterSystem-akka.actor.default-dispatcher-18] 
[akka.tcp://ClusterSystem@10.0.3.170:2552/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator]
 receiveRecover 
ShardRegionRegistered(Actor[akka.tcp://ClusterSystem@127.0.0.1:2552/system/sharding/ExperimentInstance#-396422686])

— Eric

> On Mar 18, 2016, at 6:54 PM, Eric Swenson  wrote:
> 
> I’ve been unsuccessful in trying to get an akka-cluster application that 
> works fine with one instance to work when there are multiple members of the 
> clusters.  A bit of background is in order:
> 
> 1) the application is an akka-cluster-sharing application
> 2) it runs in a docker container
> 3) the cluster is comprised of multiple docker hosts, each running the akka 
> application
> 4) the error I’m getting is this:
> 
> [WARN] [03/19/2016 01:39:18.086] 
> [ClusterSystem-akka.actor.default-dispatcher-3] 
> [akka.tcp://ClusterSystem@10.0.3.170:2552/system/sharding/ExperimentInstance] 
> Trying to register to coordinator at 
> [Some(ActorSelection[Anchor(akka://ClusterSystem/), 
> Path(/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator)])], 
> but no acknowledgement. Total [1] buffered messages.
> [WARN] [03/19/2016 01:39:20.086] 
> [ClusterSystem-akka.actor.default-dispatcher-3] 
> [akka.tcp://ClusterSystem@10.0.3.170:2552/system/sharding/ExperimentInstance] 
> Trying to register to coordinator at 
> [Some(ActorSelection[Anchor(akka://ClusterSystem/), 
> Path(/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator)])], 
> but no acknowledgement. Total [1] buffered messages.
> 
> 5) the warning message is logged repeatedly and the cluster never initializes.
> 6) I’ve set the following config parameters:
>   akka.remote.netty.tcp.hostname: to the actual host ip address (the one 
> accessible from all the other docker hosts)
>   akka.remote.netty.tcp.bind-hostname: to 0.0.0.0 (so that it binds on 
> the docker0 interface, on the ip address of the container)
>   akka.remote.netty.tcp.port: 2552
>   akka.remote.netty.tcp.bind-port:2552
> 7) when I start the container, I map port 2552 on the host to port 2552 on 
> the container.
> 8) from the host, I’m able to do a “telnet  2552” so I 
> should be talking to the akka-remoting handler.
> 9) I’m setting the akka.cluster.seed-nodes to a list of one element:  
> akka.tcp://ClusterSystem@:2552.  I’m doing that 
> because, as far as I know, this seed-node list is advertised to all the other 
> members of the cluster (currently just the one) and must be accessible to 
> them all.  The Docker container ip address (seen by the docker container) is 
> on the private docker0 interface, which is not accessible from the outside 
> (from outside the host).
> 
> Now, before I tried all this, I set seed-nodes to a list of a single 
> akka.tcp://ClusterSystem@127.0.0.1:2552 entry, and left 
> akka.remote.netty.tcp.hostname, bind-hostname, port, and bind-port to their 
> default values.  In this configuration, the cluster comes up perfectly fine 
> and the application (with an akka-http frontend and a cluster sharing 
> backend) works perfectly.  In fact, when I use two seed nodes of the same 
> form, but different ports on the same local 127.0.0.1 host, and two instances 
> of the app (each binding to the different ports), the app works fine too.  
> The cluster comes up fine and each node finds the other.
> 
> But in a real environment, the multiple instances will be on different nodes 
> (different ip addresses), and in my case, as docker containers on different 
&

[akka-user] Can't get akka clustering to work in docker

2016-03-18 Thread Eric Swenson
I’ve been unsuccessful in trying to get an akka-cluster application that works 
fine with one instance to work when there are multiple members of the clusters. 
 A bit of background is in order:

1) the application is an akka-cluster-sharing application
2) it runs in a docker container
3) the cluster is comprised of multiple docker hosts, each running the akka 
application
4) the error I’m getting is this:

[WARN] [03/19/2016 01:39:18.086] 
[ClusterSystem-akka.actor.default-dispatcher-3] 
[akka.tcp://ClusterSystem@10.0.3.170:2552/system/sharding/ExperimentInstance] 
Trying to register to coordinator at 
[Some(ActorSelection[Anchor(akka://ClusterSystem/), 
Path(/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator)])], 
but no acknowledgement. Total [1] buffered messages.
[WARN] [03/19/2016 01:39:20.086] 
[ClusterSystem-akka.actor.default-dispatcher-3] 
[akka.tcp://ClusterSystem@10.0.3.170:2552/system/sharding/ExperimentInstance] 
Trying to register to coordinator at 
[Some(ActorSelection[Anchor(akka://ClusterSystem/), 
Path(/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator)])], 
but no acknowledgement. Total [1] buffered messages.

5) the warning message is logged repeatedly and the cluster never initializes.
6) I’ve set the following config parameters:
akka.remote.netty.tcp.hostname: to the actual host ip address (the one 
accessible from all the other docker hosts)
akka.remote.netty.tcp.bind-hostname: to 0.0.0.0 (so that it binds on 
the docker0 interface, on the ip address of the container)
akka.remote.netty.tcp.port: 2552
akka.remote.netty.tcp.bind-port:2552
7) when I start the container, I map port 2552 on the host to port 2552 on the 
container.
8) from the host, I’m able to do a “telnet  2552” so I 
should be talking to the akka-remoting handler.
9) I’m setting the akka.cluster.seed-nodes to a list of one element:  
akka.tcp://ClusterSystem@:2552.  I’m doing that because, as 
far as I know, this seed-node list is advertised to all the other members of 
the cluster (currently just the one) and must be accessible to them all.  The 
Docker container ip address (seen by the docker container) is on the private 
docker0 interface, which is not accessible from the outside (from outside the 
host).

Now, before I tried all this, I set seed-nodes to a list of a single 
akka.tcp://ClusterSystem@127.0.0.1:2552 entry, and left 
akka.remote.netty.tcp.hostname, bind-hostname, port, and bind-port to their 
default values.  In this configuration, the cluster comes up perfectly fine and 
the application (with an akka-http frontend and a cluster sharing backend) 
works perfectly.  In fact, when I use two seed nodes of the same form, but 
different ports on the same local 127.0.0.1 host, and two instances of the app 
(each binding to the different ports), the app works fine too.  The cluster 
comes up fine and each node finds the other.

But in a real environment, the multiple instances will be on different nodes 
(different ip addresses), and in my case, as docker containers on different 
docker hosts.  So clearly, the seed nodes must have externally accessible ip 
addresses.  
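
For reference, the bind-vs-advertised split from item 6 looks roughly like this 
in application.conf (values are illustrative only, using the example host 
address from the logs above):

akka.remote.netty.tcp {
  hostname      = "10.0.3.170"   # externally reachable address advertised to peers
  port          = 2552
  bind-hostname = "0.0.0.0"      # address the container actually binds on
  bind-port     = 2552
}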

Can anyone shed any light on what might be going wrong?  

How might I debug this?  Thanks. — Eric



 



Re: [akka-user] Re: Can't get akka clustering to work in docker

2016-03-22 Thread Eric Swenson
Hi Endre,

I have read that part of the documentation, and after switching to a new 
cassandra journal keyspace, everything is working as it should be. I've 
confirmed that if I bump the "ECS Service" count to 2 (or greater), the ECS 
Task (describing an akka-cluster-sharding application) gets deployed on 
another ECS Container instance. With the custom seed enumeration code I 
wrote (which queries the ECS cluster/service for the IP addresses of all 
the ECS container instances) and the custom code to set 
akka.remote.netty.tcp.hostname to the IP address of the container instance 
(rather than the docker instance's docker0 ip address), the cluster members 
all see each other and all nodes join the cluster.

For the (written) record, since others will undoubtedly run into the same 
issue as I did, here are the important things you have to do to get an 
akka-cluster application working on AWS/ECS.

1) You have to dynamically configure the seed nodes to the IP addresses of 
the ECS container instances in your cluster. The way I did this was as 
follows:  I wrote a scala library using the AWS Java SDK that, given the 
ECS Cluster Arn and Service Name, enumerated the tasks for the service, and 
for those tasks, enumerates the container instances on which the tasks are 
running. Given those container instances the code determines the EC2 
instance ID of the EC2 instances hosting the container instances. And using 
the EC2 DescribeInstances API, it determines the IP address (private, in my 
case) of the EC2 instances.  Finally, the IP addresses are mapped to the 
akka.tcp URLs required to configure the seed nodes.

2) You have to dynamically configure the akka.remote.netty.tcp.hostname to 
be the IP address of the ECS container instance on which your docker 
container is running. With no customizations, akka will set this to the IP 
address on the docker0 interface, which is a private IP address not 
accessible from other akka cluster members. Since there doesn't appear to 
be a way, on AWS ECS for a docker container to determine the IP address of 
the docker host (ECS Container Instance), I "cheated".  I used the metadata 
URL that all EC2 instances support, to query the ip address 
(http://169.254.169.254/latest/meta-data/local-ipv4).
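
To make (2) concrete, here is roughly the shape of the bootstrap code, as a 
sketch only: the seed-node lookup is reduced to a placeholder (an environment 
variable) rather than the real AWS SDK calls, and error handling is omitted.

// Sketch: advertise the EC2 host's private IP (from the instance metadata
// service) instead of the docker0 address, then join the computed seed nodes.
import akka.actor.{ActorSystem, Address}
import akka.cluster.Cluster
import com.typesafe.config.ConfigFactory
import scala.io.Source

object ClusterBootstrap extends App {
  // EC2 instance metadata endpoint; returns the host's private IPv4 address.
  val hostIp =
    Source.fromURL("http://169.254.169.254/latest/meta-data/local-ipv4").mkString.trim

  val config = ConfigFactory
    .parseString(s"""akka.remote.netty.tcp.hostname = "$hostIp" """)
    .withFallback(ConfigFactory.load())

  val system = ActorSystem("ClusterSystem", config)

  // Placeholder for the ECS/EC2 lookup from (1); here just a comma-separated
  // list of host IPs taken from an environment variable.
  def seedHosts: List[String] =
    sys.env.getOrElse("SEED_HOSTS", hostIp).split(",").toList

  val seedNodes: List[Address] =
    seedHosts.map(ip => Address("akka.tcp", "ClusterSystem", ip, 2552))

  Cluster(system).joinSeedNodes(seedNodes)
}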

-- Eric

On Tuesday, March 22, 2016 at 3:06:59 AM UTC-7, Akka Team wrote:
>
> Hi Eric,
>
> I have no experience with Docker at all, but it does feel wrong (unless 
> very specific use-cases) to have separate journals and snapshot stores 
> *per-node*. I think you might have an issue with Docker NAT. Have you read 
> this part of the documentation: 
> http://doc.akka.io/docs/akka/2.4.2/scala/remoting.html#Akka_behind_NAT_or_in_a_Docker_container
>
> -Endre
>
> On Sat, Mar 19, 2016 at 3:27 AM, Eric Swenson wrote:
>
>> Well, I may be able to answer my own question. It absolutely does matter 
>> that the new remote cluster system is using the same akka-persistence 
>> (cassandra keyspace) store as the old (local) one.  When I changed the 
>> cassandra keyspace to a new one, everything started working.
>>
>> So the question now is this: Do I have to have a separate 
>> akka-persistence journal and snapshot store for every node in the cluster?  
>> This is very inconvenient, as it means I have to make up keyspace names 
>> that are somehow tied to each individual node.  I guess I can add the ip 
>> address of the Docker host to the keyspace name, but this isn’t terribly 
>> resilient.  Why does akka-persistence care? The journal should reflect 
>> events that apply to all nodes. If a node goes down (getting a new address) 
>> and a new one takes its place, it should be able to recover all the events 
>> from the old node.  There must be something else at play here.
>>
>> Help!  
>>
>> — Eric
>>
>> On Mar 18, 2016, at 7:17 PM, Eric Swenson wrote:
>>
>> One more thing to add to this, in case it is relevant.  I see multiple of 
>> these messages in the log:
>>
>> [akka://ClusterSystem/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator] stopped
>>
>>
>> First, why is it stopping (or why does it stop, in general), and 
>> secondly, is it significant that the url prefix is akka://ClusterSystem/ 
>> rather than akka.tcp://ClusterSystem@10.0.3.170:2552/
>>
>> And second, I assume it makes no difference that I’m using the same 
>> akka-persistence journal/snapshot store as I used when the app was binding 
>> to 127.0.0.1:2552.  I get tons of log messages indicating that 
>> receiveRecover is not happy trying to recover shards associated with 
>> akka.tcp://ClusterSystem@127.0.0.1:2552/system/sharing/ExperimentInstance#-396422686.
>>

[akka-user] akka-http Http.outgoingConnectionHttps and self-signed certs

2016-05-17 Thread Eric Swenson
I have a need (no, not in production) to have an akka-based service contact 
another service using TLS where the remote service is using a self-signed 
cert.

I've used AkkaSSLConfig to configure the "loose" settings:

val looseConfig = SSLLooseConfig().withAcceptAnyCertificate(true).
  withDisableHostnameVerification(true).
  withAllowLegacyHelloMessages(Some(true)).
  withAllowUnsafeRenegotiation(Some(true)).
  withAllowWeakCiphers(true).
  withAllowWeakProtocols(true).
  withDisableSNI(true)


and despite trying all of these, I still get the following exception when 
trying to access the remote service:

 sun.security.validator.ValidatorException: PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target


It was my impression that the loose config:


withAcceptAnyCertificate(true)


should have prevented the TLS implementation from trying to verify the 
cert. 


What am I missing?  What is the correct way to accept self-signed certs 
using akka-http's Http() client?
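
For context, the alternative I'm aware of (and was hoping to avoid) is to build 
a plain javax.net.ssl.SSLContext from a trust store containing the self-signed 
cert and hand it to the client explicitly. A sketch only; the path, password 
and host below are placeholders:

// Sketch: an HttpsConnectionContext backed by a JKS trust store that holds the
// self-signed certificate.
import java.io.FileInputStream
import java.security.KeyStore
import javax.net.ssl.{SSLContext, TrustManagerFactory}

import akka.actor.ActorSystem
import akka.http.scaladsl.{ConnectionContext, Http}
import akka.stream.ActorMaterializer

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

val trustStore = KeyStore.getInstance("JKS")
trustStore.load(new FileInputStream("/path/to/self-signed-cert.jks"), "changeit".toCharArray)

val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
tmf.init(trustStore)

val sslContext = SSLContext.getInstance("TLS")
sslContext.init(null, tmf.getTrustManagers, null)

val httpsContext = ConnectionContext.https(sslContext)

// Use the custom context only for the connection that needs it:
val connectionFlow = Http().outgoingConnectionHttps("self-signed.example.com", 443, httpsContext)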


-- Eric







Re: [akka-user] akka-http Http.outgoingConnectionHttps and self-signed certs

2016-05-17 Thread Eric Swenson
I don't want or need to configure a specific trust anchor. I want to be able to 
do the equivalent of "curl -k" on a set of local servers, with different 
signing certs. I would have thought the loose "acceptAnyCertificate" would have 
been precisely for this.  What does that setting do?

If the only way to allow self-signed certs is through setting up a trust store, 
I can do that.


-- Eric


> On May 17, 2016, at 16:30, Konrad Malawski wrote:
> 
> Have you attempted to "do the right thing" ™?
> Which is to add the certificate to a trust store, instead of disabling all 
> TLS features?
> 
> It's actually not that hard and documented here: 
> http://typesafehub.github.io/ssl-config/CertificateGeneration.html
> 
> Also, you can always drop down to the raw low level Java APIs, as this 
> example shows: 
> https://github.com/akka/akka/blob/master/akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
> (it's server side, but the same process can be done for client – pretty much)
> 
> -- 
> Konrad `ktoso` Malawski
> Akka @ Lightbend
> 
>> On 18 May 2016 at 01:25:32, Eric Swenson (e...@swenson.org) wrote:
>> 
>> I have a need (no, not in production) to have an akka-based service contact 
>> another service using TLS where the remote service is using a self-signed 
>> cert.
>> 
>> I've used AkkaSSLConfig to configure the "loose" settings:
>> 
>> val looseConfig = SSLLooseConfig().withAcceptAnyCertificate(true).
>>   withDisableHostnameVerification(true).
>>   withAllowLegacyHelloMessages(Some(true)).
>>   withAllowUnsafeRenegotiation(Some(true)).
>>   withAllowWeakCiphers(true).
>>   withAllowWeakProtocols(true).
>>   withDisableSNI(true)
>> 
>> and despite trying all of these, I still get the following exception when trying 
>> to access the remote service:
>> 
>>  sun.security.validator.ValidatorException: PKIX path building failed: 
>> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
>> valid certification path to requested target
>> 
>> 
>> 
>> It was my impression that the loose config:
>> 
>> 
>> 
>> withAcceptAnyCertificate(true)
>> 
>> 
>> 
>> should have prevented the TLS implementation from trying to verify the cert. 
>> 
>> 
>> 
>> What am I missing?  What is the correct way to accept self-signed certs 
>> using akka-http's Http() client?
>> 
>> 
>> 
>> -- Eric
>> 
>> 
>> 
>> 
>> 
>> 



Re: [akka-user] akka-http Http.outgoingConnectionHttps and self-signed certs

2016-05-18 Thread Eric Swenson
Apart from my prior point — that it is not practical for my test environment to 
configure all the trust anchors (self signed cert signer), I decided to try it 
anyhow for a single self-signed cert. I still am having issues: here is the 
code:

val trustStoreConfig = TrustStoreConfig(None, 
Some("/Users/eswenson/self-signed-cert.jks"))
  val trustManagerConfig = 
TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))

  val looseConfig = SSLLooseConfig().withAcceptAnyCertificate(true).
withDisableHostnameVerification(true).
withAllowLegacyHelloMessages(Some(true)).
withAllowUnsafeRenegotiation(Some(true)).
withAllowWeakCiphers(true).
withAllowWeakProtocols(true).
withDisableSNI(true)

  val sslConfig = AkkaSSLConfig().mapSettings(s =>
 s.withLoose(looseConfig).withTrustManagerConfig(trustManagerConfig)
  )

  val connectionContext = Http().createClientHttpsContext(sslConfig)

  lazy val connectionFlow: Flow[HttpRequest, HttpResponse, Any] =
Http().outgoingConnectionHttps(host, port, connectionContext)

  def httpSRequest(request: HttpRequest): Future[HttpResponse] =
Source.single(request).via(connectionFlow).runWith(Sink.head)

As you can see, I’m using a trust store with the self-signed cert in it.  Even 
with the trust store and enabling all the loose config options (I tried it 
without looseConfig to no avail), I’m still getting errors:

background log: info: [INFO] [05/17/2016 18:49:17.574] 
[ClusterSystem-akka.actor.default-dispatcher-25] 
[ExperimentInstance(akka://ClusterSystem)] fetchExperiment: 
exception=akka.stream.ConnectionException: Hostname verification failed! 
Expected session to be for 
xxx-GfsElb-1RLMB4EAK0HUM-785838730.us-west-2.elb.amazonaws.com
background log: error: akka.stream.ConnectionException: Hostname verification 
failed! Expected session to be for 
xxx-GfsElElb-1RLMB4EAK0HUM-785838730.us-west-2.elb.amazonaws.com

Why is it doing any host name verification?  The loose config specifies:

 withDisableHostnameVerification(true)

I’m finding it hard to believe it is this hard to do HTTPS with self-signed 
certs.  Any suggestions?

— Eric

> On May 17, 2016, at 5:11 PM, Eric Swenson  wrote:
> 
> I don't want or need to configure a specific trust anchor. I want to be able 
> to do the equivalent of "curl -k" on a set of local servers, with different 
> signing certs. I would have thought the loose "acceptAnyCertificate" would 
> have been precisely for this.  What does that setting do?
> 
> If the only way to allow self-signed certs is through setting up a trust 
> store, I can do that.
> 
> 
> -- Eric
> 
> 
> On May 17, 2016, at 16:30, Konrad Malawski <konrad.malaw...@lightbend.com> wrote:
> 
>> Have you attempted to "do the right thing" ™?
>> Which is to add the certificate to a trust store, instead of disabling all 
>> TLS features?
>> 
>> It's actually not that hard and documented here: 
>> http://typesafehub.github.io/ssl-config/CertificateGeneration.html 
>> 
>> Also, you can always drop down to the raw low level Java APIs, as this 
>> example shows: 
>> https://github.com/akka/akka/blob/master/akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
>> (it's server side, but the same process can be done for client – pretty much)
>> 
>> -- 
>> Konrad `ktoso` Malawski
>> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
>> On 18 May 2016 at 01:25:32, Eric Swenson (e...@swenson.org) wrote:
>> 
>>> I have a need (no, not in production) to have an akka-based service contact 
>>> another service using TLS where the remote service is using a self-signed 
>>> cert.
>>> 
>>> I've used AkkaSSLConfig to configure the "loose" settings:
>>> 
>>> val looseConfig = SSLLooseConfig().withAcceptAnyCertificate(true).
>>>   withDisableHostnameVerification(true).
>>>   withAllowLegacyHelloMessages(Some(true)).
>>>   withAllowUnsafeRenegotiation(Some(true)).
>>>   withAllowWeakCiphers(true).
>>>   withAllowWeakProtocols(true).
>>>   withDisableSNI(true)
>>> 
>>> and despite trying all of these, I still get the following exception when 
>>> trying to access the remote service:
>>> 
>>>  sun.security.validator.ValidatorException: 

[akka-user] akka-http web socket issue

2016-07-06 Thread Eric Swenson
I have a web service, implemented in akka-http, to which I’ve added web socket 
support.  I have an akka-http client that is able to connect to the service and 
receive a single message, whereupon the web socket is, for some reason, 
disconnected.  I do not understand why it becomes disconnected. There is no 
intentional code on the server side that should disconnect the web socket.  On 
the client, I’m using code virtually identical to that found here:

http://doc.akka.io/docs/akka/2.4.7/scala/http/client-side/websocket-support.html#websocketclientflow
 


I’m finding that this code:

val connected = upgradeResponse.flatMap { upgrade =>
  if (upgrade.response.status == StatusCodes.OK) {
Future.successful(Done)
  } else {
throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
  }
}

upon connection, raises the above RuntimeException.  The value of 
upgrade.response.status is "101 Switching Protocols”. I do receive one message 
from the server, which the client displays before the “closed” Future is 
completed:

val (upgradeResponse, closed) =
  outgoing
.viaMat(webSocketFlow)(Keep.right) // keep the materialized 
Future[WebSocketUpgradeResponse]
.toMat(incoming)(Keep.both) // also keep the Future[Done]
.run()
When the closed Future completes, this code is executed:

closed.foreach(_ => println("closed"))
and I see this “closed” message displayed.  However, before it is displayed, I 
do get a message sent by the service to the client.

My questions:  1) isn’t the “101 Switching Protocols” status expected?  Why is 
that not handled in the if clause of the upgrade response handling logic?  I 
did change the conditional to:

if (upgrade.response.status == StatusCodes.OK || upgrade.response.status == 
StatusCodes.SwitchingProtocols)
But while that prevents the runtime exception, the connection still 
automatically closes after receiving the first message.

On the server side, I have logic that looks like this:

class WebSocketConnection(experimentInstanceId: String, httpActorSystem: 
ActorSystem, clusterActorSystem: ActorSystem) {
  
  private[this] val experimentInstanceRegion = 
ClusterSharding(clusterActorSystem).shardRegion(ExperimentInstance.shardName)

  def websocketFlow: Flow[Message, Message, _] =
Flow.fromGraph(
  GraphDSL.create(Source.actorRef[WsMessage](bufferSize = 5, 
OverflowStrategy.fail)) { implicit builder =>
wsMessageSource => //source provided as argument

  // flow used as input from web socket
  val fromWebsocket = builder.add(
Flow[Message].collect {
  case TextMessage.Strict(txt) =>
println(s"txt=$txt")
WsIncomingMessage(experimentInstanceId, txt)
})

  // flow used as output, it returns Messages
  val backToWebsocket = builder.add(
Flow[WsMessage].map {
  case WsMessage(author, text) => TextMessage(s"[$author]: $text")
}
  )

  // send messages to the actor; on stream completion, 
  // WsDisconnected(experimentInstanceId) is sent as the completion message
  val actorSink = 
Sink.actorRef[WebSocketEvent](experimentInstanceRegion, 
WsDisconnected(experimentInstanceId))

  // merges both pipes
  val merge = builder.add(Merge[WebSocketEvent](2))

  val actorAsSource = builder.materializedValue.map(actor => 
WsConnected(experimentInstanceId, actor))

  fromWebsocket ~> merge.in(0)

  actorAsSource ~> merge.in(1)

  merge ~> actorSink

  wsMessageSource ~> backToWebsocket

  // expose ports

  FlowShape(fromWebsocket.in, backToWebsocket.outlet)
  })

  def sendMessage(message: WsMessage): Unit = experimentInstanceRegion ! message
}
In the actor associated with the experimentInstanceRegion (e.g. a cluster 
sharing region), I see log messages indicating a WsConnected message is 
received.  And then shortly after, I see that the WsDisconnected message is 
received.  

I haven’t touched the web socket support logic in quite some time.  However, 
this all used to work — in other words, the client would stay active and 
display messages received from the server, and the server would continue 
sending messages to the client over the web socket connection until the client 
disconnected (on purpose).  This was all working in akka 2.3 and may have 
broken when I upgraded to the various 2.4 releases and is still broken using 
2.4.7.  

Any suggestions on how to debug this?  Any reason for the auto-disconnection?  
And what about the 101 Switching Protocols handling (mentioned above)?

Thanks. — Eric


[akka-user] Re: akka-http web socket issue

2016-07-06 Thread Eric Swenson
I wrote a simple python client:

$ cat ws-test.py
import os
from websocket import create_connection

eiid=os.getenv("EIID")
token=os.getenv("BEARER_TOKEN")
header="Authorization: Bearer %s" % token
ws = create_connection("ws://localhost:9000/ws-connect/%s" % eiid, 
header=[header])
while True:
x = ws.recv()
print(x)

And used it with my server.  It does not disconnect and correctly responds to 
all messages sent by the server.  So that rules out problems with the 
akka-http-based server.  it would appear that the issue is with the akka-http 
client (whose code I got from this web page:

> http://doc.akka.io/docs/akka/2.4.7/scala/http/client-side/websocket-support.html#websocketclientflow
>  
> 
Is there some reason why this code does not work?  As mentioned previously, the 
code connects, retrieves and displays one message sent by the server just after 
connection, and then promptly disconnects the websocket connection.

Are there known issues with client-side web socket support in 2.4.7?

— Eric



[akka-user] Re: akka-http web socket issue

2016-07-07 Thread Eric Swenson
Thanks, Giovanni. That was precisely the problem and while I didn't see the 
article you quoted, I "fixed" the problem on my end by having my client use 
a Source.fromFuture of a promise which I didn't complete.  I'll switch to 
using the approach in the article you referenced.  Still, I think there 
will be lots of people who may use the web socket client in the 
documentation as a starting point. They will expect that the client will 
continue to display messages received from the service while the client is 
running.  I think the example should either a) document that the client 
closes after processing a single message and how to make it maintain the 
web socket connection open, or better, in my opinion, b) use Source.maybe 
to ensure the connection doesn't close.
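
For anyone who lands on this thread later, a minimal sketch of option (b); the 
endpoint URL and the single outbound message are placeholders, and this assumes 
the akka-http 2.4.x client API:

// Sketch: keep the client's outbound side open with Source.maybe so the
// WebSocket is not half-closed after the initial message.
import akka.Done
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage, WebSocketRequest}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, Sink, Source}
import scala.concurrent.{Future, Promise}

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

val webSocketFlow =
  Http().webSocketClientFlow(WebSocketRequest("ws://localhost:9000/ws-connect/some-id"))

val incoming: Sink[Message, Future[Done]] =
  Sink.foreach[Message](msg => println(s"received: $msg"))

// Send one message, then leave the outbound stream open; completing the
// materialized Promise later (promise.success(None)) closes it cleanly.
val outgoing: Source[Message, Promise[Option[Message]]] =
  Source.single(TextMessage("hello")).concatMat(Source.maybe[Message])(Keep.right)

val ((promise, upgradeResponse), closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.both)
    .toMat(incoming)(Keep.both)
    .run()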

On a related note, if the web socket server side is implemented using 
akka-http, the example still closes the connection after 1 minute due to 
the server side's not having gotten any TCP packets over the connection. 
The documentation should either a) include a description of how to change 
this timeout or b) have the client include heartbeat logic to keep the 
connection alive.  

-- Eric

On Thursday, July 7, 2016 at 3:10:54 AM UTC-7, Giovanni Alberto Caporaletti 
wrote:
>
> Are you keeping the outbound client flow open? 
>
>
> http://doc.akka.io/docs/akka/2.4.7/scala/http/client-side/websocket-support.html#Half-Closed_WebSockets
>
>
>>



[akka-user] Re: from akka.persistence.journal.ReplayFilter Invalid replayed event [1] in buffer from old writer

2016-08-02 Thread Eric Swenson
I'm getting this error consistently now, and don't know why this is 
happening nor what to do about it.  I form the persistenceId this way:

override def persistenceId: String = self.path.parent.parent.name + "-" 
+ self.path.name

So I don't see how I could have two persisters with the same id.  I'm 
unable to bring up my akka cluster due to this error.

Any suggestions?  I'm running akka 2.4.8 with akka.persistence:

journal.plugin = "cassandra-journal"
snapshot-store.plugin = "cassandra-snapshot-store"


On Monday, April 25, 2016 at 3:34:21 AM UTC-7, Tim Pigden wrote:
>
> Hi
> I'm getting this message. I'm probably doing something wrong but any idea 
> what that might be? I know what messages I'm persisting and this particular 
> test is one in which I kill off my persistor and restart it.
> Or does it indicate the message is failing to deserialize or something 
> like that
>
> 2016-04-25 10:33:47,570 - ERROR - from 
> com.optrak.opkakka.ddd.persistence.SimplePersistor Persistence failure when 
> replaying events for persistenceId [shd-matrix-testId]. Last known sequence 
> number [0] 
> java.lang.IllegalStateException: Invalid replayed event [1] in buffer from 
> old writer [f6bd09c4-1f1c-4710-8cf6-c6f0776f39d3] with persistenceId 
> [shd-matrix-testId]
>at 
> akka.persistence.journal.ReplayFilter$$anonfun$receive$1.applyOrElse(ReplayFilter.scala:125)
>at akka.actor.Actor$class.aroundReceive(Actor.scala:480)
>at 
> akka.persistence.journal.ReplayFilter.aroundReceive(ReplayFilter.scala:50)
>
>
>
> Any suggests much appreciated!
>
>
>
>
>



[akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-03 Thread Eric Swenson
We have an akka-cluster/sharding application deployed on AWS/ECS, where 
each instance of the application is a Docker container.  An ECS service 
launches N instances of the application based on configuration data.  It is 
not possible to know, for certain, the IP addresses of the cluster members. 
 Upon startup, before the AkkaSystem is created, the code currently polls 
AWS and determines the IP addresses of all the Docker hosts (which 
potentially could run the akka application).  It sets these IP addresses as 
the seed nodes before bringing up the akka cluster system. The 
configuration for these has, up until yesterday always included the 
akka.cluster.auto-down-unreachable-after configuration setting.  And it has 
always worked.  Furthermore, it supports two very critical requirements:

a) an instance of the application can be removed at any time, due to 
scaling or rolling updates
b) an instance of the application can be added at any time, due to scaling 
or rolling updates

On the advice of an Akka expert on the Gitter channel, I removed the 
auto-down-unreachable-after setting, which, as documented, is dangerous for 
production.  As a result the system no longer supports rolling updates.  A 
rolling update occurs thus:  a new version of the application is deployed 
(a new ECS task definition is created with a new Docker image).  The ECS 
service launches a new task (Docker container running on an available host) 
and once that container becomes stable, it kills one of the remaining 
instances (cluster members) to bring the number of instances to some 
configured value.  

When this happens, akka-cluster becomes very unhappy and becomes 
unresponsive.  Without the auto-down-unreachable-after setting, it keeps 
trying to talk to the old cluster member, which is no longer present.  It 
appears to NOT recover from this.  There is a constant barrage of messages 
of the form:

[DEBUG] [08/04/2016 00:19:27.126] 
[ClusterSystem-cassandra-plugin-default-dispatcher-27] 
[akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
sequence [/system/sharding/ExperimentInstance#-389574371] failed
[DEBUG] [08/04/2016 00:19:27.140] 
[ClusterSystem-cassandra-plugin-default-dispatcher-27] 
[akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
sequence [/system/sharding/ExperimentInstance#-389574371] failed
[DEBUG] [08/04/2016 00:19:27.142] 
[ClusterSystem-cassandra-plugin-default-dispatcher-27] 
[akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
sequence [/system/sharding/ExperimentInstance#-389574371] failed
[DEBUG] [08/04/2016 00:19:27.143] 
[ClusterSystem-cassandra-plugin-default-dispatcher-27] 
[akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
sequence [/system/sharding/ExperimentInstance#-389574371] failed
[DEBUG] [08/04/2016 00:19:27.143] 
[ClusterSystem-cassandra-plugin-default-dispatcher-27] 
[akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
sequence [/system/sharding/ExperimentInstance#-389574371] failed

and of the form:

[WARN] [08/04/2016 00:19:16.787] 
[ClusterSystem-akka.actor.default-dispatcher-9] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/sharding/ExperimentInstance] 
Retry request for shard [5] homes from coordinator at 
[Actor[akka.tcp://ClusterSystem@10.0.3.100:2552/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator#1679517511]].
 
[1] buffered messages.
[WARN] [08/04/2016 00:19:18.787] 
[ClusterSystem-akka.actor.default-dispatcher-9] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/sharding/ExperimentInstance] 
Retry request for shard [23] homes from coordinator at 
[Actor[akka.tcp://ClusterSystem@10.0.3.100:2552/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator#1679517511]].
 
[1] buffered messages.
[WARN] [08/04/2016 00:19:18.787] 
[ClusterSystem-akka.actor.default-dispatcher-9] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/sharding/ExperimentInstance] 
Retry request for shard [1] homes from coordinator at 
[Actor[akka.tcp://ClusterSystem@10.0.3.100:2552/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator#1679517511]].
 
[1] buffered messages.
[WARN] [08/04/2016 00:19:18.787] 
[ClusterSystem-akka.actor.default-dispatcher-9] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/sharding/ExperimentInstance] 
Retry request for shard [14] homes from coordinator at 
[Actor[akka.tcp://ClusterSystem@10.0.3.100:2552/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator#1679517511]].
 
[1] buffered messages.
[WARN] [08/04/2016 00:19:18.787] 
[ClusterSystem-akka.actor.default-dispatcher-9] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/sharding/ExperimentInstance] 
Retry request for shard [5] homes from coordinator at 
[Actor[akka.tcp://ClusterSystem@10.0.3.100:2552/system/sharding/ExperimentInstanceCoordinator/singleton/coordinator#1679517511]].
 
[1] buffered messages.

and then a message like this:

[WARN] [08/03/2016 23:

Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-04 Thread Eric Swenson
Thanks very much, Justin.  I appreciate your suggested approach and will 
implement something along those lines.  In summary, I believe, I should do 
the following:

1) handle notifications of nodes going offline in my application
2) query AWS/ECS to see if the node is *really* supposed to be offline 
(meaning that it has been removed for autoscaling or replacement reasons),
3) if yes, then manually down the node

This makes perfect sense. The philosophy of going to the single source of 
truth about the state of the cluster (in this case, AWS/ECS) seems to be 
apt here.
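
Roughly what I have in mind, as a sketch: the ECS check itself is reduced to a 
placeholder predicate here, and the actor just reacts to unreachability events 
and downs members that the source of truth says are really gone.

// Sketch: subscribe to cluster membership events and manually down members
// that AWS/ECS confirms were intentionally removed. isTerminatedInEcs is a
// placeholder for the ECS/EC2 lookup described in (2).
import akka.actor.{Actor, ActorLogging, Props}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.{MemberRemoved, UnreachableMember}

class EcsAwareDowning extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  override def preStart(): Unit =
    cluster.subscribe(self, classOf[UnreachableMember], classOf[MemberRemoved])

  override def postStop(): Unit =
    cluster.unsubscribe(self)

  def receive: Receive = {
    case UnreachableMember(member) =>
      // (2) ask the source of truth whether this node was intentionally removed
      if (isTerminatedInEcs(member.address.host)) {
        log.info("Downing {} because ECS reports it terminated", member.address)
        cluster.down(member.address) // (3) manual downing
      }
    case MemberRemoved(member, _) =>
      log.info("Member removed: {}", member.address)
  }

  // Placeholder: query ECS/EC2 and return true only if the host is really gone.
  private def isTerminatedInEcs(host: Option[String]): Boolean = false
}

object EcsAwareDowning {
  def props: Props = Props(new EcsAwareDowning)
}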

Thanks again.  -- Eric

On Wednesday, August 3, 2016 at 7:00:42 PM UTC-7, Justin du coeur wrote:
>
> The keyword here is "auto".  Autodowning is an *incredibly braindead* 
> algorithm for dealing with nodes coming out of service, and if you use it 
> in production you more or less guarantee disaster, because that algorithm 
> can't cope with cluster partition.  You *do* need to deal with downing, but 
> you have to get something smarter than that.
>
> Frankly, if you're already hooking into AWS, I *suspect* the best approach 
> is to leverage that -- when a node goes offline, you have some code to 
> detect that through the ECS APIs, react to it, and manually down that node. 
>  (I'm planning on something along those lines for my system, but haven't 
> actually tried yet.)  But whether you do that or something else, you've got 
> to add *something* that does downing.
>
> I believe the official party line is "Buy a Lightbend Subscription", 
> through which you can get their Split Brain Resolver, which is a fairly 
> battle-hardened module for dealing with this problem.  That's not strictly 
> necessary, but you *do* need to have a reliable solution...
>
> On Wed, Aug 3, 2016 at 8:42 PM, Eric Swenson wrote:
>
>> We have an akka-cluster/sharding application deployed an AWS/ECS, where 
>> each instance of the application is a Docker container.  An ECS service 
>> launches N instances of the application based on configuration data.  It is 
>> not possible to know, for certain, the IP addresses of the cluster 
>> members.  Upon startup, before the AkkaSystem is created, the code 
>> currently polls AWS and determines the IP addresses of all the Docker hosts 
>> (which potentially could run the akka application).  It sets these IP 
>> addresses as the seed nodes before bringing up the akka cluster system. The 
>> configuration for these has, up until yesterday always included the 
>> akka.cluster.auto-down-unreachable-after configuration setting.  And it has 
>> always worked.  Furthermore, it supports two very critical requirements:
>>
>> a) an instance of the application can be removed at any time, due to 
>> scaling or rolling updates
>> b) an instance of the application can be added at any time, due to 
>> scaling or rolling updates
>>
>> On the advice of an Akka expert on the Gitter channel, I removed the 
>> auto-down-unreachable-after setting, which, as documented, is dangerous for 
>> production.  As a result the system no longer supports rolling updates.  A 
>> rolling update occurs thus:  a new version of the application is deployed 
>> (a new ECS task definition is created with a new Docker image).  The ECS 
>> service launches a new task (Docker container running on an available host) 
>> and once that container becomes stable, it kills one of the remaining 
>> instances (cluster members) to bring the number of instances to some 
>> configured value.  
>>
>> When this happens, akka-cluster becomes very unhappy and becomes 
>> unresponsive.  Without the auto-down-unreachable-after setting, it keeps 
>> trying to talk to the old cluster members. which is no longer present.  It 
>> appears to NOT recover from this.  There is a constant barrage of messages 
>> of the form:
>>
>> [DEBUG] [08/04/2016 00:19:27.126] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.140] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.142] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.143] 
>> [Clust

Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-04 Thread Eric Swenson
Thanks, Konrad. I will replace the use of auto-down with a scheme such as 
that proposed by Justin. I have also reached out to Lightbend (and received 
a call back already) regarding subscription services from Lightbend.  -- 
Eric

On Thursday, August 4, 2016 at 1:26:35 AM UTC-7, Konrad Malawski wrote:
>
> Just to re-affirm what Justin wrote there.
>
> Auto downing is "auto". It's dumb. That's why it's not safe.
> The safer automatic downing modes are in 
> doc.akka.io/docs/akka/rp-16s01p05/scala/split-brain-resolver.html
> Yes, that's a commercial thing.
>
> If you don't want to use these, use EC2's APIs - they have APIs from which 
> you can get information about state like that.
>
> -- 
> Konrad `ktoso` Malawski
> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>
> On 4 August 2016 at 04:00:34, Justin du coeur (jduc...@gmail.com) wrote:
>
> The keyword here is "auto".  Autodowning is an *incredibly braindead* 
> algorithm for dealing with nodes coming out of service, and if you use it 
> in production you more or less guarantee disaster, because that algorithm 
> can't cope with cluster partition.  You *do* need to deal with downing, but 
> you have to get something smarter than that. 
>
> Frankly, if you're already hooking into AWS, I *suspect* the best approach 
> is to leverage that -- when a node goes offline, you have some code to 
> detect that through the ECS APIs, react to it, and manually down that node. 
>  (I'm planning on something along those lines for my system, but haven't 
> actually tried yet.)  But whether you do that or something else, you've got 
> to add *something* that does downing.
>
> I believe the official party line is "Buy a Lightbend Subscription", 
> through which you can get their Split Brain Resolver, which is a fairly 
> battle-hardened module for dealing with this problem.  That's not strictly 
> necessary, but you *do* need to have a reliable solution...
>
> On Wed, Aug 3, 2016 at 8:42 PM, Eric Swenson wrote:
>
>> We have an akka-cluster/sharding application deployed an AWS/ECS, where 
>> each instance of the application is a Docker container.  An ECS service 
>> launches N instances of the application based on configuration data.  It is 
>> not possible to know, for certain, the IP addresses of the cluster 
>> members.  Upon startup, before the AkkaSystem is created, the code 
>> currently polls AWS and determines the IP addresses of all the Docker hosts 
>> (which potentially could run the akka application).  It sets these IP 
>> addresses as the seed nodes before bringing up the akka cluster system. The 
>> configuration for these has, up until yesterday always included the 
>> akka.cluster.auto-down-unreachable-after configuration setting.  And it has 
>> always worked.  Furthermore, it supports two very critical requirements: 
>>
>> a) an instance of the application can be removed at any time, due to 
>> scaling or rolling updates
>> b) an instance of the application can be added at any time, due to 
>> scaling or rolling updates
>>
>> On the advice of an Akka expert on the Gitter channel, I removed the 
>> auto-down-unreachable-after setting, which, as documented, is dangerous for 
>> production.  As a result the system no longer supports rolling updates.  A 
>> rolling update occurs thus:  a new version of the application is deployed 
>> (a new ECS task definition is created with a new Docker image).  The ECS 
>> service launches a new task (Docker container running on an available host) 
>> and once that container becomes stable, it kills one of the remaining 
>> instances (cluster members) to bring the number of instances to some 
>> configured value.  
>>
>> When this happens, akka-cluster becomes very unhappy and becomes 
>> unresponsive.  Without the auto-down-unreachable-after setting, it keeps 
>> trying to talk to the old cluster members. which is no longer present.  It 
>> appears to NOT recover from this.  There is a constant barrage of messages 
>> of the form:
>>
>> [DEBUG] [08/04/2016 00:19:27.126] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.140] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:1

Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-04 Thread Eric Swenson
While I'm in the process of implementing your proposed solution, I did want 
to make sure I understood why I'm seeing these failures when a node is taken 
offline, auto-down is disabled, and no one is handling the UnreachableMember 
message.  Let me try to explain what I think is happening 
and perhaps you (or someone else who knows more about this than I) can 
confirm or refute.

In the case of akka-cluster-sharding, a shard might exist on the 
unreachable node.  Since the node is not yet marked as "down", the cluster 
simply cannot handle an incoming message for that shard.  To create another 
sharded actor on an available cluster node might duplicate the unreachable 
node state.  In the case of akka-persistence actors, even though a new 
shard actor could resurrect any journaled state, we cannot be sure that the 
old unreachable node might not at any time, add other events to the 
journal, or come online and try to continue operating on the shard.

Is that the reason why I see the following behavior:  NodeA is online. 
 NodeB comes online and joins the cluster.  A request comes in from 
akka-http and is sent to the shard region.  It goes to NodeA which creates 
an actor to handle the sharded object.  NodeA is taken offline (unbeknownst 
to the akka-cluster).  Another message for the above-mentioned shard comes 
in from akka-http and is sent to the shard region. The shard region can't 
reach NodeA.  NodeA isn't marked as down.  So the shard region cannot 
create another actor (on an available Node). It can only wait (until 
timeout) for NodeA to become reachable.  Since, in my scenario, NodeA will 
never become reachable and NodeB is the only one online, all requests for 
old shards timeout.

If the above logic is true, I have one last issue:  In the above scenario, 
if a message comes into the shard region for a shard that WOULD HAVE BEEN 
allocated to NodeA but has never yet been assigned to an actor on NodeA, 
and NodeA is unreachable, why can't it simply be assigned to another Node? 
 Is it because the shard-to-node algorithm is fixed (by default) and there 
is no dynamic ability to "reassign" to an available Node? 

Thanks again.  -- Eric

On Wednesday, August 3, 2016 at 7:00:42 PM UTC-7, Justin du coeur wrote:
>
> The keyword here is "auto".  Autodowning is an *incredibly braindead* 
> algorithm for dealing with nodes coming out of service, and if you use it 
> in production you more or less guarantee disaster, because that algorithm 
> can't cope with cluster partition.  You *do* need to deal with downing, but 
> you have to get something smarter than that.
>
> Frankly, if you're already hooking into AWS, I *suspect* the best approach 
> is to leverage that -- when a node goes offline, you have some code to 
> detect that through the ECS APIs, react to it, and manually down that node. 
>  (I'm planning on something along those lines for my system, but haven't 
> actually tried yet.)  But whether you do that or something else, you've got 
> to add *something* that does downing.
>
> I believe the official party line is "Buy a Lightbend Subscription", 
> through which you can get their Split Brain Resolver, which is a fairly 
> battle-hardened module for dealing with this problem.  That's not strictly 
> necessary, but you *do* need to have a reliable solution...
>
> On Wed, Aug 3, 2016 at 8:42 PM, Eric Swenson wrote:
>
>> We have an akka-cluster/sharding application deployed an AWS/ECS, where 
>> each instance of the application is a Docker container.  An ECS service 
>> launches N instances of the application based on configuration data.  It is 
>> not possible to know, for certain, the IP addresses of the cluster 
>> members.  Upon startup, before the AkkaSystem is created, the code 
>> currently polls AWS and determines the IP addresses of all the Docker hosts 
>> (which potentially could run the akka application).  It sets these IP 
>> addresses as the seed nodes before bringing up the akka cluster system. The 
>> configuration for these has, up until yesterday always included the 
>> akka.cluster.auto-down-unreachable-after configuration setting.  And it has 
>> always worked.  Furthermore, it supports two very critical requirements:
>>
>> a) an instance of the application can be removed at any time, due to 
>> scaling or rolling updates
>> b) an instance of the application can be added at any time, due to 
>> scaling or rolling updates
>>
>> On the advice of an Akka expert on the Gitter channel, I removed the 
>> auto-down-unreachable-after setting, which, as documented, is dangerous for 
>> production.  As a result the system no longer supports rolling updates.  A 
>> rolling update occurs thus:  a 

Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-05 Thread Eric Swenson
Thanks Justin.  Makes good sense.  And I believe you've explained the lock 
up -- the shard coordinator singleton was probably on the unreachable node 
(actually, it must have been, because prior to the second node's coming up, 
the first node was the only node) and since the cluster-of-two determined 
the first node to be unreachable, there was no shard coordinator accessible 
until the first node was "downed" and the shard coordinator moved to the 
second node.  

I appreciate all your insight. -- Eric

On Thursday, August 4, 2016 at 3:04:12 PM UTC-7, Justin du coeur wrote:
>
> It does do reassignment -- but it has to know to do that.  Keep in mind 
> that "down" is the master switch here: until the node is downed, the rest 
> of the system doesn't *know* that NodeA should be avoided.  I haven't dug 
> into that particular code, but I assume from what you're saying that the 
> allocation algorithm doesn't take unreachability into account when choosing 
> where to allocate the shard, just up/down.  I suspect that unreachability 
> is too local and transient to use as the basis for these allocations.
>
> Keep in mind that you're looking at this from a relatively all-knowing 
> global perspective, but each node is working from a very localized and 
> imperfect one.  All it knows is "I can't currently reach NodeA".  It has no 
> a priori way of knowing whether NodeA has been taken offline (so it should 
> be avoided), or there's simply been a transient network glitch between here 
> and there (so things are *mostly* business as usual).  Downing is how you 
> tell it, "No, really, stop using this node"; until then, most of the code 
> assumes that the more-common transient situation is the case.  It's 
> *probably* possible to take unreachability into account in the case you're 
> describing, but it doesn't surprise me if that's not true.
>
> Also, keep in mind that, IIRC, there are a few cluster singletons involved 
> here, at least behind the scenes.  If NodeA currently owns one of the key 
> singletons (such as the ShardCoordinator), and it hasn't been downed, I 
> imagine the rest of the cluster is going to *quickly* lock up, because the 
> result is that nobody is authorized to make these sorts of allocation 
> decisions.
>
> All that said -- keep in mind, I'm just a user of this stuff, and am 
> talking at the edges of my knowledge.  Konrad's the actual expert...
>
> On Thu, Aug 4, 2016 at 4:59 PM, Eric Swenson  > wrote:
>
>> While I'm in the process of implementing your proposed solution, I did 
>> want to make sure I understood why I'm seeing the failures I'm seeing when 
>> a node is taken offline, auto-down is disabled, and no one is handling the 
>> UnreachableNode message.  Let me try to explain what I think is happening 
>> and perhaps you (or someone else who knows more about this than I) can 
>> confirm or refute.
>>
>> In the case of akka-cluster-sharding, a shard might exist on the 
>> unreachable node.  Since the node is not yet marked as "down", the cluster 
>> simply cannot handle an incoming message for that shard.  To create another 
>> sharded actor on an available cluster node might duplicate the unreachable 
>> node state.  In the case of akka-persistence actors, even though a new 
>> shard actor could resurrect any journaled state, we cannot be sure that the 
>> old unreachable node might not at any time, add other events to the 
>> journal, or come online and try to continue operating on the shard.
>>
>> Is that the reason why I see the following behavior:  NodeA is online.  
>> NodeB comes online and joins the cluster.  A request comes in from 
>> akka-http and is sent to the shard region.  It goes to NodeA which creates 
>> an actor to handle the sharded object.  NodeA is taken offline (unbeknownst 
>> to the akka-cluster).  Another message for the above-mentioned shard comes 
>> in from akka-http and is sent to the shard region. The shard region can't 
>> reach NodeA.  NodeA isn't marked as down.  So the shard region cannot 
>> create another actor (on an available Node). It can only wait (until 
>> timeout) for NodeA to become reachable.  Since, in my scenario, NodeA will 
>> never become reachable and NodeB is the only one online, all requests for 
>> old shards timeout.
>>
>> If the above logic is true, I have one last issue:  In the above 
>> scenario, if a message comes into the shard region for a shard that WOULD 
>> HAVE BEEN allocated to NodeA but has never yet been assigned to an actor on 
>> NodeA, and NodeA is unreachable, why 

Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-05 Thread Eric Swenson
Yes, thanks. I'll take a look at the default implementation and explore 
possible alternative implementations. I suspect, however, that the solution 
I've now implemented, which "downs" unreachable nodes when the AWS/ECS 
cluster says they are no longer there, will address my issues.  
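
For anyone else looking at the same thing: the hook Johan mentions below is the 
ShardCoordinator.ShardAllocationStrategy trait.  A minimal sketch against the 
Akka 2.4-era API (not our actual implementation, just the shape of it -- it 
allocates new shards to the region that currently hosts the fewest shards and 
never rebalances):

import scala.collection.immutable
import scala.concurrent.Future

import akka.actor.ActorRef
import akka.cluster.sharding.ShardCoordinator.ShardAllocationStrategy
import akka.cluster.sharding.ShardRegion.ShardId

class LeastShardsAllocationStrategy extends ShardAllocationStrategy {

  // Allocate a new shard to whichever region currently hosts the fewest shards
  // (assumes at least one region has registered with the coordinator).
  override def allocateShard(
      requester: ActorRef,
      shardId: ShardId,
      currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[ShardId]]): Future[ActorRef] = {
    val (regionWithFewestShards, _) =
      currentShardAllocations.minBy { case (_, shards) => shards.size }
    Future.successful(regionWithFewestShards)
  }

  // Never hand any shards back for rebalancing in this sketch.
  override def rebalance(
      currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[ShardId]],
      rebalanceInProgress: Set[ShardId]): Future[Set[ShardId]] =
    Future.successful(Set.empty)
}

If I understand the API correctly, an instance of this is passed to the 
ClusterSharding.start overload that takes an allocationStrategy (and a 
handOffStopMessage).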

On Friday, August 5, 2016 at 2:38:54 AM UTC-7, Johan Andrén wrote:
>
> You can however implement your own 
> akka.cluster.sharding.ShardCoordinator.ShardAllocationStrategy which will 
> allow you to take whatever you want into account, and deal with the 
> consequences thereof ofc ;) 
>
> --
> Johan
>
> On Friday, August 5, 2016 at 12:04:12 AM UTC+2, Justin du coeur wrote:
>>
>> It does do reassignment -- but it has to know to do that.  Keep in mind 
>> that "down" is the master switch here: until the node is downed, the rest 
>> of the system doesn't *know* that NodeA should be avoided.  I haven't dug 
>> into that particular code, but I assume from what you're saying that the 
>> allocation algorithm doesn't take unreachability into account when choosing 
>> where to allocate the shard, just up/down.  I suspect that unreachability 
>> is too local and transient to use as the basis for these allocations.
>>
>> Keep in mind that you're looking at this from a relatively all-knowing 
>> global perspective, but each node is working from a very localized and 
>> imperfect one.  All it knows is "I can't currently reach NodeA".  It has no 
>> a priori way of knowing whether NodeA has been taken offline (so it should 
>> be avoided), or there's simply been a transient network glitch between here 
>> and there (so things are *mostly* business as usual).  Downing is how you 
>> tell it, "No, really, stop using this node"; until then, most of the code 
>> assumes that the more-common transient situation is the case.  It's 
>> *probably* possible to take unreachability into account in the case you're 
>> describing, but it doesn't surprise me if that's not true.
>>
>> Also, keep in mind that, IIRC, there are a few cluster singletons 
>> involved here, at least behind the scenes.  If NodeA currently owns one of 
>> the key singletons (such as the ShardCoordinator), and it hasn't been 
>> downed, I imagine the rest of the cluster is going to *quickly* lock up, 
>> because the result is that nobody is authorized to make these sorts of 
>> allocation decisions.
>>
>> All that said -- keep in mind, I'm just a user of this stuff, and am 
>> talking at the edges of my knowledge.  Konrad's the actual expert...
>>
>> On Thu, Aug 4, 2016 at 4:59 PM, Eric Swenson > > wrote:
>>
>>> While I'm in the process of implementing your proposed solution, I did 
>>> want to make sure I understood why I'm seeing the failures I'm seeing when 
>>> a node is taken offline, auto-down is disabled, and no one is handling the 
>>> UnreachableNode message.  Let me try to explain what I think is happening 
>>> and perhaps you (or someone else who knows more about this than I) can 
>>> confirm or refute.
>>>
>>> In the case of akka-cluster-sharding, a shard might exist on the 
>>> unreachable node.  Since the node is not yet marked as "down", the cluster 
>>> simply cannot handle an incoming message for that shard.  To create another 
>>> sharded actor on an available cluster node might duplicate the unreachable 
>>> node state.  In the case of akka-persistence actors, even though a new 
>>> shard actor could resurrect any journaled state, we cannot be sure that the 
>>> old unreachable node might not at any time, add other events to the 
>>> journal, or come online and try to continue operating on the shard.
>>>
>>> Is that the reason why I see the following behavior:  NodeA is online.  
>>> NodeB comes online and joins the cluster.  A request comes in from 
>>> akka-http and is sent to the shard region.  It goes to NodeA which creates 
>>> an actor to handle the sharded object.  NodeA is taken offline (unbeknownst 
>>> to the akka-cluster).  Another message for the above-mentioned shard comes 
>>> in from akka-http and is sent to the shard region. The shard region can't 
>>> reach NodeA.  NodeA isn't marked as down.  So the shard region cannot 
>>> create another actor (on an available Node). It can only wait (until 
>>> timeout) for NodeA to become reachable.  Since, in my scenario, NodeA will 
>>> never become reachable and NodeB is the only one online, all requests

[akka-user] Akka cluster node shutting down in the middle of processing requests

2016-08-05 Thread Eric Swenson
Our akka-cluster-sharding service went down last night. In the middle of 
processing akka-http requests (and sending these requests to a sharding 
region for processing) on a 10-node cluster, one of the requests got an 
"ask timeout" exception:

[ERROR] [08/05/2016 05:04:51.077] 
[ClusterSystem-akka.actor.default-dispatcher-16] 
[akka.actor.ActorSystemImpl(ClusterSystem)] Error during processing of 
request HttpRequest(HttpMetho\

d(GET),http://eim.staging.example.com/eim/check/a0afbad4-69a8-4487-a6fb-f3e884a8d0aa?cache=false&timeout=15,List(Host:
 
eim.staging.example.com, X-Real-Ip: 10.0.3.9, X-Forwarded-Fo\

r: 10.0.3.9, Connection: upgrade, Accept: */*, Accept-Encoding: gzip, 
deflate, compress, Authorization: Bearer 
aaa-1ZdrFpgR5AyOGa69Q2s3fwv_y5zz9UCL5F85Hc, User-Agent: python-requests\

/2.2.1 CPython/2.7.6 Linux/3.13.0-74-generic, Timeout-Access: 
),HttpEntity.Strict(application/json,),HttpProtocol(HTTP/1.1))  
   

akka.pattern.AskTimeoutException: 
Recipient[Actor[akka://ClusterSystem/system/sharding/ExperimentInstance#-1675878517]]
 
had already been terminated. Sender[null] sent the message of t\

ype "com.genecloud.eim.ExperimentInstance$Commands$CheckExperiment".   

 

As the error message says, the reason for the ask timeout was that the 
actor (sharding region?) had been terminated.  

Looking back in the logs, I see that everything was going well for quite 
some time, until the following:


[DEBUG] [08/05/2016 05:04:50.480] 
[ClusterSystem-akka.actor.default-dispatcher-2] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/IO-TCP/selectors/$a/955] 
Resolving login.dev.example.com before connecting

[DEBUG] [08/05/2016 05:04:50.480] 
[ClusterSystem-akka.actor.default-dispatcher-2] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/IO-TCP/selectors/$a/955] 
Attempting connection to [login.dev.example.com/52.14.30.100:443]

[DEBUG] [08/05/2016 05:04:50.481] 
[ClusterSystem-akka.actor.default-dispatcher-6] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/IO-TCP/selectors/$a/955] 
Connection established to [login.dev.example.com:443]

[DEBUG] [08/05/2016 05:04:50.481] 
[ClusterSystem-akka.actor.default-dispatcher-5] 
[akka://ClusterSystem/system/IO-TCP] no longer watched by 
Actor[akka://ClusterSystem/user/StreamSupervisor-7/$$H#-1778344261]

[DEBUG] [08/05/2016 05:04:50.481] 
[ClusterSystem-akka.actor.default-dispatcher-5] 
[akka://ClusterSystem/system/IO-TCP/selectors/$a/955] now watched by 
Actor[akka://ClusterSystem/user/StreamSupervisor-7/$$H#-1778344261]

[DEBUG] [08/05/2016 05:04:50.483] 
[ClusterSystem-akka.actor.default-dispatcher-17] 
[akka://ClusterSystem/user/StreamSupervisor-7] now supervising 
Actor[akka://ClusterSystem/user/Strea
mSupervisor-7/flow-993-1-unknown-operation#819117275]

[DEBUG] [08/05/2016 05:04:50.484] 
[ClusterSystem-akka.actor.default-dispatcher-2] 
[akka://ClusterSystem/user/StreamSupervisor-7/flow-993-1-unknown-operation] 
started (akka.stream.impl.io.TLSActor@66dd942c)

[INFO] [08/05/2016 05:04:50.526] 
[ClusterSystem-akka.actor.default-dispatcher-17] 
[akka.cluster.Cluster(akka://ClusterSystem)] Cluster Node 
[akka.tcp://ClusterSystem@10.0.3.103:2552] - Shutting down myself

[INFO] [08/05/2016 05:04:50.527] 
[ClusterSystem-akka.actor.default-dispatcher-17] 
[akka.cluster.Cluster(akka://ClusterSystem)] Cluster Node 
[akka.tcp://ClusterSystem@10.0.3.103:2552] - Shutting down...

[DEBUG] [08/05/2016 05:04:50.528] 
[ClusterSystem-akka.actor.default-dispatcher-3] 
[akka://ClusterSystem/system/cluster/core] stopping

[DEBUG] [08/05/2016 05:04:50.528] 
[ClusterSystem-akka.actor.default-dispatcher-16] 
[akka://ClusterSystem/system/cluster/heartbeatReceiver] stopped

[DEBUG] [08/05/2016 05:04:50.539] 
[ClusterSystem-akka.actor.default-dispatcher-16] 
[akka://ClusterSystem/system/cluster/metrics] stopped

[DEBUG] [08/05/2016 05:04:50.540] 
[ClusterSystem-akka.actor.default-dispatcher-18] 
[akka://ClusterSystem/system/cluster] stopping

[INFO] [08/05/2016 05:04:50.573] 
[ClusterSystem-akka.actor.default-dispatcher-18] 
[akka.tcp://ClusterSystem@10.0.3.103:2552/system/sharding/ExperimentInstanceCoordinator]
 
Self removed\

, stopping ClusterSingletonManager

[DEBUG] [08/05/2016 05:04:50.573] 
[ClusterSystem-akka.actor.default-dispatcher-18] 
[akka://ClusterSystem/system/sharding/ExperimentInstanceCoordinator] 
stopping


As you can see from the "Resolving..." and "Attempting connection" log 
messages, an actor was happily sending off an HTTP request to another 
microservice using TLS, but just after this point the cluster node said it 
was "Shutting down myself".  This killed the ClusterSingletonManager.  From 
that point on, all incoming requests to the shard region were rejected 
(because it was down).


Now the node was NOT down, and there are tons of messages after this point 
in the log -- and any request t

[akka-user] Re: Akka cluster node shutting down in the middle of processing requests

2016-08-05 Thread Eric Swenson
Also, what does this message mean?  I saw it earlier on in the logs:

[DEBUG] [08/05/2016 05:04:50.450] 
[ClusterSystem-akka.actor.default-dispatcher-17] 
[akka://ClusterSystem/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FClusterSystem%4010.0.3.176%3A2552-6]
 
unhandled message from 
Actor[akka://ClusterSystem/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FClusterSystem%4010.0.3.176%3A2552-6#1200432312]:
 
Ungate

On Friday, August 5, 2016 at 12:58:55 PM UTC-7, Eric Swenson wrote:
>
> Our akka-cluster-sharding service went down last night. In the middle of 
> processing akka-http requests (and sending these requests to a sharding 
> region for processing) on a 10-node cluster, one of the requests got an 
> "ask timeout" exception:
>
> [ERROR] [08/05/2016 05:04:51.077] 
> [ClusterSystem-akka.actor.default-dispatcher-16] 
> [akka.actor.ActorSystemImpl(ClusterSystem)] Error during processing of 
> request HttpRequest(HttpMetho\
>
> d(GET),
> http://eim.staging.example.com/eim/check/a0afbad4-69a8-4487-a6fb-f3e884a8d0aa?cache=false&timeout=15,List(Host:
>  
> eim.staging.example.com, X-Real-Ip: 10.0.3.9, X-Forwarded-Fo\
>
> r: 10.0.3.9, Connection: upgrade, Accept: */*, Accept-Encoding: gzip, 
> deflate, compress, Authorization: Bearer 
> aaa-1ZdrFpgR5AyOGa69Q2s3fwv_y5zz9UCL5F85Hc, User-Agent: python-requests\
>
> /2.2.1 CPython/2.7.6 Linux/3.13.0-74-generic, Timeout-Access: 
> ),HttpEntity.Strict(application/json,),HttpProtocol(HTTP/1.1))  
>
>
> akka.pattern.AskTimeoutException: 
> Recipient[Actor[akka://ClusterSystem/system/sharding/ExperimentInstance#-1675878517]]
>  
> had already been terminated. Sender[null] sent the message of t\
>
> ype "com.genecloud.eim.ExperimentInstance$Commands$CheckExperiment".   
> 
>  
>
> As the error message says, the reason for the ask timeout was because the 
> actor (sharding region?) had been terminated.  
>
> Looking back in the logs, I see that everything was going well for quite 
> some time, until the following:
>
>
> [DEBUG] [08/05/2016 05:04:50.480] 
> [ClusterSystem-akka.actor.default-dispatcher-2] [akka.tcp://
> ClusterSystem@10.0.3.103:2552/system/IO-TCP/selectors/$a/955] Resolving 
> login.dev.example.com before connecting
>
> [DEBUG] [08/05/2016 05:04:50.480] 
> [ClusterSystem-akka.actor.default-dispatcher-2] [akka.tcp://
> ClusterSystem@10.0.3.103:2552/system/IO-TCP/selectors/$a/955] Attempting 
> connection to [login.dev.example.com/52.14.30.100:443]
>
> [DEBUG] [08/05/2016 05:04:50.481] 
> [ClusterSystem-akka.actor.default-dispatcher-6] [akka.tcp://
> ClusterSystem@10.0.3.103:2552/system/IO-TCP/selectors/$a/955] Connection 
> established to [login.dev.example.com:443]
>
> [DEBUG] [08/05/2016 05:04:50.481] 
> [ClusterSystem-akka.actor.default-dispatcher-5] 
> [akka://ClusterSystem/system/IO-TCP] no longer watched by 
> Actor[akka://ClusterSystem/user/StreamSupervisor-7/$$H#-1778344261]
>
> [DEBUG] [08/05/2016 05:04:50.481] 
> [ClusterSystem-akka.actor.default-dispatcher-5] 
> [akka://ClusterSystem/system/IO-TCP/selectors/$a/955] now watched by 
> Actor[akka://ClusterSystem/user/StreamSupervisor-7/$$H#-1778344261]
>
> [DEBUG] [08/05/2016 05:04:50.483] 
> [ClusterSystem-akka.actor.default-dispatcher-17] 
> [akka://ClusterSystem/user/StreamSupervisor-7] now supervising 
> Actor[akka://ClusterSystem/user/Strea
> mSupervisor-7/flow-993-1-unknown-operation#819117275]
>
> [DEBUG] [08/05/2016 05:04:50.484] 
> [ClusterSystem-akka.actor.default-dispatcher-2] 
> [akka://ClusterSystem/user/StreamSupervisor-7/flow-993-1-unknown-operation] 
> started (akka.stream.impl.io.TLSActor@66dd942c)
>
> [INFO] [08/05/2016 05:04:50.526] 
> [ClusterSystem-akka.actor.default-dispatcher-17] 
> [akka.cluster.Cluster(akka://ClusterSystem)] Cluster Node [akka.tcp://
> ClusterSystem@10.0.3.103:2552] - Shutting down myself
>
> [INFO] [08/05/2016 05:04:50.527] 
> [ClusterSystem-akka.actor.default-dispatcher-17] 
> [akka.cluster.Cluster(akka://ClusterSystem)] Cluster Node [akka.tcp://
> ClusterSystem@10.0.3.103:2552] - Shutting down...
>
> [DEBUG] [08/05/2016 05:04:50.528] 
> [ClusterSystem-akka.actor.default-dispatcher-3] 
> [akka://ClusterSystem/system/cluster/core] stopping
>
> [DEBUG] [08/05/2016 05:04:50.528] 
> [ClusterSystem-akka.actor.default-dispatcher-16] 
> [akka://ClusterSystem/system/cluster/heartbeatReceiver] stopped
>
> [DEBUG] [08/05/2016 05:04:50.539] 
> [ClusterSystem-akka.actor.default-dispatcher-16] 
> [akka://ClusterSystem/system/cluster/metrics] stopped
>
> [DE

[akka-user] Re: Akka cluster node shutting down in the middle of processing requests

2016-08-05 Thread Eric Swenson
One more clue as to the cluster daemon's shutting itself down.  Earlier in 
the logs (although prior to several successful requests being handled), I 
find this:

[INFO] [08/05/2016 05:04:45.042] 
[ClusterSystem-akka.actor.default-dispatcher-5] 
[akka.cluster.Cluster(akka://ClusterSystem)] Cluster Node 
[akka.tcp://ClusterSystem@10.0.3.103:2552] - Leader can currently not 
perform its duties, reachability status: 
[akka.tcp://ClusterSystem@10.0.3.103:2552 -> 
akka.tcp://ClusterSystem@10.0.3.102:2552: Unreachable [Unreachable] (16), 
akka.tcp://ClusterSystem@10.0.3.103:2552 -> 
akka.tcp://ClusterSystem@10.0.3.104:2552: Unreachable [Unreachable] (17), 
akka.tcp://ClusterSystem@10.0.3.103:2552 -> 
akka.tcp://ClusterSystem@10.0.3.176:2552: Unreachable [Unreachable] (18), 
akka.tcp://ClusterSystem@10.0.3.103:2552 -> 
akka.tcp://ClusterSystem@10.0.3.240:2552: Unreachable [Unreachable] (19)], 
member status: [akka.tcp://ClusterSystem@10.0.3.102:2552 Up seen=false, 
akka.tcp://ClusterSystem@10.0.3.103:2552 Up seen=true, 
akka.tcp://ClusterSystem@10.0.3.104:2552 Up seen=false, 
akka.tcp://ClusterSystem@10.0.3.176:2552 Up seen=false, 
akka.tcp://ClusterSystem@10.0.3.240:2552 Up seen=false]


All these log messages are from the node at IP address 10.0.3.103.  So I'm 
assuming this means the Leader is THIS node.  It seems to be saying that it 
cannot reach any of the other cluster members, and because of that, it cannot 
do its job. This probably accounts for why it decided to shut itself down.  


There were 6 AWS EC2 instances running this application at the time (not 
10, as I said in an earlier message).  However, the cluster membership 
above only shows 5 members at the time of this log message.  I'm not sure 
what happened to the other one.  


[akka.tcp://ClusterSystem@10.0.3.102:2552 Up seen=false,

 akka.tcp://ClusterSystem@10.0.3.103:2552 Up seen=true,

 akka.tcp://ClusterSystem@10.0.3.104:2552 Up seen=false,

 akka.tcp://ClusterSystem@10.0.3.176:2552 Up seen=false,

 akka.tcp://ClusterSystem@10.0.3.240:2552 Up seen=false]


I'm going to assume, not having any other evidence, that AWS/EC2 
experienced some network issue at the time in question, and consequently 
this node was not able to talk to the rest of the cluster and therefore 
this member (the leader) shut down.  I only have logs for one of the other 
5 cluster nodes, so I will check to see what that other node thought about 
all this at the time.  But I'm not very comfortable with the robustness of 
akka here.  I would have thought that the other cluster members could have, 
perhaps, noticed that the Leader was unreachable (assuming they couldn't 
reach it), and because I had auto-down-unreachable-after set (yes, yes, 
I've since replaced this with manual downing logic -- but that is on our 
dev deployment and this issue happened on our staging deployment), elected 
a new leader and carried on -- even if this node became catatonic.  


This raises another point:  When the ClusterDaemon shuts itself down, it 
would appear that I should handle some event here (not sure how to do 
that), to cause the entire JVM to terminate.  This would cause AWS/ECS to 
launch a new instance to join the remaining cluster.


Thoughts?  -- Eric







Re: [akka-user] Re: from akka.persistence.journal.ReplayFilter Invalid replayed event [1] in buffer from old writer

2016-08-07 Thread Eric Swenson
Thanks, Patrik.  That is precisely what happened. I had been using 
auto-down-unreachable-after, and while this appeared to work fine in the 
normal "rolling update" mode of deploying nodes to our cluster, it had 
split-brain effects when there were transient cases of unreachability.  I 
have since replaced the use of auto-down-unreachable-after with an 
implementation that, upon detecting an UnreachableMember cluster event, 
queries AWS/ECS to determine whether the node is still present in the 
cluster (and service).  If not, it marks the node down; otherwise, it 
doesn't.  So far, this seems to work just fine.  (And yes, we did clean up 
the duplicate persistence records prior to restarting the cluster). 
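
Roughly, that approach looks like the following sketch.  EcsMembership is a 
hypothetical stand-in for the ECS lookup (in reality it would call the AWS 
SDK), not a real Akka or AWS API; the rest follows the standard cluster-event 
subscription pattern:

import akka.actor.{ Actor, ActorLogging, Address, Props }
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.{ InitialStateAsEvents, UnreachableMember }

// Placeholder: in reality this asks AWS/ECS whether the instance behind the
// address is still registered with the service.
object EcsMembership {
  def isStillRegistered(address: Address): Boolean = ???
}

class EcsDowningActor extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents, classOf[UnreachableMember])

  override def postStop(): Unit =
    cluster.unsubscribe(self)

  def receive = {
    case UnreachableMember(member) =>
      if (EcsMembership.isStillRegistered(member.address)) {
        // Probably a transient glitch; leave it alone so it can become reachable again.
        log.info("{} unreachable but still registered in ECS; not downing", member.address)
      } else {
        // The instance is gone for good; mark it down so the cluster can move on.
        log.info("{} no longer registered in ECS; downing it", member.address)
        cluster.down(member.address)
      }
  }
}

object EcsDowningActor {
  def props: Props = Props(new EcsDowningActor)
}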

And yes, I'm aware of the Split Brain Resolver from Lightbend. I'm sure it 
would work well too.  At this point in our journey, where we have no 
revenue and little funds, we're looking primarily to open source and 
roll-our-own solutions. But as we get into production and have customers, 
we'll likely take advantage of Lightbend services and products.

Thanks again. -- Eric

On Sunday, August 7, 2016 at 9:48:08 AM UTC-7, Patrik Nordwall wrote:
>
> It's typically caused by multiple persistent actors with the same 
> persistenceId running at the same time. E.g. because there were a network 
> split and your cluster was split into two separate clusters and thereby 
> starting multiple persistent actors. That is why we so strongly recommend 
> against using auto-downing and instead recommend Split Brain Resolver 
> <http://doc.akka.io/docs/akka/rp-current/scala/split-brain-resolver.html> 
> or similar.
>
> Now you need to cleanup the corrupted data before starting the system.
>
> http://doc.akka.io/docs/akka/2.4/scala/cluster-sharding.html#Removal_of_Internal_Cluster_Sharding_Data
>
> Have you changed the default mode=repair-by-discard-old in the config of 
> the replay filter?
>
> https://github.com/akka/akka/blob/master/akka-persistence/src/main/resources/reference.conf#L131
>
> Regards,
> Patrik
>
> On Tue, Aug 2, 2016 at 9:44 PM, Eric Swenson  > wrote:
>
>> I'm getting this error consistently now, and don't know why this is 
>> happening nor what to do about it.  I form the persistentId this way:
>>
>> override def persistenceId: String = self.path.parent.parent.name + 
>> "-" + self.path.name
>>
>> So I don't see how I could have two persisters with the same id.  I'm 
>> unable to bring up my akka cluster due to this error.
>>
>> Any suggestions?  I'm running akka 2.4.8 with akka.persistence:
>>
>> journal.plugin = "cassandra-journal"
>> snapshot-store.plugin = "cassandra-snapshot-store"
>>
>>
>> On Monday, April 25, 2016 at 3:34:21 AM UTC-7, Tim Pigden wrote:
>>>
>>> Hi
>>> I'm getting this message. I'm probably doing something wrong but any 
>>> idea what that might be? I know what messages I'm persisting and this 
>>> particular test is one in which I kill off my persistor and restart it.
>>> Or does it indicate the message is failing to deserialize or something 
>>> like that
>>>
>>> 2016-04-25 10:33:47,570 - ERROR - from 
>>> com.optrak.opkakka.ddd.persistence.SimplePersistor Persistence failure when 
>>> replaying events for persistenceId [shd-matrix-testId]. Last known sequence 
>>> number [0] 
>>> java.lang.IllegalStateException: Invalid replayed event [1] in buffer from 
>>> old writer [f6bd09c4-1f1c-4710-8cf6-c6f0776f39d3] with persistenceId 
>>> [shd-matrix-testId]
>>>at 
>>> akka.persistence.journal.ReplayFilter$$anonfun$receive$1.applyOrElse(ReplayFilter.scala:125)
>>>at akka.actor.Actor$class.aroundReceive(Actor.scala:480)
>>>at 
>>> akka.persistence.journal.ReplayFilter.aroundReceive(ReplayFilter.scala:50)
>>>
>>>
>>>
>>> Any suggests much appreciated!
>>>
>>>
>>>
>>>

Re: [akka-user] Re: from akka.persistence.journal.ReplayFilter Invalid replayed event [1] in buffer from old writer

2016-08-08 Thread Eric Swenson
We have not changed the default mode=repair-by-discard-old config value for the 
replay-filter.  Should we?  — Eric

> On Aug 7, 2016, at 9:47 AM, Patrik Nordwall  wrote:
> 
> It's typically caused by multiple persistent actors with the same 
> persistenceId running at the same time. E.g. because there were a network 
> split and your cluster was split into two separate clusters and thereby 
> starting multiple persistent actors. That is why we so strongly recommend 
> against using auto-downing and instead recommend Split Brain Resolver 
> <http://doc.akka.io/docs/akka/rp-current/scala/split-brain-resolver.html> or 
> similar.
> 
> Now you need to cleanup the corrupted data before starting the system.
> http://doc.akka.io/docs/akka/2.4/scala/cluster-sharding.html#Removal_of_Internal_Cluster_Sharding_Data
>  
> <http://doc.akka.io/docs/akka/2.4/scala/cluster-sharding.html#Removal_of_Internal_Cluster_Sharding_Data>
> 
> Have you changed the default mode=repair-by-discard-old in the config of the 
> replay filter?
> https://github.com/akka/akka/blob/master/akka-persistence/src/main/resources/reference.conf#L131
>  
> <https://github.com/akka/akka/blob/master/akka-persistence/src/main/resources/reference.conf#L131>
> 
> Regards,
> Patrik
> 
> On Tue, Aug 2, 2016 at 9:44 PM, Eric Swenson  <mailto:e...@swenson.org>> wrote:
> I'm getting this error consistently now, and don't know why this is happening 
> nor what to do about it.  I form the persistentId this way:
> 
> override def persistenceId: String = self.path.parent.parent.name + "-" + self.path.name
> 
> So I don't see how I could have two persisters with the same id.  I'm unable 
> to bring up my akka cluster due to this error.
> 
> Any suggestions?  I'm running akka 2.4.8 with akka.persistence:
> 
> journal.plugin = "cassandra-journal"
> snapshot-store.plugin = "cassandra-snapshot-store"
> 
> On Monday, April 25, 2016 at 3:34:21 AM UTC-7, Tim Pigden wrote:
> Hi
> I'm getting this message. I'm probably doing something wrong but any idea 
> what that might be? I know what messages I'm persisting and this particular 
> test is one in which I kill off my persistor and restart it.
> Or does it indicate the message is failing to deserialize or something like 
> that
> 
> 2016-04-25 10:33:47,570 - ERROR - from 
> com.optrak.opkakka.ddd.persistence.SimplePersistor Persistence failure when 
> replaying events for persistenceId [shd-matrix-testId]. Last known sequence 
> number [0] 
> java.lang.IllegalStateException: Invalid replayed event [1] in buffer from 
> old writer [f6bd09c4-1f1c-4710-8cf6-c6f0776f39d3] with persistenceId 
> [shd-matrix-testId]
>at 
> akka.persistence.journal.ReplayFilter$$anonfun$receive$1.applyOrElse(ReplayFilter.scala:125)
>at akka.actor.Actor$class.aroundReceive(Actor.scala:480)
>at 
> akka.persistence.journal.ReplayFilter.aroundReceive(ReplayFilter.scala:50)
> 
> 
> Any suggests much appreciated!
> 
> 
> 
> 
> 
> 
> 
> -- 
> 
> Patrik Nordwall
> Akka Tech Lead
> Lightbend <http://www.lightbend.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
> 
> 
> 

Re: [akka-user] Re: from akka.persistence.journal.ReplayFilter Invalid replayed event [1] in buffer from old writer

2016-08-09 Thread Eric Swenson
Just checked again. No override of that config parameter. Yet, these were, 
indeed, logged as errors not warnings.  -- Eric

On Monday, August 8, 2016 at 11:28:33 PM UTC-7, Patrik Nordwall wrote:
>
>
>
> On Mon, Aug 8, 2016 at 10:06 PM, Eric Swenson  > wrote:
>
>> We have not changed the default mode=repair-by-discard-old config value 
>> for the replay-filter.  Should we?  — Eric
>>
>
> Then I can't understand how you can get the ERROR as in the first message 
> in this thread
>
> IllegalStateException: Invalid replayed event [1] in buffer from old writer
>
> Then this should be logged at WARNING level and it should have discarded 
> the event from the "old" writer.
>
>  
>
>>
>> On Aug 7, 2016, at 9:47 AM, Patrik Nordwall > > wrote:
>>
>> It's typically caused by multiple persistent actors with the same 
>> persistenceId running at the same time. E.g. because there were a network 
>> split and your cluster was split into two separate clusters and thereby 
>> starting multiple persistent actors. That is why we so strongly recommend 
>> against using auto-downing and instead recommend Split Brain Resolver 
>> <http://doc.akka.io/docs/akka/rp-current/scala/split-brain-resolver.html>
>>  or similar.
>>
>> Now you need to cleanup the corrupted data before starting the system.
>>
>> http://doc.akka.io/docs/akka/2.4/scala/cluster-sharding.html#Removal_of_Internal_Cluster_Sharding_Data
>>
>> Have you changed the default mode=repair-by-discard-old in the config of 
>> the replay filter?
>>
>> https://github.com/akka/akka/blob/master/akka-persistence/src/main/resources/reference.conf#L131
>>
>> Regards,
>> Patrik
>>
>> On Tue, Aug 2, 2016 at 9:44 PM, Eric Swenson > > wrote:
>>
>>> I'm getting this error consistently now, and don't know why this is 
>>> happening nor what to do about it.  I form the persistentId this way:
>>>
>>> override def persistenceId: String = self.path.parent.parent.name + 
>>> "-" + self.path.name
>>>
>>> So I don't see how I could have two persisters with the same id.  I'm 
>>> unable to bring up my akka cluster due to this error.
>>>
>>> Any suggestions?  I'm running akka 2.4.8 with akka.persistence:
>>>
>>> journal.plugin = "cassandra-journal"
>>> snapshot-store.plugin = "cassandra-snapshot-store"
>>>
>>>
>>> On Monday, April 25, 2016 at 3:34:21 AM UTC-7, Tim Pigden wrote:
>>>>
>>>> Hi
>>>> I'm getting this message. I'm probably doing something wrong but any 
>>>> idea what that might be? I know what messages I'm persisting and this 
>>>> particular test is one in which I kill off my persistor and restart it.
>>>> Or does it indicate the message is failing to deserialize or something 
>>>> like that
>>>>
>>>> 2016-04-25 10:33:47,570 - ERROR - from 
>>>> com.optrak.opkakka.ddd.persistence.SimplePersistor Persistence failure 
>>>> when replaying events for persistenceId [shd-matrix-testId]. Last known 
>>>> sequence number [0] 
>>>> java.lang.IllegalStateException: Invalid replayed event [1] in buffer from 
>>>> old writer [f6bd09c4-1f1c-4710-8cf6-c6f0776f39d3] with persistenceId 
>>>> [shd-matrix-testId]
>>>>at 
>>>> akka.persistence.journal.ReplayFilter$$anonfun$receive$1.applyOrElse(ReplayFilter.scala:125)
>>>>at akka.actor.Actor$class.aroundReceive(Actor.scala:480)
>>>>at 
>>>> akka.persistence.journal.ReplayFilter.aroundReceive(ReplayFilter.scala:50)
>>>>
>>>>
>>>>
>>>> Any suggests much appreciated!
>>>>
>>>>
>>>>
>>>>
>>>>

Re: [akka-user] Re: Akka cluster node shutting down in the middle of processing requests

2016-08-12 Thread Eric Swenson
Thanks. I've added that code fragment to our application.  
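
For anyone else following the thread, the fragment from that docs section looks 
roughly like this (a sketch; the exact shape may differ between Akka versions): 
when this node has been removed from the cluster, terminate the ActorSystem and 
then exit the JVM, with a hard exit as a fallback if shutdown hangs.

import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.Try

import akka.actor.ActorSystem
import akka.cluster.Cluster

def exitJvmWhenRemoved(system: ActorSystem): Unit =
  Cluster(system).registerOnMemberRemoved {
    // exit the JVM once the ActorSystem has terminated cleanly
    system.registerOnTermination(System.exit(0))
    system.terminate()
    // if shutdown hangs, force the exit after a grace period anyway
    new Thread {
      override def run(): Unit =
        if (Try(Await.ready(system.whenTerminated, 10.seconds)).isFailure)
          System.exit(-1)
    }.start()
  }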

On Thursday, August 11, 2016 at 11:05:12 AM UTC-7, Patrik Nordwall wrote:
>
> I have not looked at the logs but you find answer to your last question in 
> http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html#How_To_Cleanup_when_Member_is_Removed
>
> /Patrik
>
> fre 5 aug. 2016 kl. 22:31 skrev Eric Swenson  >:
>
>> One more clue as to the cluster daemon's shutting itself down.  Earlier 
>> in the logs (although prior to several successful requests being handled), 
>> I find this:
>>
>> [INFO] [08/05/2016 05:04:45.042] 
>> [ClusterSystem-akka.actor.default-dispatcher-5] 
>> [akka.cluster.Cluster(akka://ClusterSystem)] Cluster Node [akka.tcp://
>> ClusterSystem@10.0.3.103:2552] - Leader can currently not perform its 
>> duties, reachability status: [akka.tcp://ClusterSystem@10.0.3.103:2552 
>> -> akka.tcp://ClusterSystem@10.0.3.102:2552: Unreachable [Unreachable] 
>> (16), akka.tcp://ClusterSystem@10.0.3.103:2552 -> akka.tcp://
>> ClusterSystem@10.0.3.104:2552: Unreachable [Unreachable] (17), 
>> akka.tcp://ClusterSystem@10.0.3.103:2552 -> akka.tcp://
>> ClusterSystem@10.0.3.176:2552: Unreachable [Unreachable] (18), 
>> akka.tcp://ClusterSystem@10.0.3.103:2552 -> akka.tcp://
>> ClusterSystem@10.0.3.240:2552: Unreachable [Unreachable] (19)], member 
>> status: [akka.tcp://ClusterSystem@10.0.3.102:2552 Up seen=false, 
>> akka.tcp://ClusterSystem@10.0.3.103:2552 Up seen=true, akka.tcp://
>> ClusterSystem@10.0.3.104:2552 Up seen=false, akka.tcp://
>> ClusterSystem@10.0.3.176:2552 Up seen=false, akka.tcp://
>> ClusterSystem@10.0.3.240:2552 Up seen=false]
>>
>>
>> All these log messages are from the node at IP address 10.0.3.103.  So 
>> I'm assuming this means the Leader is THIS node.  It seems to be saying 
>> that it cannot reach all the other cluster members, and because of that, it 
>> cannot do its job. This probably accounts for why it decided to shut itself 
>> down.  
>>
>>
>> There were 6 AWS EC2 instances running this application at the time (not 
>> 10, as I said in an earlier message).  However, the cluster membership 
>> above, only shows 5 members at the time of this log message.  Not sure what 
>> happened to the other one.  
>>
>>
>> [akka.tcp://ClusterSystem@10.0.3.102:2552 Up seen=false,
>>
>>  akka.tcp://ClusterSystem@10.0.3.103:2552 Up seen=true,
>>
>>  akka.tcp://ClusterSystem@10.0.3.104:2552 Up seen=false,
>>
>>  akka.tcp://ClusterSystem@10.0.3.176:2552 Up seen=false,
>>
>>  akka.tcp://ClusterSystem@10.0.3.240:2552 Up seen=false]
>>
>>
>> I'm going to assume, not having any other evidence, that AWS/EC2 
>> experienced some network issue at the time in question, and consequently 
>> this node was not able to talk to the rest of the cluster and therefore 
>> this member (the leader) shut down.  I only have logs for one of the other 
>> 5 cluster nodes, so I will check to see what that other node thought about 
>> all this at the time.  But I'm not very comfortable with the robustness of 
>> akka here.  I would have thought that the other cluster members could have, 
>> perhaps, noticing that the Leader was unreachable (assuming they couldn't 
>> reach it), and because I had auto-down-unreachable-after set (yes, yes, 
>> I've sense replaced this with manual downing logic -- but that is on our 
>> dev deployment and this issue happened on our staging deployment), elected 
>> a new leader and carried on -- even if this node became catatonic.  
>>
>>
>> This raises another point:  When the ClusterDaemon shuts itself down, it 
>> would appear that I should handle some event here (not sure how to do 
>> that), to cause the entire JVM to terminate.  This would cause AWS/ECS to 
>> launch a new instance to join the remaining cluster.
>>
>>
>> Thoughts?  -- Eric
>>
>>
>>
>>
>>

[akka-user] onUpstreamFinish not getting called

2016-10-07 Thread Eric Swenson
I have a web service which accepts an inbound payload and runs it through an 
akka-streams pipeline that simultaneously computes the MD5 digest of the 
payload and encrypts that payload.  I've implemented a GraphStage that performs 
the crypto; it looks like this:

class CipherStage(key: SecretKey, iv: IvParameterSpec, mode: Int) extends 
GraphStage[FlowShape[ByteString, ByteString]] {
  val in = Inlet[ByteString]("Encryptor.in")
  val out = Outlet[ByteString]("Encryptor.out")
  override val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = 
new GraphStageLogic(shape) {
val cipher: Cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")
cipher.init(mode, key, iv)

setHandler(out, new OutHandler {
  override def onPull(): Unit = {
pull(in)
  }
})

setHandler(in, new InHandler {
  override def onPush(): Unit = {
val chunk = grab(in)
emit(out, ByteString(cipher.update(chunk.toArray)))
pull(in)
  }

  override def onUpstreamFinish(): Unit = {
val chunk = cipher.doFinal()
emit(out, ByteString(chunk))
completeStage()
  }

  override def onUpstreamFailure(ex: Throwable): Unit = {
    log.info(s"onUpstreamFailure: $ex")
failStage(ex)
  }
})
  }
}
I'm using it (and a similar digest-computing stage) like this:

def verifyAndStoreBlock(digest: String, byteStringSource: Source[ByteString, 
Any], userUid: String, sender: ActorRef) = {
  val digestOut = Sink.head[ByteString]
  val blockOut = Sink.head[ByteString]

  val graph: Graph[ClosedShape, (Future[ByteString], Future[ByteString])] = 
GraphDSL.create(digestOut, blockOut)((_,_)) { implicit builder => (dgstOut, 
blkOut) =>
import GraphDSL.Implicits._
val in: Source[ByteString,Any] = byteStringSource
val bcast = builder.add(Broadcast[ByteString](2))
val dgst = new DigestCalculator("MD5")
val encryptor = new CipherStage(blockEncryptionKey, 
blockInitializationVector, Cipher.ENCRYPT_MODE)

in ~> bcast ~> dgst ~> dgstOut
  bcast ~> encryptor ~> blkOut

ClosedShape
  }

  val rg = RunnableGraph.fromGraph[(Future[ByteString], 
Future[ByteString])](graph)

  implicit val materializer = ActorMaterializer()
  val (dgstOutFuture, blkOutFuture) = rg.run()
  implicit val ec = context.dispatcher
  for {
dgst <- dgstOutFuture
blkOut <- blkOutFuture
  } {
val verify = byteStringToHexString(dgst)
if (verify == digest) {
  val dataStoreActor = context.actorOf(DataStoreActor.props(dataStore))
  dataStoreActor ! DataStoreActor.Messages.SaveBlock(digest, blkOut)
  blockStoreLogger.logEvent(PutBlockEvent(digest, userUid))
  sender ! Right(())
} else {
  log.warning(s"Invalid digest: supplied $digest, computed: $verify")
  blockStoreLogger.logEvent(PutBlockFailedEvent(digest, userUid, "digest 
invalid"))
  sender ! Left(PutStatus.DigestInvalid)
}
  }
}
When I get an inbound request, the digesting works correctly, and the 
encryption sort of works.  However, onUpstreamFinish is never called (in the 
CipherStage’s InHandler), and consequently the last AES block (with padding) is 
not emitted correctly.
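
For context: with AES/CBC/PKCS5Padding the final, padded block only comes out 
of doFinal(), which is why the stage has to emit one more chunk when the 
upstream completes.  A quick standalone illustration (not from the service 
code; the all-zero key and IV are just for the demo):

import javax.crypto.Cipher
import javax.crypto.spec.{ IvParameterSpec, SecretKeySpec }

val key    = new SecretKeySpec(new Array[Byte](16), "AES")
val iv     = new IvParameterSpec(new Array[Byte](16))
val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")
cipher.init(Cipher.ENCRYPT_MODE, key, iv)

val firstBlock = cipher.update("0123456789abcdef".getBytes)  // 16 bytes out
val lastBlock  = cipher.doFinal()                            // 16 more bytes: the padding block

If onUpstreamFinish never runs, that last padding block is simply lost.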

I modelled the above CipherStage on a similar DigestCalculator stage I found in 
the akka documentation.  In the DigestCalculator graph stage, the 
onUpstreamFinish handler is correctly called.  Why is it not called for the 
CipherStage?

— Eric



[akka-user] Re: onUpstreamFinish not getting called

2016-10-07 Thread Eric Swenson
I wrote a simple test for my CipherStage and it appears to work fine:

"CipherStage" should "work" in {
  val clearText = "0123456789abcdef"
  val clearSource = Source.single(ByteString(clearText))

  val encodedKey: String = "KCl02Tjzsid09VnDl6CDpDlnm4G4VUJr8l6PNg+MHkQ="
  val decodedKey = Base64.getDecoder.decode(encodedKey)
  val key = new SecretKeySpec(decodedKey, 0, decodedKey.length, "AES")

  val encodedIv: String = "AA=="
  val decodedIv = Base64.getDecoder.decode(encodedIv)
  val iv = new IvParameterSpec(decodedIv)

  val src = clearSource.via(new CipherStage(key, iv, Cipher.ENCRYPT_MODE))

  implicit val system = ActorSystem("test")
  implicit val materializer = ActorMaterializer()
  src.runForeach(i => println(i))
  println(src)
}
In other words, onUpstreamFinish is called and two AES blocks are emitted (by 
the println).  So it must have something to do with the graph I’m using.  It 
looks like:

in ~> bcast ~> dgst ~> dgstOut
  bcast ~> encryptor ~> blkOut
Where in is a Source[ByteString,Any] (the payload), dgst is the 
DigestCalculator stage, encryptor is the CipherStage, and dgstOut and blkOut 
are the two Sink.head[ByteString] outputs of my graph.  bcast is a normal 
two-output Broadcast element.

Why is it that in the flow in ~> bcast ~> dgst, the onUpstreamFinish of dgst is 
invoked correctly, but in the flow in ~> bcast ~> encryptor, it isn't?  

— Eric

 
> On Oct 7, 2016, at 11:37, Eric Swenson  wrote:
> 
> I have a web service which accepts an inbound payload and runs it through an 
> akka-streams pipeline that simulatenously computes the MD5 digest of the 
> payload and encrypts that payload.  I’ve implemented a GraphStage that 
> performs the crypto that looks like this:
> 
> class CipherStage(key: SecretKey, iv: IvParameterSpec, mode: Int) extends 
> GraphStage[FlowShape[ByteString, ByteString]] {
>   val in = Inlet[ByteString]("Encryptor.in")
>   val out = Outlet[ByteString]("Encryptor.out")
>   override val shape = FlowShape.of(in, out)
> 
>   override def createLogic(inheritedAttributes: Attributes): GraphStageLogic 
> = new GraphStageLogic(shape) {
> val cipher: Cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")
> cipher.init(mode, key, iv)
> 
> setHandler(out, new OutHandler {
>   override def onPull(): Unit = {
> pull(in)
>   }
> })
> 
> setHandler(in, new InHandler {
>   override def onPush(): Unit = {
> val chunk = grab(in)
> emit(out, ByteString(cipher.update(chunk.toArray)))
> pull(in)
>   }
> 
>   override def onUpstreamFinish(): Unit = {
> val chunk = cipher.doFinal()
> emit(out, ByteString(chunk))
> completeStage()
>   }
> 
>   override def onUpstreamFailure(ex: Throwable): Unit = {
> log.info(s"onUpstreamFailure: $ex")
> failStage(ex)
>   }
> })
>   }
> }
> I’m using it (and a similar digest-computing stage like this:
> 
> def verifyAndStoreBlock(digest: String, byteStringSource: Source[ByteString, 
> Any], userUid: String, sender: ActorRef) = {
>   val digestOut = Sink.head[ByteString]
>   val blockOut = Sink.head[ByteString]
> 
>   val graph: Graph[ClosedShape, (Future[ByteString], Future[ByteString])] = 
> GraphDSL.create(digestOut, blockOut)((_,_)) { implicit builder => (dgstOut, 
> blkOut) =>
> import GraphDSL.Implicits._
> val in: Source[ByteString,Any] = byteStringSource
> val bcast = builder.add(Broadcast[ByteString](2))
> val dgst = new DigestCalculator("MD5")
> val encryptor = new CipherStage(blockEncryptionKey, 
> blockInitializationVector, Cipher.ENCRYPT_MODE)
> 
> in ~> bcast ~> dgst ~> dgstOut
>   bcast ~> encryptor ~> blkOut
> 
> ClosedShape
>   }
> 
>   val rg = RunnableGraph.fromGraph[(Future[ByteString], 
> Future[ByteString])](graph)
> 
>   implicit val materializer = ActorMaterializer()
>   val (dgstOutFuture, blkOutFuture) = rg.run()
>   implicit val ec = context.dispatcher
>   for {
> dgst <- dgstOutFuture
> blkOut <- blkOutFuture
>   } {
> val verify = byteStringToHexString(dgst)
> if (verify == digest) {
>   val dataStoreActor = context.actorOf(DataStoreActor.props(dataStore))
>   dataStoreActor ! DataStoreActor.Messages.SaveBlock(digest, blkOut)
>   blockStoreLogger.logEvent(PutBlockEvent(digest, userUid))
>   sender ! Right(())
> } else {
>   log.warning(s"Invalid digest: supplied $digest, computed: $verify")
>   blockStoreLogger.logEvent(PutBlockFailed

[akka-user] Re: onUpstreamFinish not getting called

2016-10-07 Thread Eric Swenson
ing called, it just results 
in an exception on the “push” call.  I absolutely need to be able to “push” 
during onUpstreamFinish, since with AES/CBC encryption, there are more stream 
elements after the last input that must be emitted.

Anyone see what is wrong?  — Eric

> On Oct 7, 2016, at 12:02, Eric Swenson  wrote:
> 
> I wrote a simple test for my CipherStage and it appears to work fine:
> 
> "CipherStage" should "work" in {
>   val clearText = "0123456789abcdef"
>   val clearSource = Source.single(ByteString(clearText))
> 
>   val encodedKey: String = "KCl02Tjzsid09VnDl6CDpDlnm4G4VUJr8l6PNg+MHkQ="
>   val decodedKey = Base64.getDecoder.decode(encodedKey)
>   val key = new SecretKeySpec(decodedKey, 0, decodedKey.length, "AES")
> 
>   val encodedIv: String = "AA=="
>   val decodedIv = Base64.getDecoder.decode(encodedIv)
>   val iv = new IvParameterSpec(decodedIv)
> 
>   val src = clearSource.via(new CipherStage(key, iv, Cipher.ENCRYPT_MODE))
> 
>   implicit val system = ActorSystem("test")
>   implicit val materializer = ActorMaterializer()
>   src.runForeach(i => println(i))
>   println(src)
> }
> In other words, onUpstreamFinish is called and two AES blocks are emitted (by 
> the println).  So it must have something to do with the graph I’m using.  It 
> looks like:
> 
> in ~> bcast ~> dgst ~> dgstOut
>   bcast ~> encryptor ~> blkOut
> Where in is a Source[ByteString,Any] (the payload), dgst is the 
> DigestCalculator stage, encryptor is the CipherStage, and dgstOut and blkOut 
> are the two Sink.head[ByteString] outputs of my graph.  bcast is a normal 
> two-output Broadcast element.
> 
> Why is it that in the flow in ~> bcast ~> dgst, the onStreamFinish of dgst is 
> invoked correctly, but in the flow in ~> bcast ~> encryptor, it isn’t?  
> 
> — Eric
> 
>  
>> On Oct 7, 2016, at 11:37, Eric Swenson > <mailto:e...@swenson.org>> wrote:
>> 
>> I have a web service which accepts an inbound payload and runs it through an 
>> akka-streams pipeline that simulatenously computes the MD5 digest of the 
>> payload and encrypts that payload.  I’ve implemented a GraphStage that 
>> performs the crypto that looks like this:
>> 
>> class CipherStage(key: SecretKey, iv: IvParameterSpec, mode: Int) extends 
>> GraphStage[FlowShape[ByteString, ByteString]] {
>>   val in = Inlet[ByteString]("Encryptor.in")
>>   val out = Outlet[ByteString]("Encryptor.out")
>>   override val shape = FlowShape.of(in, out)
>> 
>>   override def createLogic(inheritedAttributes: Attributes): GraphStageLogic 
>> = new GraphStageLogic(shape) {
>> val cipher: Cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")
>> cipher.init(mode, key, iv)
>> 
>> setHandler(out, new OutHandler {
>>   override def onPull(): Unit = {
>> pull(in)
>>   }
>> })
>> 
>> setHandler(in, new InHandler {
>>   override def onPush(): Unit = {
>> val chunk = grab(in)
>> emit(out, ByteString(cipher.update(chunk.toArray)))
>> pull(in)
>>   }
>> 
>>   override def onUpstreamFinish(): Unit = {
>> val chunk = cipher.doFinal()
>> emit(out, ByteString(chunk))
>> completeStage()
>>   }
>> 
>>   override def onUpstreamFailure(ex: Throwable): Unit = {
>> log.info(s"onUpstreamFailure: $ex")
>> failStage(ex)
>>   }
>> })
>>   }
>> }
>> I’m using it (and a similar digest-computing stage like this:
>> 
>> def verifyAndStoreBlock(digest: String, byteStringSource: Source[ByteString, 
>> Any], userUid: String, sender: ActorRef) = {
>>   val digestOut = Sink.head[ByteString]
>>   val blockOut = Sink.head[ByteString]
>> 
>>   val graph: Graph[ClosedShape, (Future[ByteString], Future[ByteString])] = 
>> GraphDSL.create(digestOut, blockOut)((_,_)) { implicit builder => (dgstOut, 
>> blkOut) =>
>> import GraphDSL.Implicits._
>> val in: Source[ByteString,Any] = byteStringSource
>> val bcast = builder.add(Broadcast[ByteString](2))
>> val dgst = new DigestCalculator("MD5")
>> val encryptor = new CipherStage(blockEncryptionKey, 
>> blockInitializationVector, Cipher.ENCRYPT_MODE)
>> 
>> in ~> bcast ~> dgst ~> dgstOut
>>   bcast ~> encryptor ~> blkOut
>> 
>> ClosedShape
>>   }
>> 
>>   

[akka-user] Re: onUpstreamFinish not getting called

2016-10-07 Thread Eric Swenson
I found my (stupid) problem.  I had used:

val encryptedOut = Sink.head[ByteString]
instead of:

val encryptedOut: Sink[ByteString, Future[ByteString]] = Sink.fold[ByteString, 
ByteString](ByteString())(_ ++ _)
Consequently, only the first chunk of the stream was being kept.  I'm not 
entirely sure why this caused the behavior I was seeing, but fixing it caused 
onUpstreamFinish to be called.  It must have been because all of the output 
Sinks were Sink.head, so the sinks terminated the graph execution before the 
Source did.  — Eric
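
For anyone hitting the same thing, the difference is easy to see in isolation 
(a minimal illustration, not from the service code):

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import akka.util.ByteString

implicit val system = ActorSystem("sink-demo")
implicit val materializer = ActorMaterializer()

val chunks = Source(List(ByteString("a"), ByteString("b"), ByteString("c")))

// Sink.head cancels its upstream as soon as the first element arrives,
// so the rest of the graph can complete before the stage ever sees a "finish".
chunks.runWith(Sink.head[ByteString])

// Sink.fold keeps demanding until the upstream completes, which is what
// lets onUpstreamFinish run and emit the final (padded) block.
chunks.runWith(Sink.fold[ByteString, ByteString](ByteString.empty)(_ ++ _))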

> On Oct 7, 2016, at 12:46, Eric Swenson  wrote:
> 
> I wrote a simple test case using a graph rather than using src.runForEach.  
> 
> "CipherStage in graph" should "work" in {
> 
>   val clearText = "0123456789abcdef"
>   val clearSource = Source.single(ByteString(clearText))
>   val encryptedOut = Sink.head[ByteString]
> 
>   val encodedKey: String = "KCl02Tjzsid09VnDl6CDpDlnm4G4VUJr8l6PNg+MHkQ="
>   val decodedKey = Base64.getDecoder.decode(encodedKey)
>   val key = new SecretKeySpec(decodedKey, 0, decodedKey.length, "AES")
> 
>   val encodedIv: String = "AA=="
>   val decodedIv = Base64.getDecoder.decode(encodedIv)
>   val iv = new IvParameterSpec(decodedIv)
> 
>   val graph = GraphDSL.create(encryptedOut) { implicit builder => encOut =>
> import GraphDSL.Implicits._
> 
> val in: Source[ByteString, Any] = clearSource
> val encryptor = new CipherStage(key, iv, Cipher.ENCRYPT_MODE)
> 
> in ~> encryptor ~> encOut
> ClosedShape
>   }
> 
>   val rg = RunnableGraph.fromGraph[Future[ByteString]](graph)
>   implicit val system = ActorSystem("test")
>   implicit val materializer = ActorMaterializer()
> 
>   val blkOutFuture = rg.run()
> 
>   implicit val ec = system.dispatcher
> 
>   whenReady(blkOutFuture) { blkOut =>
> blkOut should be (encrypt(key, iv, clearText.getBytes))
>   }
> }
> This test fails.  However, when I run this test case, I get the following 
> stack trace:
> 
> [ERROR] [10/07/2016 12:40:03.541] [test-akka.actor.default-dispatcher-3] 
> [akka://test/user/StreamSupervisor-0/flow-0-0-unknown-operation] Error in 
> stage [com.genecloud.blockstore.CipherStage@3e77bd3f 
> ]:
>  requirement failed: Cannot push port (Encryptor.out) twice
> java.lang.IllegalArgumentException: requirement failed: Cannot push port 
> (Encryptor.out) twice
>   at scala.Predef$.require(Predef.scala:224)
>   at akka.stream.stage.GraphStageLogic.push(GraphStage.scala:459)
>   at 
> com.example.blockstore.CipherStage$$anon$1$$anon$3.onUpstreamFinish(BlockstoreProcessor.scala:61)
>   at 
> akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:732)
>   at 
> akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:616)
>   at 
> akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:471)
>   at 
> akka.stream.impl.fusing.GraphInterpreterShell.init(ActorGraphInterpreter.scala:381)
>   at 
> akka.stream.impl.fusing.ActorGraphInterpreter.tryInit(ActorGraphInterpreter.scala:538)
>   at 
> akka.stream.impl.fusing.ActorGraphInterpreter.preStart(ActorGraphInterpreter.scala:586)
>   at akka.actor.Actor$class.aroundPreStart(Actor.scala:489)
>   at 
> akka.stream.impl.fusing.ActorGraphInterpreter.aroundPreStart(ActorGraphInterpreter.scala:529)
>   at akka.actor.ActorCell.create(ActorCell.scala:590)
>   at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
>   at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
>   at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:223)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 
> I don’t see that I am pushing twice.  Here is the current definition of 
> CipherStage:
> 
> class CipherStage(key: SecretKey, iv: IvParameterSpec, mode: Int) extends 
> GraphStage[FlowShape[ByteString, ByteString]] {
>   val in = Inlet[ByteString]("Encryptor.in")
>   val out = Outlet[ByteString]("Encryptor.out")
>   override val shape = FlowShape.of(in, out)
> 
>   val log = Logger(LoggerFactory.getLogger("CipherStage"))
> 
>   override def createLogic(inheritedAttributes: Attributes): Grap

Re: [akka-user] Re: ANNOUNCE: Akka HTTP 3.0.0-RC1

2016-10-18 Thread Eric Swenson
I thought as much, but the documentation you just posted here:  
http://doc.akka.io/docs/akka-http/current/scala/http/introduction.html 
<http://doc.akka.io/docs/akka-http/current/scala/http/introduction.html> still 
says:

Akka HTTP is provided in a separate jar file, to use it make sure to include 
the following dependency:

"com.typesafe.akka" %% "akka-http-experimental" % "3.0.0-RC1" 
— Eric
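
For comparison, the Maven link in Konrad's reply quoted below points at the
artifact without the -experimental suffix; a sketch of the matching sbt
dependency line, taken from that link rather than from the docs page above:

libraryDependencies += "com.typesafe.akka" %% "akka-http" % "3.0.0-RC1"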

> On Oct 18, 2016, at 10:42, Konrad Malawski  
> wrote:
> 
> Yes, that's what all the fuss is about ;-)
> In fact, in RC it's already removed: 
> http://search.maven.org/#artifactdetails%7Ccom.typesafe.akka%7Cakka-http%7C3.0.0-RC1%7Cjar
>  
> <http://search.maven.org/#artifactdetails%7Ccom.typesafe.akka%7Cakka-http%7C3.0.0-RC1%7Cjar>
> 
> For people not tracking this in detail: please note that all other modules 
> other than "the DSL" have been stable for a long time already.
> 
> -- 
> Konrad `ktoso` Malawski
> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
> On 18 October 2016 at 19:41:18, Eric Swenson (e...@swenson.org 
> <mailto:e...@swenson.org>) wrote:
> 
>> Congratulations!  Is the plan to remove the "experimental" from 
>> akka-http-experimental when this moves from RC to final?  -- Eric
>> 
>> On Monday, October 17, 2016 at 3:22:17 PM UTC-7, Konrad 'ktoso' Malawski 
>> wrote:
>> 
>> Dear hakkers,
>> 
>> We are proud to announce the first Release Candidate of Akka
>> HTTP's "fully stable" release–the only missing bit was the Routing
>> DSLs, which we now deem stable enough to support for an extended
>> period of time.
>> 
>> 
>> 
>> This release marks the first of the 3.0.0 series of this project
>> and signifies a large step in terms of confidence in the library,
>> as well as the move of Akka HTTP into its own repository. From now
>> on Akka HTTP will be versioned separately from Akka “core”. This
>> has been discussed at length with the community on 
>> akka-meta <http://github.com/akka/akka-meta>,
>> and the 
>> akka-http <https://github.com/akka/akka-http/> 
>> repositories on github. Thank you very much for your
>> input!
>> 
>> 
>> For more background on why this move was made, please read “Akka
>> HTTP - stable, growing and tons of
>> opportunity <https://github.com/akka/akka-meta/issues/27>”
>> on akka-meta. In the meantime we have delivered a
>> Proof-of-Concept of HTTP/2 for Akka HTTP and plan to continue this
>> work later this year–community help is very much welcome on this
>> front as well.
>> 
>> 
>> The documentation from now on will be available here:
>> 
>> 
>> Some noteworthy changes in the 
>> 3.0.0-RC1 
>> (since its move out from 2.4.11) release are:
>> 
>> 
>> New lightbend/paradox powered documentation
>> 
>> This will allow us to aggregate it together with Akka and other
>> documentation, as well as link more easily to ScalaDoc
>> pages
>> 
>> Akka HTTP documentation will from now on live here: 
>> http://doc.akka.io/docs/akka-http/current/index.html 
>> <http://doc.akka.io/docs/akka-http/current/index.html>
>> 
>> We’ll work on a better theme for it very soon.
>> 
>> Multipart is now correctly a Binary MediaType (instead of
>> WithOpenCharset) #398 <https://github.com/akka/akka-http/pull/398>
>> 
>> A new designated mailing-list and page for any critical security
>> issues that might come up has been created: 
>> http://doc.akka.io/docs/akka-http/current/security.html 
>> <http://doc.akka.io/docs/akka-http/current/security.html>
>> 
>> Please follow the linked mailing list if you have production Akka
>> systems, so you’ll be the first to know in case a security issue is
>> found and fixed in Akka.
>> 
>> 
>> The plan regarding releasing a stable 3.0.0 is to wait a little bit
>> for community feedback on the release candidates, and call a stable
>> one no later than a few weeks from now. We’re eagerly awaiting
>> your feedback and can’t wait to ship the stable version of all of
>> Akka HTTP’s modules!
>> 
>> 
>> Credits
>> 
>> A total of 15 issues were closed since 2.4.11; most of the work was
>> moving source code, documentation and issues to their new
>> places.
>> 
>> 
>> 
>> The complete list of closed issues can be found on the 
>> 3.0.0-RC1 <https://github.com/akka/akka-http/milestone/1?closed=1> 
>> milestone 

[akka-user] Re: ANNOUNCE: Akka HTTP 3.0.0-RC1

2016-10-18 Thread Eric Swenson
Congratulations!  Is the plan to remove the "experimental" from 
akka-http-experimental when this moves from RC to final?  -- Eric

On Monday, October 17, 2016 at 3:22:17 PM UTC-7, Konrad 'ktoso' Malawski 
wrote:
>
> Dear hakkers,
>
> We are proud to announce the first Release Candidate of Akka HTTP's 
> "fully stable" release–the only missing bit was the Routing DSLs, which we 
> now deem stable enough to support for an extended period of time.
>
>
> This release marks the first of the 3.0.0 series of this project and 
> signifies a large step in terms of confidence in the library, as well as 
> the move of Akka HTTP into its own repository. From now on Akka HTTP will 
> be versioned separately from Akka “core”. This has been discussed at length 
> with the community on akka-meta , and 
> the akka-http  repositories on 
> github. Thank you very much for your input!
>
> For more background on why this move was made, please read “Akka HTTP - stable, 
> growing and tons of opportunity 
> ” on akka-meta. In the meantime we have delivered a Proof-of-Concept of HTTP/2 
> for Akka HTTP and plan to continue this work later this year–community help 
> is very much welcome on this front as well.
>
> The documentation from now on will be available here: 
>
> Some noteworthy changes in the *3.0.0-RC1* (since its move out from 
> 2.4.11) release are:
>
>
>  - New lightbend/paradox powered documentation
>    - This will allow us to aggregate it together with Akka and other 
>      documentation, as well as link more easily to ScalaDoc pages
>    - Akka HTTP documentation will from now on live here: 
>      http://doc.akka.io/docs/akka-http/current/index.html
>    - We’ll work on a better theme for it very soon.
>  - Multipart is now correctly Binary MediaType (instead of 
>    WithOpenCharset) #398
>  - A new designated mailing-list and page for any critical security 
>    issues that might come up has been created: 
>    http://doc.akka.io/docs/akka-http/current/security.html
>    - Please follow the linked mailing list if you have production Akka 
>      systems, so you’ll be the first to know in case a security issue is 
>      found and fixed in Akka.
>
> The plan regarding releasing a stable 3.0.0 is to wait a little bit for 
> community feedback on the release candidates, and call a stable one no 
> later than a few weeks from now. We’re eagerly awaiting your feedback and 
> can’t wait to ship the stable version of all of Akka HTTP’s modules!
>
> Credits
>
> A total of 15 issues were closed since 2.4.11; most of the work was moving 
> source code, documentation and issues to their new places.
>
> The complete list of closed issues can be found on the 3.0.0-RC1 
>  milestone on 
> github.
>
> For this release we had the help of 14 committers – thank you!
>
> A special thanks to Jonas Fonseca  who did a 
> tremendously awesome job at migrating all the docs from sphinx 
> (restructuredtext) to paradox (markdown), contributing features that the 
> Akka docs needed to upstream Paradox–thanks a lot!
>
> Credits:
>
> commits   added  removed
>      10   22489    24696  Jonas Fonseca
>      10    1927      256  Johannes Rudolph
>      10     849      412  Konrad Malawski
>       4     448      136  Robert Budźko
>       2      37       37  Bernard Leach
>       2     107        7  Richard Imaoka
>       2      26       24  Jakub Kozłowski
>       1     145      101  Jan @gosubpl
>       1     108      114  Derek Wyatt
>       1      45       33  Wojciech Langiewicz
>       1      49        0  @2beaucoup
>       1       6        6  Markus Hauck
>       1       1        1  Ian Forsey
>       1       1        1  Johan Andrén
>
>
> Happy hakking!
>
> – The Akka Team
>



[akka-user] Memory Leak : akka.actor.RepointableActorRef

2017-02-03 Thread Eric Swenson
I have an akka-http/akka-streams application that I’ve recently upgraded to 
10.0.3.  After handling many requests, it runs out of memory.  Using the 
Eclipse MAT, I see this message:

One instance of "akka.actor.RepointableActorRef" loaded by 
"sun.misc.Launcher$AppClassLoader @ 0x8b58" occupies 1,815,795,896 (98.53%) 
bytes. The memory is accumulated in one instance of 
"scala.collection.immutable.RedBlackTree$BlackTree" loaded by 
"sun.misc.Launcher$AppClassLoader @ 0x8b58".

Keywords
akka.actor.RepointableActorRef
scala.collection.immutable.RedBlackTree$BlackTree
sun.misc.Launcher$AppClassLoader @ 0x8b58

Does this ring any bells?  How might I track down what is causing this?  

I *believe* that I’ve stressed this service before (in a similar way) and NOT 
seen this failure. I think I was running 10.0.1 and upgraded to 10.0.2 and then 
10.0.3 before running the test again.

I don’t really know much about MAT (first-time user), but I believe the 
“Shortest Paths To The Accumulation Point” report is telling me that 
akka.stream.impl.ActorMaterializerImpl is what is creating the 
RepointableActorRef.  I am using akka-streams and I am passing blocks of data 
of 1MB size through the streams.  But as far as I know, I shouldn’t be 
accumulating them.  Also, I have successfully run a test of this magnitude 
before without running out of memory.  

Any suggestions?

— Eric
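
Without the heap dump it is impossible to say more from the message alone, but
one common way akka-streams applications accumulate stream actors is by
creating short-lived materializers (for example, one per request) and never
shutting them down. Whether that applies here is pure speculation; the sketch
below only illustrates the two safe materializer lifecycles, with all names
invented for the example.

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer

object MaterializerLifecycle {
  // Preferred: one long-lived ActorSystem and one materializer shared by all
  // streams and requests for the lifetime of the service.
  implicit val system: ActorSystem = ActorSystem("service")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // If a scoped materializer is ever needed, shut it down when done;
  // otherwise its supervisor actor and per-stream state stay reachable.
  def withScopedMaterializer[T](block: ActorMaterializer => T): T = {
    val mat = ActorMaterializer()
    try block(mat) finally mat.shutdown()
  }
}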





[akka-user] Re: Memory Leak : akka.actor.RepointableActorRef

2017-02-06 Thread Eric Swenson
Hi Johannes,

Yes, it appears reproducible. I'm trying to narrow down whether this 
started happening when I upgraded to 10.0.2 or 10.0.3 of akka-http. When I 
learn this, I'll let you know.  

I can share the memory dump with you. Could you write me via private email 
with the best way to get it to you?  -- Eric

On Friday, February 3, 2017 at 3:02:19 PM UTC-8, Johannes Rudolph wrote:
>
> Hi Eric,
>
> we'd like to look into that. It looks as if a streams materializer is 
> holding on to some memory, but we need more info to see what exactly it 
> is keeping references to.
>
> Is this a reproducible scenario? Could you share the memory dump (in 
> private) with us? Otherwise, could you send the list of top consumers (by 
> numbers and / or bytes) as seen in MAT?
>
> Thanks,
> Johannes
>
> On Friday, February 3, 2017 at 2:56:04 PM UTC-7, Eric Swenson wrote:
>>
>> I have an akka-http/akka-streams application that I’ve recently upgraded 
>> to 10.0.3.  After handling many requests, it runs out of memory.  Using the 
>> Eclipse MAT, I see this message:
>>
>> One instance of *"akka.actor.RepointableActorRef"* loaded by 
>> *"sun.misc.Launcher$AppClassLoader 
>> @ 0x8b58"* occupies *1,815,795,896 (98.53%)* bytes. The memory is 
>> accumulated in one instance of 
>> *"scala.collection.immutable.RedBlackTree$BlackTree"* loaded by 
>> *"sun.misc.Launcher$AppClassLoader 
>> @ 0x8b58"*.
>>
>> *Keywords*
>> akka.actor.RepointableActorRef
>> scala.collection.immutable.RedBlackTree$BlackTree
>> sun.misc.Launcher$AppClassLoader @ 0x8b58
>>
>> Does this ring any bells?  How might I track down what is causing this?  
>>
>> I *believe* that I’ve stressed this service before (in a similar way) and 
>> NOT seen this failure. I think I was running 10.0.1 and upgraded to 10.0.2 
>> and then 10.0.3 before running the test again.
>>
>> I don’t really know much about MAT (first-time user), but I believe the 
>> “Shortest Paths To The Accumulation Point” report is telling me that 
>> akka.stream.impl.ActorMaterializerImpl is what is creating the 
>> RepointableActorRef.  I am using akka-streams and I am passing blocks of 
>> data of 1MB size through the streams.  But as far as I know, I shouldn’t be 
>> accumulating them.  Also, I have successfully run a test of this magnitude 
>> before without running out of memory.  
>>
>> Any suggestions?
>>
>> — Eric
>>
>>
>>
>>



Re: [akka-user] akka-http Http.outgoingConnectionHttps and self-signed certs

2017-05-24 Thread Eric Swenson
Hi Joan,

To be honest, I haven’t looked at this stuff in ages.  We set up our own PKI CA 
and switched to using non-self-signed certs.  However, before we did this, I 
had gotten things working with self-signed certs.  Unfortunately, I don’t have 
that code in active usage anywhere now.  I did find the following commented-out 
code in one version of a service.  Not sure if it still works or even compiles with 
the latest Akka, but here it is in its ugly entirety:

/*

  // temporary code because we're using self-signed certs
  val classLoader: ClassLoader = Thread.currentThread().getContextClassLoader
  val jksInputStream = classLoader.getResourceAsStream("dev.jks")
  val jksTempFilePathname = Files.createTempFile("jksTemp", "jks").toString
  val jksOutputStream = new FileOutputStream(jksTempFilePathname)
  try {
    IOUtils.copy(jksInputStream, jksOutputStream)
  } finally {
    jksInputStream.close()
    jksOutputStream.close()
  }

  val trustStoreConfig = TrustStoreConfig(None, Some(jksTempFilePathname))
  val trustManagerConfig = 
TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))

  val looseConfig = SSLLooseConfig().withDisableHostnameVerification(true)

  val sslConfig = AkkaSSLConfig().mapSettings { s =>
s.withLoose(looseConfig).withTrustManagerConfig(trustManagerConfig)
  }

*/
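
For anyone landing on this thread: the same effect can usually be achieved
without ssl-config at all by building an HttpsConnectionContext from a plain
SSLContext whose trust store contains the self-signed certificate, and passing
it to outgoingConnectionHttps. A minimal sketch, assuming akka-http 10.0.x;
the path, password, host and port are placeholders, not values from this
thread.

import java.io.FileInputStream
import java.security.{KeyStore, SecureRandom}
import javax.net.ssl.{SSLContext, TrustManagerFactory}

import akka.actor.ActorSystem
import akka.http.scaladsl.{ConnectionContext, Http, HttpsConnectionContext}

object SelfSignedClientContext {

  // jksPath and password are placeholders for illustration only.
  def fromTrustStore(jksPath: String, password: String): HttpsConnectionContext = {
    val trustStore = KeyStore.getInstance("JKS")
    val jksStream = new FileInputStream(jksPath)
    try trustStore.load(jksStream, password.toCharArray) finally jksStream.close()

    val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
    tmf.init(trustStore)

    val sslContext = SSLContext.getInstance("TLS")
    sslContext.init(null, tmf.getTrustManagers, new SecureRandom)

    ConnectionContext.https(sslContext)
  }

  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("client")
    val ctx = fromTrustStore("/path/to/dev.jks", "changeit")
    // Per-connection context instead of the default client HTTPS context:
    val connectionFlow = Http().outgoingConnectionHttps("localhost", 8443, connectionContext = ctx)
  }
}

This keeps certificate validation on but anchors it to the local trust store,
which is closer to "curl --cacert" than to "curl -k".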

> On May 24, 2017, at 04:46, Joan G  wrote:
> 
> What happened in the end?
> I'm having the same issue and find it insane that disabling all the security 
> (just to identify where the issue is) still doesn't work.
> 
> On Wednesday, 18 May 2016 22:13:03 UTC+1, Eric Swenson wrote:
> Apart from my prior point — that it is not practical for my test environment 
> to configure all the trust anchors (self signed cert signer), I decided to 
> try it anyhow for a single self-signed cert. I still am having issues: here 
> is the code:
> 
> val trustStoreConfig = TrustStoreConfig(None, 
> Some("/Users/eswenson/self-signed-cert.jks"))
>   val trustManagerConfig = 
> TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))
> 
>   val looseConfig = SSLLooseConfig().withAcceptAnyCertificate(true).
> withDisableHostnameVerification(true).
> withAllowLegacyHelloMessages(Some(true)).
> withAllowUnsafeRenegotiation(Some(true)).
> withAllowWeakCiphers(true).
> withAllowWeakProtocols(true).
> withDisableSNI(true)
> 
>   val sslConfig = AkkaSSLConfig().mapSettings(s =>
>  s.withLoose(looseConfig).withTrustManagerConfig(trustManagerConfig)
>   )
> 
>   val connectionContext = Http().createClientHttpsContext(sslConfig)
> 
>   lazy val connectionFlow: Flow[HttpRequest, HttpResponse, Any] =
> Http().outgoingConnectionHttps(host, port, connectionContext)
> 
>   def httpSRequest(request: HttpRequest): Future[HttpResponse] =
> Source.single(request).via(connectionFlow).runWith(Sink.head)
> 
> As you can see, I’m using a trust store with the self-signed cert in it.  
> Even with the trust store and enabling all the loose config options (I tried 
> it without looseConfig to no avail), I’m still getting errors:
> 
> background log: info: [INFO] [05/17/2016 18:49:17.574] 
> [ClusterSystem-akka.actor.default-dispatcher-25] 
> [ExperimentInstance(akka://ClusterSystem <>)] fetchExperiment: 
> exception=akka.stream.ConnectionException: Hostname verification failed! 
> Expected session to be for 
> xxx-GfsElb-1RLMB4EAK0HUM-785838730.us-west-2.elb.amazonaws.com 
> <http://xxx-gfselb-1rlmb4eak0hum-785838730.us-west-2.elb.amazonaws.com/>
> background log: error: akka.stream.ConnectionException: Hostname verification 
> failed! Expected session to be for 
> xxx-GfsElElb-1RLMB4EAK0HUM-785838730.us-west-2.elb.amazonaws.com 
> <http://xxx-gfselelb-1rlmb4eak0hum-785838730.us-west-2.elb.amazonaws.com/>
> 
> Why is it doing any host name verification?  The loose config specifies:
> 
>  withDisableHostnameVerification(true)
> 
> I’m finding it hard to believe it is this hard to do HTTPS with self-signed 
> certs.  Any suggestions?
> 
> — Eric
> 
>> On May 17, 2016, at 5:11 PM, Eric Swenson swenson.org 
>> <http://swenson.org/>> wrote:
>> 
>> I don't want or need to configure a specific trust anchor. I want to be able 
>> to do the equivalent of "curl -k" on a set of local servers, with different 
>> signing certs. I would have thought the loose "acceptAnyCertificate" would 
>> have been precisely for this.  What does that setting do?
>> 
>> If the only way to allow self-signed certs is through setting up a trust 
>> store, I can do that.
>> 
>> 
>> -- Eric
>> 
>> 
>> On May 17, 2016, at 16:30, Konrad Malawski lightbend.com 
>> <http://lightben