Re: [akka-user] yourkit CPU ignore list

2017-10-10 Thread Sam Halliday
Should I be worried about this kind of backtrace? None of it is rooted
in my application's code but it's the biggest CPU hog (once standard
YourKit filters are applied for scala / jdk / netty / etc).

I'm seeing 3,400 of these after a small perftest run. I'm not sure if
it's (expected) noise or if something is a def that should be a val.

"Reverse Call Tree","Time (ms)","Count","Level"
"akka.stream.impl.PhasedFusingActorMaterializer.materialize(Graph,
Attributes, Phase, Map) PhasedFusingActorMaterializer.scala (The
method calls itself recursively)","550834","3410","1"
"akka.stream.impl.PhasedFusingActorMaterializer.materialize(Graph,
Attributes) PhasedFusingActorMaterializer.scala:424","","","2"
"akka.stream.impl.PhasedFusingActorMaterializer.materialize(Graph)
PhasedFusingActorMaterializer.scala:415","","","3"
"akka.stream.scaladsl.RunnableGraph.run(Materializer) Flow.scala:496","","","4"
"play.api.libs.streams.Accumulator$.$anonfun$futureToSink$2(Materializer,
Publisher, Accumulator) Accumulator.scala:245","540375","1651","5"
"play.api.libs.streams.Accumulator$$$Lambda$1150.apply(Object)","","","6"
"scala.concurrent.impl.CallbackRunnable.run() Promise.scala:60","","","7"
"play.api.libs.streams.Execution$trampoline$.executeScheduled()
Execution.scala:109","","","8"
"play.api.libs.streams.Execution$trampoline$.execute(Runnable)
Execution.scala:71","","","9"
"scala.concurrent.impl.CallbackRunnable.run() Promise.scala:60","","","10"
"akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor$AbstractBatch)
BatchingExecutor.scala:55","","","11"
"akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor$BlockableBatch,
boolean) BatchingExecutor.scala:91","","","12"
"akka.dispatch.BatchingExecutor$BlockableBatch$$Lambda$1071.apply$mcV$sp()","","","13"
"scala.concurrent.BlockContext$.withBlockContext(BlockContext,
Function0) BlockContext.scala:81","","","14"
"akka.dispatch.BatchingExecutor$BlockableBatch.run()
BatchingExecutor.scala:91","","","15"
"akka.dispatch.TaskInvocation.run() AbstractDispatcher.scala:40","","","16"
"akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec()
ForkJoinExecutorConfigurator.scala:43","","","17"
"akka.dispatch.forkjoin.ForkJoinTask.doExec() ForkJoinTask.java:260","","","18"
"akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinTask)
ForkJoinPool.java:1339","","","19"
"akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool$WorkQueue)
ForkJoinPool.java:1979","","","20"
"akka.dispatch.forkjoin.ForkJoinWorkerThread.run()
ForkJoinWorkerThread.java:107","","","21"
"play.api.libs.streams.SinkAccumulator.run(Source, Materializer)
Accumulator.scala:135","9044","1665","5"
"akka.http.scaladsl.HttpExt.$anonfun$bindAndHandle$1(ServerSettings,
LoggingAdapter, Materializer, Flow, Tcp$IncomingConnection)
Http.scala:180","1414","94","5"
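
To illustrate what I mean above by "a def that should be a val", a minimal
sketch (made-up names, nothing to do with the actual Play internals):

  import akka.actor.ActorSystem
  import akka.stream.ActorMaterializer
  import akka.stream.scaladsl.{ RunnableGraph, Sink, Source }

  object MaterializeCost extends App {
    implicit val system = ActorSystem()
    implicit val mat = ActorMaterializer()

    // rebuilt on every call: blueprint construction *and* materialization each time
    def blueprintDef: RunnableGraph[_] = Source.single(1).to(Sink.ignore)

    // blueprint built once; only the run() / materialization cost remains per call
    val blueprintVal: RunnableGraph[_] = Source.single(1).to(Sink.ignore)

    def handleRequest(): Unit = blueprintVal.run() // still materializes on every invocation
  }

Either way run() materializes the graph, so if Play's Accumulator runs a sink
per request then one materialize per request is probably expected rather than a
sign of a missing val.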



On 10 October 2017 at 11:33, Konrad “ktoso” Malawski
 wrote:
> Well, it’s stream materialization, it takes time, you may want to know ;)
>
> —
> Konrad `kto.so` Malawski
> Akka @ Lightbend
>
> On 10 October 2017 at 19:32:21, Sam Halliday (sam.halli...@gmail.com) wrote:
>
> What about PhasedFusingActorMaterializer.materialize? Or does that
> indicate a problem with the way play is setup?
>
>
>
> On 10 October 2017 at 10:49, Konrad “ktoso” Malawski
>  wrote:
>> ForkJoin noise mostly – people sometimes panic about ForkJoinPool#scan
>> specifically (which shows up when there’s nothing to do, so it spins
>> looking
>> for work).
>>
>>
>> —
>> Konrad `kto.so` Malawski
>> Akka @ Lightbend
>>
>> On 10 October 2017 at 18:47:37, Sam Halliday (sam.halli...@gmail.com)
>> wrote:
>>
>> Hi all,
>>
>> Yourkit's CPU profiler has the ability to hide classes / symbols.
>>
>> There is typically a lot of CPU noise when profiling an akka application
>> (Thread.sleep, ForkJoinPool, etc etc),

Re: [akka-user] yourkit CPU ignore list

2017-10-10 Thread Sam Halliday
What about PhasedFusingActorMaterializer.materialize? Or does that
indicate a problem with the way play is setup?



On 10 October 2017 at 10:49, Konrad “ktoso” Malawski
 wrote:
> ForkJoin noise mostly – people sometimes panic about ForkJoinPool#scan
> specifically (which shows up when there’s nothing to do, so it spins looking
> for work).
>
>
> —
> Konrad `kto.so` Malawski
> Akka @ Lightbend
>
> On 10 October 2017 at 18:47:37, Sam Halliday (sam.halli...@gmail.com) wrote:
>
> Hi all,
>
> Yourkit's CPU profiler has the ability to hide classes / symbols.
>
> There is typically a lot of CPU noise when profiling an akka application
> (Thread.sleep, ForkJoinPool, etc etc), does anybody have a good "ignore
> list" of methods that shouldn't matter?
>
> Best regards,
> Sam



[akka-user] yourkit CPU ignore list

2017-10-10 Thread Sam Halliday
Hi all,

Yourkit's CPU profiler has the ability to hide classes / symbols.

There is typically a lot of CPU noise when profiling an akka application 
(Thread.sleep, ForkJoinPool, etc etc). Does anybody have a good "ignore 
list" of methods that shouldn't matter?

Best regards,
Sam



Re: [akka-user] TestKit ignore dilation for expectNoMsg?

2015-07-27 Thread Sam Halliday
I'm testing the timing effects: each iteration of the loop does something, 
asserts a message (waiting with dilation), then asserts that no message follows.

It is not critically important that we wait for the dilation in the "no 
message" assertion because what we're really asserting on here is that the 
message queue hasn't been filled up by a rogue sender.
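
As a sketch of the alternative Roland suggests below (hypothetical actor and
messages; the real test is more involved):

  import scala.concurrent.duration._

  // inside a TestKit / ImplicitSender spec
  (1 to iterations) foreach { i =>
    worker ! DoWork(i)     // hypothetical message
    expectMsg(Done(i))     // expect the next message instead of expectNoMsg per iteration
  }
  expectNoMsg(100.millis)  // one final check that a rogue sender hasn't filled the queue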

On Monday, 27 July 2015 14:39:41 UTC+1, rkuhn wrote:
>
> Hi Sam,
>
> usually it is more efficient to replace the expectNoMsg() inside loops 
> with just expecting the next message—unless you are specifically testing 
> the timing aspects. Extraneous messages will be detected since they should 
> fail the expectation or a single (final) expectNoMsg at the very end of the 
> test case.
>
> Regards,
>
> Roland
>
> On 24 Jul 2015, at 10:52, Sam Halliday wrote:
>
> Hi all,
>
> Is it possible to override / bypass the time dilation introduced by the 
> akka.test.timefactor for some tests? I do some expectNoMsg tests in loops 
> which is fine without dilation, but when running on a really slow CI 
> machine it introduces a lot of waiting with very little value... I really 
> want to just increase the timeout for expectMsg-type assertions to catch 
> the long-tails.
>
> Best regards,
> Sam
>
>
>
>
>
> *Dr. Roland Kuhn*
> *Akka Tech Lead*
> Typesafe <http://typesafe.com/> – Reactive apps on the JVM.
> twitter: @rolandkuhn
> <http://twitter.com/#!/rolandkuhn>
>  
>



[akka-user] TestKit ignore dilation for expectNoMsg?

2015-07-24 Thread Sam Halliday
Hi all,

Is it possible to override / bypass the time dilation introduced by the 
akka.test.timefactor for some tests? I do some expectNoMsg tests in loops 
which is fine without dilation, but when running on a really slow CI 
machine it introduces a lot of waiting with very little value... I really 
want to just increase the timeout for expectMsg-type assertions to catch 
the long-tails.

Best regards,
Sam



[akka-user] Re: ANNOUNCE: Akka Streams & HTTP 1.0

2015-07-15 Thread Sam Halliday
Great! We already have a branch in ENSIME to expose our JSON protocol 
(JERKY) over akka-http REST and WebSockets:

https://github.com/fommil/ensime-server/blob/websocket/server/src/main/scala/org/ensime/server/WebSocketBoilerplate.scala


On Wednesday, 15 July 2015 13:40:25 UTC+1, Konrad Malawski wrote:
>
> Dear hakkers,
>
> we—the Akka committers—are very pleased to announce the final release of 
> Akka Streams & HTTP 1.0. After countless hours and many months of work we 
> now consider Streams & HTTP good enough for evaluation and production use, 
> subject to the caveat on performance below. We will continue to improve the 
> implementation as well as to add features over the coming months, which 
> will be marked as 1.x releases—in particular concerning HTTPS support 
> (exposing certificate information per request and allowing session 
> renegotiation) and websocket client features—before we finally add these 
> new modules to the 2.4 development branch. In the meantime both Streams and 
> HTTP can be used with Akka 2.4 artifacts since these are binary backwards 
> compatible with Akka 2.3.
> A Note on Performance
>
> Version 1.0 is fully functional but not yet optimized for performance. To 
> make it very clear: Spray currently is a lot faster at serving HTTP 
> responses than Akka HTTP is. We are aware of this and we know that a lot of 
> you are waiting to use it in anger for high-performance applications, but 
> we follow a “correctness first” approach. After 1.0 is released we will 
> start working on performance benchmarking and optimization, the focus of 
> the 1.1 release will be on closing the gap to Spray.
> What Changed since 1.0–RC4
>
> - Plenty documentation improvements on advanced stages, modularity and Http javadsl,
> - Improvements to Http stability under high load,
> - The streams cook-book translated to Java,
> - A number of new stream operators: recover and generalized UnzipWith contributed by Alexander Golubev,
> - The javadsl for Akka Http is now nicer to use from Java 8 and when returning Futures,
> - also Akka Streams and Http should now be properly packaged for OSGi, thanks to Rafał Krzewski.
>
> The complete list of closed tickets can be found in the 1.0 milestones of 
> streams and http on github.
> Release Statistics
>
> Since the RC4 release:
>
> - 32 tickets closed,
> - 252 files changed, 16861 insertions(+), 1834 deletions(-),
> - … and a total of 9 contributors!
>
> commits  added  removed
>
>      26   2342      335  Johannes Rudolph
>      11  10112       97  Endre Sándor Varga
>       9    757      173  Martynas Mickevičius
>       8   2821      487  Konrad Malawski
>       3     28       49  2beaucoup
>       3    701      636  Viktor Klang
>       2     43        7  Rafał Krzewski
>       2    801       42  Alexander Golubev
>       1      8        8  Heiko Seeberger
>
> -- 
>
> Cheers,
>
> Konrad 'ktoso’ Malawski
>
> Akka  @ Typesafe 
>



[akka-user] Re: How to start an akka-http server with a Route

2015-07-12 Thread Sam Halliday
There seems to be some magical implicit conversion going on here: if I add 

  def flow: Flow[HttpRequest, HttpResponse, Unit] = route

to my service trait, I can get the Flow. Unfortunately I have absolutely no 
idea where this implicit is coming from because the akka-http routing DSL 
basically kills the presentation compiler (even worse than spray-routing 
did, which was pretty bad to begin with) so type / implicit information is 
rare to come by.
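
For the record, I believe the conversion is the implicit
RouteResult.route2HandlerFlow (also reachable explicitly as Route.handlerFlow),
which wants an implicit ActorSystem and Materializer in scope. A minimal sketch
(names as of the 1.0 line; the RCs renamed a few of these):

  import akka.actor.ActorSystem
  import akka.http.scaladsl.Http
  import akka.http.scaladsl.server.{ Directives, Route }
  import akka.stream.ActorMaterializer

  object Main extends App with Directives {
    implicit val system = ActorSystem()
    implicit val mat = ActorMaterializer()

    val route: Route = path("ping") { complete("pong") }

    // spell out the conversion instead of relying on the implicit
    Http().bindAndHandle(Route.handlerFlow(route), "localhost", 8080)
  }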

Best regards,
Sam

On Sunday, 12 July 2015 14:33:25 UTC+1, Sam Halliday wrote:
>
> Hi all,
>
> I'm trying to migrate from Spray to Akka-HTTP and I'm failing to be able 
> to compile this most basic of examples:
>
>   
> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC4/scala/http/routing-dsl/index.html
>
> The API for bindAndHandle must have changed because I have a `Route` but 
> it wants a `Flow[HttpRequest,HttpResponse,Any]`
>
> Is there some command that I can run to convert `Route` into the flow that 
> is required? (Or perhaps add a convenience on `Http` for this purpose).
>
>
> I'm looking at the release branch
>
>   
> https://github.com/akka/akka/tree/releasing-akka-stream-and-http-experimental-1.0-RC4
>
> and I can't find any examples of how to actually start a server using the 
> Scala variant of the API.
>
>
>
> Best regards,
> Sam
>



[akka-user] How to start an akka-http server with a Route

2015-07-12 Thread Sam Halliday
Hi all,

I'm trying to migrate from Spray to Akka-HTTP and I'm failing to be able to 
compile this most basic of examples:

  
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC4/scala/http/routing-dsl/index.html

The API for bindAndHandle must have changed because I have a `Route` but it 
wants a `Flow[HttpRequest,HttpResponse,Any]`

Is there some command that I can run to convert `Route` into the flow that 
is required? (Or perhaps add a convenience on `Http` for this purpose).


I'm looking at the release branch

  
https://github.com/akka/akka/tree/releasing-akka-stream-and-http-experimental-1.0-RC4

and I can't find any examples of how to actually start a server using the 
Scala variant of the API.



Best regards,
Sam



[akka-user] scaladoc artefact in sbt updateClassifiers

2015-07-11 Thread Sam Halliday
Hi all,

I often find myself wanting to read the scaladocs for Akka using the 
built-in ENSIME documentation browser (e.g. "show me docs of the symbol at 
point"). But the doc artefact that comes with Akka, via the standard sbt 
`updateClassifiers` call, seems to be the Javadocs, not the Scaladocs. For 
developing in Scala, these are not as good so it gets quite awkward.

Are you publishing the scaladocs to maven central? If so, is there some 
special classifier we need to be asking for? Admittedly, we are asking for 
the javadoc classifier, but it is very standard to use this for scaladocs

https://github.com/ensime/ensime-sbt/blob/master/src/main/scala/EnsimePlugin.scala#L179

I should imagine that adding another classifier lookup to the sbt generator 
would slow things down for people significantly. It's already depressingly 
slow.
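
For reference, the sbt setting that updateClassifiers consults is roughly this
(a sketch of what we rely on):

  // fetch sources plus docs (published under the javadoc classifier) for dependencies
  transitiveClassifiers := Seq("sources", "javadoc")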

Best regards,
Sam



[akka-user] Re: unrecoverable Tcp connection with ByteString of moderate size (~250k)

2015-07-06 Thread Sam Halliday
This part of Register is also relevant

   * @param useResumeWriting If this is set to true then the connection actor
   *                         will refuse all further writes after issuing a
   *                         [[CommandFailed]] notification until [[ResumeWriting]]
   *                         is received. This can be used to implement NACK-based
   *                         write backpressure.

this implies that if useResumeWriting=false then further writes after a 
CommandFailed *will* be successful when the backlog is cleared. Is this 
correct?


On Monday, 6 July 2015 11:18:13 UTC+1, Sam Halliday wrote:
>
> Aha! Think I got it.
>
> I need to send ResumeWriting and listen for a WritingResumed, to recover 
> from such a situation.
>
> On Monday, 6 July 2015 11:01:23 UTC+1, Sam Halliday wrote:
>>
>> I should say, that in every example I have tried... with payloads up to 
>> 30MB for the initial un-acked message, it always arrives at the 
>> destination. The subsequent message is sent, and retried, but the connection is 
>> completely dead and the Ack-ed message can never be sent.
>>
>> On Monday, 6 July 2015 10:59:38 UTC+1, Sam Halliday wrote:
>>>
>>> Dear all,
>>>
>>> I had a few questions for this list last week regarding an unrecoverable 
>>> error condition that I was seeing in Wandoulabs WebSockets.
>>>
>>> I have now isolated the problem to standard Scala 2.11.7 / Akka 2.3.11 
>>> Tcp.Write with the minimal example at the bottom of this email.
>>>
>>> It seems that sending a ByteString of a moderate size can basically nuke 
>>> the network connection.
>>>
>>>
>>> I am very concerned that such unrecoverable errors are possible 
>>> (reconnecting would potentially allow sending the failed message, but let's 
>>> not consider that a solution).
>>>
>>> What is even more concerning is that I have seen related problems in my 
>>> integration tests, where I am using Acking with backpressure everywhere, 
>>> but I have been unable to get a reliable reproduction of the problem. Using 
>>> Acking seems to mitigate the problem somewhat, but obviously not enough.
>>>
>>> Can somebody please have a look at this and let me know if it is a bug 
>>> or if there is some part of the Tcp.Write spec that I failed to grok. Also, 
>>> confirming if the problem exists on some network other than mine would be a 
>>> good data point. My corporate environment uses PEAP
>>>
>>>
>>> Best regards,
>>> Sam
>>>
>>>
>>> package testing
>>>
>>> import akka.actor._
>>> import akka.event.LoggingReceive
>>> import akka.io.{ IO, Tcp }
>>> import akka.util.ByteString
>>> import java.net.InetSocketAddress
>>> import java.util.UUID
>>> import concurrent.duration._
>>>
>>> /**
>>>  * This is a test of Akka IO to see if the WebSocket behaviour
>>>  * described in Buggy is a TCP problem or limited to the WebSocket
>>>  * implementation.
>>>  *
>>>  * Run a blackhole on the target machine, e.g.
>>>  *
>>>  *   nc -k -l  >/dev/null
>>>  *
>>>  * For a single session, to confirm transmission of payloads:
>>>  *
>>>  *   nc -l  > blackhole
>>>  *
>>>  * run-main testing.BuggyTcp
>>>  *
>>>  */
>>> object BuggyTcp extends App {
>>>   implicit val system = ActorSystem()
>>>
>>>   val remote = new InetSocketAddress("remote-hostname-here", )
>>>
>>>   system.actorOf(Props(classOf[BuggyTcp], remote), "client")
>>> }
>>>
>>> class BuggyTcp(remote: InetSocketAddress) extends Actor with 
>>> ActorLogging {
>>>
>>>   import Tcp._
>>>   import context.system
>>>
>>>   override def preStart(): Unit = {
>>> IO(Tcp) ! Connect(remote)
>>>   }
>>>
>>>   var connection: ActorRef = _
>>>
>>>   object Ack extends Tcp.Event with spray.io.Droppable {
>>> override def toString = "Ack"
>>>   }
>>>
>>>   def receive = {
>>> case CommandFailed(write@Tcp.Write(bytes, ack)) =>
>>>   log.error(s"failed to write ${ack}")
>>>
>>>   // perpetually retry ... does it ever correct itself?
>>>   import context.dispatcher
>>>   context.system.scheduler.scheduleOnce(1 second, connection, write)
>>>
>>> case c: Connected =>
>>>   connection = sender()

[akka-user] Re: unrecoverable Tcp connection with ByteString of moderate size (~250k)

2015-07-06 Thread Sam Halliday
Aha! Think I got it.

I need to send ResumeWriting and listen for a WritingResumed, to recover 
from such a situation.
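
Roughly what I have in mind, as a minimal sketch (my own actor shape, untested;
assumes the default useResumeWriting=true registration):

  import akka.actor.{ Actor, ActorLogging, ActorRef }
  import akka.io.Tcp

  class WriteRecovery(connection: ActorRef) extends Actor with ActorLogging {
    import Tcp._

    def receive: Receive = {
      case CommandFailed(w: Write) =>
        // the connection refuses further writes until the handshake completes
        connection ! ResumeWriting
        context.become(buffering(Vector(w)))
    }

    def buffering(pending: Vector[Write]): Receive = {
      case CommandFailed(w: Write) =>
        context.become(buffering(pending :+ w))
      case WritingResumed =>
        pending.foreach(connection ! _) // safe to resend now
        context.become(receive)
    }
  }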

On Monday, 6 July 2015 11:01:23 UTC+1, Sam Halliday wrote:
>
> I should say, that in every example I have tried... with payloads up to 
> 30MB for the initial un-acked message, it always arrives at the 
> destination. The subsequent message is sent, and retried, but the connection is 
> completely dead and the Ack-ed message can never be sent.
>
> On Monday, 6 July 2015 10:59:38 UTC+1, Sam Halliday wrote:
>>
>> Dear all,
>>
>> I had a few questions for this list last week regarding an unrecoverable 
>> error condition that I was seeing in Wandoulabs WebSockets.
>>
>> I have now isolated the problem to standard Scala 2.11.7 / Akka 2.3.11 
>> Tcp.Write with the minimal example at the bottom of this email.
>>
>> It seems that sending a ByteString of a moderate size can basically nuke 
>> the network connection.
>>
>>
>> I am very concerned that such unrecoverable errors are possible 
>> (reconnecting would potentially allow sending the failed message, but let's 
>> not consider that a solution).
>>
>> What is even more concerning is that I have seen related problems in my 
>> integration tests, where I am using Acking with backpressure everywhere, 
>> but I have been unable to get a reliable reproduction of the problem. Using 
>> Acking seems to mitigate the problem somewhat, but obviously not enough.
>>
>> Can somebody please have a look at this and let me know if it is a bug or 
>> if there is some part of the Tcp.Write spec that I failed to grok. Also, 
>> confirming if the problem exists on some network other than mine would be a 
>> good data point. My corporate environment uses PEAP
>>
>>
>> Best regards,
>> Sam
>>
>>
>> package testing
>>
>> import akka.actor._
>> import akka.event.LoggingReceive
>> import akka.io.{ IO, Tcp }
>> import akka.util.ByteString
>> import java.net.InetSocketAddress
>> import java.util.UUID
>> import concurrent.duration._
>>
>> /**
>>  * This is a test of Akka IO to see if the WebSocket behaviour
>>  * described in Buggy is a TCP problem or limited to the WebSocket
>>  * implementation.
>>  *
>>  * Run a blackhole on the target machine, e.g.
>>  *
>>  *   nc -k -l  >/dev/null
>>  *
>>  * For a single session, to confirm transmission of payloads:
>>  *
>>  *   nc -l  > blackhole
>>  *
>>  * run-main testing.BuggyTcp
>>  *
>>  */
>> object BuggyTcp extends App {
>>   implicit val system = ActorSystem()
>>
>>   val remote = new InetSocketAddress("remote-hostname-here", )
>>
>>   system.actorOf(Props(classOf[BuggyTcp], remote), "client")
>> }
>>
>> class BuggyTcp(remote: InetSocketAddress) extends Actor with ActorLogging 
>> {
>>
>>   import Tcp._
>>   import context.system
>>
>>   override def preStart(): Unit = {
>> IO(Tcp) ! Connect(remote)
>>   }
>>
>>   var connection: ActorRef = _
>>
>>   object Ack extends Tcp.Event with spray.io.Droppable {
>> override def toString = "Ack"
>>   }
>>
>>   def receive = {
>> case CommandFailed(write@Tcp.Write(bytes, ack)) =>
>>   log.error(s"failed to write ${ack}")
>>
>>   // perpetually retry ... does it ever correct itself?
>>   import context.dispatcher
>>   context.system.scheduler.scheduleOnce(1 second, connection, write)
>>
>> case c: Connected =>
>>   connection = sender()
>>   connection ! Register(self)
>>   log.info("sending")
>>
>>   // works
>>   //connection ! Tcp.Write(ByteString("A" * 3), NoAck("Big 
>> thing"))
>>   //connection ! Tcp.Write(ByteString(UUID.randomUUID.toString), Ack)
>>
>>   // never recovers
>>   connection ! Tcp.Write(ByteString("A" * 30), NoAck("Big thing"))
>>   connection ! Tcp.Write(ByteString(UUID.randomUUID.toString), Ack)
>>
>> case Ack =>
>>   system.shutdown()
>> case _: ConnectionClosed =>
>>   system.shutdown()
>>
>> case msg =>
>>   // WORKAROUND https://github.com/akka/akka/issues/17898
>>   // (can't use LoggingReceive)
>>   log.info(s"got a ${msg.getClass.getName}")
>>
>>   }
>>
>> }
>>
>>



[akka-user] Re: unrecoverable Tcp connection with ByteString of moderate size (~250k)

2015-07-06 Thread Sam Halliday
I should say, that in every example I have tried... with payloads up to 
30MB for the initial un-acked message, it always arrives at the 
destination. The subsequent message is sent, and retried, but the connection is 
completely dead and the Ack-ed message can never be sent.

On Monday, 6 July 2015 10:59:38 UTC+1, Sam Halliday wrote:
>
> Dear all,
>
> I had a few questions for this list last week regarding an unrecoverable 
> error condition that I was seeing in Wandoulabs WebSockets.
>
> I have now isolated the problem to standard Scala 2.11.7 / Akka 2.3.11 
> Tcp.Write with the minimal example at the bottom of this email.
>
> It seems that sending a ByteString of a moderate size can basically nuke 
> the network connection.
>
>
> I am very concerned that such unrecoverable errors are possible 
> (reconnecting would potentially allow sending the failed message, but let's 
> not consider that a solution).
>
> What is even more concerning is that I have seen related problems in my 
> integration tests, where I am using Acking with backpressure everywhere, 
> but I have been unable to get a reliable reproduction of the problem. Using 
> Acking seems to mitigate the problem somewhat, but obviously not enough.
>
> Can somebody please have a look at this and let me know if it is a bug or 
> if there is some part of the Tcp.Write spec that I failed to grok. Also, 
> confirming if the problem exists on some network other than mine would be a 
> good data point. My corporate environment uses PEAP
>
>
> Best regards,
> Sam
>
>
> package testing
>
> import akka.actor._
> import akka.event.LoggingReceive
> import akka.io.{ IO, Tcp }
> import akka.util.ByteString
> import java.net.InetSocketAddress
> import java.util.UUID
> import concurrent.duration._
>
> /**
>  * This is a test of Akka IO to see if the WebSocket behaviour
>  * described in Buggy is a TCP problem or limited to the WebSocket
>  * implementation.
>  *
>  * Run a blackhole on the target machine, e.g.
>  *
>  *   nc -k -l  >/dev/null
>  *
>  * For a single session, to confirm transmission of payloads:
>  *
>  *   nc -l  > blackhole
>  *
>  * run-main testing.BuggyTcp
>  *
>  */
> object BuggyTcp extends App {
>   implicit val system = ActorSystem()
>
>   val remote = new InetSocketAddress("remote-hostname-here", )
>
>   system.actorOf(Props(classOf[BuggyTcp], remote), "client")
> }
>
> class BuggyTcp(remote: InetSocketAddress) extends Actor with ActorLogging {
>
>   import Tcp._
>   import context.system
>
>   override def preStart(): Unit = {
> IO(Tcp) ! Connect(remote)
>   }
>
>   var connection: ActorRef = _
>
>   object Ack extends Tcp.Event with spray.io.Droppable {
> override def toString = "Ack"
>   }
>
>   def receive = {
> case CommandFailed(write@Tcp.Write(bytes, ack)) =>
>   log.error(s"failed to write ${ack}")
>
>   // perpetually retry ... does it ever correct itself?
>   import context.dispatcher
>   context.system.scheduler.scheduleOnce(1 second, connection, write)
>
> case c: Connected =>
>   connection = sender()
>   connection ! Register(self)
>   log.info("sending")
>
>   // works
>   //connection ! Tcp.Write(ByteString("A" * 3), NoAck("Big thing"))
>   //connection ! Tcp.Write(ByteString(UUID.randomUUID.toString), Ack)
>
>   // never recovers
>   connection ! Tcp.Write(ByteString("A" * 30), NoAck("Big thing"))
>   connection ! Tcp.Write(ByteString(UUID.randomUUID.toString), Ack)
>
> case Ack =>
>   system.shutdown()
> case _: ConnectionClosed =>
>   system.shutdown()
>
> case msg =>
>   // WORKAROUND https://github.com/akka/akka/issues/17898
>   // (can't use LoggingReceive)
>   log.info(s"got a ${msg.getClass.getName}")
>
>   }
>
> }
>
>



[akka-user] unrecoverable Tcp connection with ByteString of moderate size (~250k)

2015-07-06 Thread Sam Halliday
Dear all,

I had a few questions for this list last week regarding an unrecoverable 
error condition that I was seeing in Wandoulabs WebSockets.

I have now isolated the problem to standard Scala 2.11.7 / Akka 2.3.11 
Tcp.Write with the minimal example at the bottom of this email.

It seems that sending a ByteString of a moderate size can basically nuke 
the network connection.


I am very concerned that such unrecoverable errors are possible 
(reconnecting would potentially allow sending the failed message, but let's 
not consider that a solution).

What is even more concerning is that I have seen related problems in my 
integration tests, where I am using Acking with backpressure everywhere, 
but I have been unable to get a reliable reproduction of the problem. Using 
Acking seems to mitigate the problem somewhat, but obviously not enough.

Can somebody please have a look at this and let me know if it is a bug or 
if there is some part of the Tcp.Write spec that I failed to grok. Also, 
confirming if the problem exists on some network other than mine would be a 
good data point. My corporate environment uses PEAP


Best regards,
Sam


package testing

import akka.actor._
import akka.event.LoggingReceive
import akka.io.{ IO, Tcp }
import akka.util.ByteString
import java.net.InetSocketAddress
import java.util.UUID
import concurrent.duration._

/**
 * This is a test of Akka IO to see if the WebSocket behaviour
 * described in Buggy is a TCP problem or limited to the WebSocket
 * implementation.
 *
 * Run a blackhole on the target machine, e.g.
 *
 *   nc -k -l  >/dev/null
 *
 * For a single session, to confirm transmission of payloads:
 *
 *   nc -l  > blackhole
 *
 * run-main testing.BuggyTcp
 *
 */
object BuggyTcp extends App {
  implicit val system = ActorSystem()

  val remote = new InetSocketAddress("remote-hostname-here", )

  system.actorOf(Props(classOf[BuggyTcp], remote), "client")
}

class BuggyTcp(remote: InetSocketAddress) extends Actor with ActorLogging {

  import Tcp._
  import context.system

  override def preStart(): Unit = {
    IO(Tcp) ! Connect(remote)
  }

  var connection: ActorRef = _

  object Ack extends Tcp.Event with spray.io.Droppable {
    override def toString = "Ack"
  }

  def receive = {
    case CommandFailed(write @ Tcp.Write(bytes, ack)) =>
      log.error(s"failed to write ${ack}")

      // perpetually retry ... does it ever correct itself?
      import context.dispatcher
      context.system.scheduler.scheduleOnce(1 second, connection, write)

    case c: Connected =>
      connection = sender()
      connection ! Register(self)
      log.info("sending")

      // works
      //connection ! Tcp.Write(ByteString("A" * 3), NoAck("Big thing"))
      //connection ! Tcp.Write(ByteString(UUID.randomUUID.toString), Ack)

      // never recovers
      connection ! Tcp.Write(ByteString("A" * 30), NoAck("Big thing"))
      connection ! Tcp.Write(ByteString(UUID.randomUUID.toString), Ack)

    case Ack =>
      system.shutdown()
    case _: ConnectionClosed =>
      system.shutdown()

    case msg =>
      // WORKAROUND https://github.com/akka/akka/issues/17898
      // (can't use LoggingReceive)
      log.info(s"got a ${msg.getClass.getName}")
  }

}



[akka-user] Re: CommandFailed, bad state, can't recover

2015-07-03 Thread Sam Halliday
This is the fuller example: you need to edit the hostname and port:

  
https://github.com/fommil/simple-spray-websockets/blob/master/src/test/scala/testing/Buggy.scala#L78

On Friday, 3 July 2015 14:25:42 UTC+1, Sam Halliday wrote:
>
> I should point out that this example is just a way that I can reliably 
> recreate the problem on my desktop. I have seen the same symptoms in a 
> situation where I was expecting Acks on all messages, and not sending until 
> the last message had cleared. Wandoulabs may have been sending ping/pong 
> responses without asking for Acks, but they are nowhere near as large as 
> these messages.
>
> On Friday, 3 July 2015 14:18:11 UTC+1, Sam Halliday wrote:
>>
>> Hi all,
>>
>> I'm using akka-io (and spray-io) with wandoulabs WebSockets to send 
>> messages across a network.
>>
>> If I send a message of this size, without expecting an Ack:
>>
>>connection ! Tcp.Write(FrameRender(TextFrame("A" * 36040033)))
>>
>> and then send a message (any size), expecting an Ack:
>>
>> connection ! Tcp.Write(FrameRender(TextFrame("hello world")), Ack)
>>
>> then I get called back with a CommandFailed (containing the 
>> Tcp.Write(ByteString(...))).
>>
>> That's fine, I can understand that the writes sometimes fail and I need 
>> to retry... but the problem is that no matter how many times I resend the 
>> "hello world" frame expecting an Ack, it will always fail. The connection 
>> has gotten itself into an extremely bad place.
>>
>> If I reduce the size of the non-acked frame by even one byte, then 
>> everything seems stable. I'm guessing this is the boundary of some buffer 
>> somewhere.
>>
>> Does anyone have any suggestions about what could be going on and how I 
>> can recover when I get the CommandFailed?
>>
>> It is *extremely* difficult to debug this thanks to 
>> https://github.com/akka/akka/issues/17898 ... every time I even think 
>> about turning on logging for some of the core actors they are spitting out 
>> to toString representation of ByteStrings, which basically blows up all my 
>> tooling.
>>
>>
>> Best regards,
>> Sam
>>
>



[akka-user] Re: CommandFailed, bad state, can't recover

2015-07-03 Thread Sam Halliday
I should point out that this example is just a way that I can reliably 
recreate the problem on my desktop. I have seen the same symptoms in a 
situation where I was expecting Acks on all messages, and not sending until 
the last message had cleared. Wandoulabs may have been sending ping/pong 
responses without asking for Acks, but they are nowhere near as large as 
these messages.
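
For context, the "Acks on all messages, not sending until the last has cleared"
pattern in my code looks roughly like this (simplified sketch; Ack is my own
event):

  import akka.actor.{ Actor, ActorRef }
  import akka.io.Tcp
  import akka.util.ByteString

  class AckedWriter(connection: ActorRef) extends Actor {
    import Tcp._
    case object Ack extends Event

    def receive: Receive = idle

    def idle: Receive = {
      case data: ByteString =>
        connection ! Write(data, Ack)
        context.become(waiting(Vector.empty))
    }

    def waiting(queued: Vector[ByteString]): Receive = {
      case data: ByteString =>
        context.become(waiting(queued :+ data)) // buffer until the ack arrives
      case Ack =>
        queued match {
          case head +: tail =>
            connection ! Write(head, Ack)
            context.become(waiting(tail))
          case _ =>
            context.become(idle)
        }
    }
  }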

On Friday, 3 July 2015 14:18:11 UTC+1, Sam Halliday wrote:
>
> Hi all,
>
> I'm using akka-io (and spray-io) with wandoulabs WebSockets to send 
> messages across a network.
>
> If I send a message of this size, without expecting an Ack:
>
>connection ! Tcp.Write(FrameRender(TextFrame("A" * 36040033)))
>
> and then send a message (any size), expecting an Ack:
>
> connection ! Tcp.Write(FrameRender(TextFrame("hello world")), Ack)
>
> then I get called back with a CommandFailed (containing the 
> Tcp.Write(ByteString(...))).
>
> That's fine, I can understand that the writes sometimes fail and I need to 
> retry... but the problem is that no matter how many times I resend the 
> "hello world" frame expecting an Ack, it will always fail. The connection 
> has gotten itself into an extremely bad place.
>
> If I reduce the size of the non-acked frame by even one byte, then 
> everything seems stable. I'm guessing this is the boundary of some buffer 
> somewhere.
>
> Does anyone have any suggestions about what could be going on and how I 
> can recover when I get the CommandFailed?
>
> It is *extremely* difficult to debug this thanks to 
> https://github.com/akka/akka/issues/17898 ... every time I even think 
> about turning on logging for some of the core actors they are spitting out 
> to toString representation of ByteStrings, which basically blows up all my 
> tooling.
>
>
> Best regards,
> Sam
>



[akka-user] CommandFailed, bad state, can't recover

2015-07-03 Thread Sam Halliday
Hi all,

I'm using akka-io (and spray-io) with wandoulabs WebSockets to send 
messages across a network.

If I send a message of this size, without expecting an Ack:

   connection ! Tcp.Write(FrameRender(TextFrame("A" * 36040033)))

and then send a message (any size), expecting an Ack:

connection ! Tcp.Write(FrameRender(TextFrame("hello world")), Ack)

then I get called back with a CommandFailed (containing the 
Tcp.Write(ByteString(...))).

That's fine, I can understand that the writes sometimes fail and I need to 
retry... but the problem is that no matter how many times I resend the 
"hello world" frame expecting an Ack, it will always fail. The connection 
has gotten itself into an extremely bad place.

If I reduce the size of the non-acked frame by even one byte, then 
everything seems stable. I'm guessing this is the boundary of some buffer 
somewhere.

Does anyone have any suggestions about what could be going on and how I can 
recover when I get the CommandFailed?

It is *extremely* difficult to debug this thanks 
to https://github.com/akka/akka/issues/17898 ... every time I even think 
about turning on logging for some of the core actors they are spitting out 
to toString representation of ByteStrings, which basically blows up all my 
tooling.


Best regards,
Sam



Re: [akka-user] logback turboFilter and akka LoggingReceive incompatible?

2015-06-22 Thread Sam Halliday
No problem, raised as https://github.com/akka/akka/issues/17801

On Monday, 22 June 2015 13:23:02 UTC+1, Patrik Nordwall wrote:
>
> ok
> Sam, please open an issue.
> /Patrik
>
> On Mon, Jun 22, 2015 at 1:57 PM, Roland Kuhn  > wrote:
>
>> Ah, it seems that I misunderstood: this filter is not part of the 
>> pre-filtering that is done before even sending to the logger actor. Then 
>> disregard my comment about saving some cycles, within the logger actor it 
>> is obviously too late for that anyway.
>>
>> Patrik, your comment brought back a dim recollection of using "{}" to 
>> safe-guard against "{}" occurring within the message itself, but I just 
>> checked the sources of SLF4J and that case does not (no longer?) lead to an 
>> error condition, so we can simply remove that first argument AFAICS.
>>
>> Regards,
>>
>> Roland
>>
>> On 22 Jun 2015, at 13:03, Patrik Nordwall wrote:
>>
>> One problem could be if the string contains placeholders {}.
>>
>> I would guess that the reason why we used debug("{}", message) was to 
>> avoid the toString of the message if the level is not enabled. Checking the 
>> level as you suggest might be enough.
>>
>> When using LoggingAdapter we materialize it to a string at call site.
>>
>> /Patrik
>>
>> On Mon, Jun 22, 2015 at 12:40 PM, Sam Halliday > > wrote:
>>
>>> On Monday, 22 June 2015 11:30:30 UTC+1, √ wrote:
>>>>
>>>> Why deduplicate non-logged events at all?
>>>>
>>>> Doing the substitution but not incurring the IO cost must be worth it?
>>>>
>>>>
>>> Turbo Filters are a well defined concept in Logback land: 
>>> http://logback.qos.ch/manual/filters.html#TurboFilter
>>>
>>> The only logic is to add a string to an LRUCache, so I don't believe 
>>> there is a great overhead here in the typical case.
>>>
>>> With my workaround, logger.debug isn't even called if the debugging is 
>>> not enabled for that class/source.
>>>
>>> Unfortunately, when logged through the LoggingAdapter, we lose the 
>>> ability to dedupe based on log template and it is instead performed on the 
>>> full String. I can live with that as a caveat.
>>>
>>>  
>>>
>>>> -- 
>>>> Cheers,
>>>> √
>>>> On 22 Jun 2015 11:09, "Roland Kuhn"  wrote:
>>>>
>>>>> It depends on what you want to achieve: not doing the substitution for 
>>>>> non-logged events is an explicit goal in our infrastructure.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Roland
>>>>>
>>>>> On 22 Jun 2015, at 11:02, Viktor Klang wrote:
>>>>>
>>>>> Doing the filtering pre-substitution seems like a bug.
>>>>>
>>>>> -- 
>>>>> Cheers,
>>>>> √
>>>>> On 22 Jun 2015 01:54, "Sam Halliday"  wrote:
>>>>>
>>>>>> Patrik,
>>>>>>
>>>>>> Thanks for investigating! You saved me a few hours off my Monday as I
>>>>>> was going to go through this in detail and put together a minimal test
>>>>>> case :-)
>>>>>>
>>>>>> Unfortunately, your conclusion seems to be pretty damning. It might 
>>>>>> just
>>>>>> be this one turbofilter that is incompatible with akka, but its likely
>>>>>> there are more.
>>>>>>
>>>>>> Can you think of any workarounds, other than doing duplicate filtering
>>>>>> in the application tier? (eek!) I could investigate writing my own
>>>>>> turbofilter that handles the {} case... performance is not a major
>>>>>> concern here as the consequences of not filtering duplicates are far
>>>>>> more significant than rendering log messages twice.
>>>>>>
>>>>>> Best regards,
>>>>>> Sam
>>>>>>
>>>>>>

Re: [akka-user] logback turboFilter and akka LoggingReceive incompatible?

2015-06-22 Thread Sam Halliday
On Monday, 22 June 2015 11:30:30 UTC+1, √ wrote:
>
> Why deduplicate non-logged events at all?
>
> Doing the substitution but not incurring the IO cost must be worth it?
>
>
Turbo Filters are a well defined concept in Logback land: 
http://logback.qos.ch/manual/filters.html#TurboFilter

The only logic is to add a string to an LRUCache, so I don't believe there 
is a great overhead here in the typical case.

With my workaround, logger.debug isn't even called if the debugging is not 
enabled for that class/source.

Unfortunately, when logged through the LoggingAdapter, we lose the ability 
to dedupe based on log template and it is instead performed on the full 
String. I can live with that as a caveat.

 

> -- 
> Cheers,
> √
> On 22 Jun 2015 11:09, "Roland Kuhn" > 
> wrote:
>
>> It depends on what you want to achieve: not doing the substitution for 
>> non-logged events is an explicit goal in our infrastructure.
>>
>> Regards,
>>
>> Roland
>>
>> On 22 Jun 2015, at 11:02, Viktor Klang wrote:
>>
>> Doing the filtering pre-substitution seems like a bug.
>>
>> -- 
>> Cheers,
>> √
>> On 22 Jun 2015 01:54, "Sam Halliday" > 
>> wrote:
>>
>>> Patrik,
>>>
>>> Thanks for investigating! You saved me a few hours off my Monday as I
>>> was going to go through this in detail and put together a minimal test
>>> case :-)
>>>
>>> Unfortunately, your conclusion seems to be pretty damning. It might just
>>> be this one turbofilter that is incompatible with akka, but it's likely
>>> there are more.
>>>
>>> Can you think of any workarounds, other than doing duplicate filtering
>>> in the application tier? (eek!) I could investigate writing my own
>>> turbofilter that handles the {} case... performance is not a major
>>> concern here as the consequences of not filtering duplicates are far
>>> more significant than rendering log messages twice.
>>>
>>> Best regards,
>>> Sam
>>>
>>>
>>>
>>> Patrik Nordwall > writes:
>>>
>>> > On Sun, Jun 21, 2015 at 4:04 PM, Sam Halliday >> >
>>> > wrote:
>>> >
>>> >> Everything is DEBUG, and this is akka 2.3.11.
>>> >>
>>> >
>>> > ok
>>> >
>>> > We log these (and all other things) with
>>> > Logger(logClass, logSource).debug("{}", message.asInstanceOf[AnyRef])
>>> >
>>> > "{}" is the part that the duplicate filter will use to identify the log
>>> > message, i.e. all log messages are identical :(
>>> >
>>> > This is not something that we have changed.
>>> >
>>> > /Patrik
>>> >
>>> >
>>> >>
>>> >>
>>> >> On Sunday, 21 June 2015 15:03:48 UTC+1, Patrik Nordwall wrote:
>>> >>>
>>> >>> What akka.loglevel are you using?
>>> >>>
>>> >>> I guess that you see the changed behavior in 2.4-M1 compared to 
>>> 2.3.11.
>>> >>>
>>> >>> In 2.4 we have added a check in LoggingReceive that it will only be
>>> >>> active if the loglevel is DEBUG.
>>> >>>
>>> >>> if (context.system.eventStream.logLevel >= Logging.DebugLevel) {
>>> >>>
>>> >>> /Patrik
>>> >>>
>>> >>>
>>> >>> On Thu, Jun 18, 2015 at 11:01 PM, Sam Halliday 
>>> >>> wrote:
>>> >>>
>>> >>>> Hi all,
>>> >>>>

Re: [akka-user] logback turboFilter and akka LoggingReceive incompatible?

2015-06-22 Thread Sam Halliday
Does anyone see any major problems with this workaround? If not, I 
recommend it as a patch to the mainline.

package akka.event.slf4j

import akka.actor._
import akka.event.Logging._

/**
 * Stock Slf4jLogger actually logs everything as "{}" with a
 * parameter, which is incompatible with much of the logback
 * machinery. See
 * https://groups.google.com/d/msg/akka-user/YVri58taWsM/X6-XR0_i1nwJ
 * for a discussion.
 */
class FixedSlf4jLogger extends Slf4jLogger {
  override def receive = {
    case event @ Error(cause, logSource, logClass, message) =>
      withMdc(logSource, event) {
        val logger = Logger(logClass, logSource)
        if (logger.isErrorEnabled()) {
          cause match {
            case Error.NoCause | null =>
              logger.error(if (message != null) message.toString else null)
            case _ =>
              logger.error(if (message != null) message.toString else cause.getLocalizedMessage, cause)
          }
        }
      }

    case event @ Warning(logSource, logClass, message) =>
      withMdc(logSource, event) {
        val logger = Logger(logClass, logSource)
        if (logger.isWarnEnabled()) {
          logger.warn(message.toString)
        }
      }

    case event @ Info(logSource, logClass, message) =>
      withMdc(logSource, event) {
        val logger = Logger(logClass, logSource)
        if (logger.isInfoEnabled()) {
          logger.info(message.toString)
        }
      }

    case event @ Debug(logSource, logClass, message) =>
      withMdc(logSource, event) {
        val logger = Logger(logClass, logSource)
        if (logger.isDebugEnabled()) {
          logger.debug(message.toString)
        }
      }

    case InitializeLogger(_) =>
      sender() ! LoggerInitialized
  }
}
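
It is then wired in via configuration in place of the stock logger (assuming
the class is compiled into the application under the akka.event.slf4j package
as above):

  akka {
    loggers = ["akka.event.slf4j.FixedSlf4jLogger"]
  }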



On Monday, 22 June 2015 10:33:20 UTC+1, Sam Halliday wrote:
>
> On Monday, 22 June 2015 10:02:06 UTC+1, √ wrote:
>>
>> Doing the filtering pre-substitution seems like a bug.
>>
>
> If you're suggesting that there is a bug in the filter, then that is not 
> true: it's a well documented, incredibly useful, feature: 
> http://logback.qos.ch/manual/filters.html#DuplicateMessageFilter
>
>   "Note that in case of parameterized logging, only the raw message is 
> taken into consideration."
>
> The problem is in Akka's SLF4J because *everything* the user logs is being 
> converted into `{}` with parameters, even if the user actually logged a 
> fully generated String. This means it's impossible to use this (very useful 
> filter) in a meaningful way with Akka and de-duplication of logging 
> messages must be performed in the application tier (yuck!).
>
> The only practical workaround I can see to fix this today in my codebase 
> is to reimplement Slf4jLogger.scala... but suggestions welcome.
>
> Incidentally, I don't see why you don't just create the log message in 
> Slf4jLogger because at this point we're running in the log actor and not in 
> the thread / actor that initiated the log message. If you really want to 
> delay logging, can't you just ask SLF4J "is debugging enabled for this 
> source/class?"?
>
> Best regards,
> Sam
>



Re: [akka-user] logback turboFilter and akka LoggingReceive incompatible?

2015-06-22 Thread Sam Halliday
On Monday, 22 June 2015 10:02:06 UTC+1, √ wrote:
>
> Doing the filtering pre-substitution seems like a bug.
>

If you're suggesting that there is a bug in the filter, then that is not 
true: it's a well documented, incredibly useful, feature: 
http://logback.qos.ch/manual/filters.html#DuplicateMessageFilter

  "Note that in case of parameterized logging, only the raw message is 
taken into consideration."

The problem is in Akka's SLF4J because *everything* the user logs is being 
converted into `{}` with parameters, even if the user actually logged a 
fully generated String. This means it's impossible to use this (very useful 
filter) in a meaningful way with Akka and de-duplication of logging 
messages must be performed in the application tier (yuck!).

The only practical workaround I can see to fix this today in my codebase is 
to reimplement Slf4jLogger.scala... but suggestions welcome.

Incidentally, I don't see why you don't just create the log message in 
Slf4jLogger because at this point we're running in the log actor and not in 
the thread / actor that initiated the log message. If you really want to 
delay logging, can't you just ask SLF4J "is debugging enabled for this 
source/class?"?

Best regards,
Sam



Re: [akka-user] logback turboFilter and akka LoggingReceive incompatible?

2015-06-21 Thread Sam Halliday
Patrik,

Thanks for investigating! You saved me a few hours off my Monday as I
was going to go through this in detail and put together a minimal test
case :-)

Unfortunately, your conclusion seems to be pretty damning. It might just
be this one turbofilter that is incompatible with Akka, but it's likely
there are more.

Can you think of any workarounds, other than doing duplicate filtering
in the application tier? (eek!) I could investigate writing my own
turbofilter that handles the `{}` case... performance is not a major
concern here, as the consequences of not filtering duplicates are far
more significant than rendering log messages twice.
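
If I do go down that route, this is roughly the shape I have in mind: a 
rough, untested sketch of a turbofilter that de-duplicates on the *formatted* 
message (the class and its names are mine; only the logback/SLF4J types are 
real):

  import ch.qos.logback.classic.{Level, Logger}
  import ch.qos.logback.classic.turbo.TurboFilter
  import ch.qos.logback.core.spi.FilterReply
  import org.slf4j.Marker
  import org.slf4j.helpers.MessageFormatter

  class FormattedDuplicateFilter extends TurboFilter {
    private val cacheSize = 100
    private val allowedRepetitions = 1
    // access-ordered LRU cache of rendered message -> times seen
    private val seen = new java.util.LinkedHashMap[String, Integer](64, 0.75f, true) {
      override def removeEldestEntry(e: java.util.Map.Entry[String, Integer]): Boolean =
        size > cacheSize
    }

    override def decide(marker: Marker, logger: Logger, level: Level,
                        format: String, params: Array[AnyRef], t: Throwable): FilterReply =
      if (format == null) FilterReply.NEUTRAL
      else seen.synchronized {
        // render the "{}" placeholders first, so messages that arrive via Akka's
        // Slf4jLogger are compared by their actual text rather than by "{}"
        val rendered = MessageFormatter.arrayFormat(format, params).getMessage
        val count    = Option(seen.get(rendered)).fold(0)(_.intValue) + 1
        seen.put(rendered, count)
        if (count <= allowedRepetitions + 1) FilterReply.NEUTRAL else FilterReply.DENY
      }
  }

It would then go into logback.xml as a <turboFilter> element, the same way 
the stock DuplicateMessageFilter does.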

Best regards,
Sam


Patrik Nordwall  writes:

> On Sun, Jun 21, 2015 at 4:04 PM, Sam Halliday 
> wrote:
>
>> Everything is DEBUG, and this is akka 2.3.11.
>>
>
> ok
>
> We log these (and all other things) with
> Logger(logClass, logSource).debug("{}", message.asInstanceOf[AnyRef])
>
> "{}" is the part that the duplicate filter will use to identify the log
> message, i.e. all log messages are identical :(
>
> This is not something that we have changed.
>
> /Patrik
>
>
>>
>>
>> On Sunday, 21 June 2015 15:03:48 UTC+1, Patrik Nordwall wrote:
>>>
>>> What akka.loglevel are you using?
>>>
>>> I guess that you see the changed behavior in 2.4-M1 compared to 2.3.11.
>>>
>>> In 2.4 we have added a check in LoggingReceive that it will only be
>>> active if the loglevel is DEBUG.
>>>
>>> if (context.system.eventStream.logLevel >= Logging.DebugLevel) {
>>>
>>> /Patrik
>>>
>>>
>>> On Thu, Jun 18, 2015 at 11:01 PM, Sam Halliday 
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I'm seeing something very weird.
>>>>
>>>> When I enable the logback turboFilter DuplicateMessageFilter:
>>>>
>>>>   http://logback.qos.ch/manual/filters.html#DuplicateMessageFilter
>>>>
>>>>
>>>> https://github.com/qos-ch/logback/blob/master/logback-classic/src/main/java/ch/qos/logback/classic/turbo/DuplicateMessageFilter.java
>>>>
>>>> all my LoggingReceive messages go silent.
>>>>
>>>> Uncommenting the one line in my config that enables the filter is the
>>>> only change and introduces the regression. Latest scala, latest akka,
>>>> latest logback.
>>>>
>>>> I can workaround it at the moment, but this is extremely concerning
>>>> because I have used the duplicate filter in chatty production systems in
>>>> the past and to see an incompatibility between two stable libraries is
>>>> never good.
>>>>
>>>> Turning on logback debug (when it reads its own config) is looking good,
>>>> no problems reported.
>>>>
>>>> Best regards,
>>>> Sam
>>>>
>>>> --
>>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>>> >>>>>>>>>> Check the FAQ:
>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>> >>>>>>>>>> Search the archives:
>>>> https://groups.google.com/group/akka-user
>>>> ---
>>>> You received this message because you are subscribed to the Google
>>>> Groups "Akka User List" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to akka-user+...@googlegroups.com.
>>>> To post to this group, send email to akka...@googlegroups.com.
>>>> Visit this group at http://groups.google.com/group/akka-user.
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>>>
>>>
>>> --
>>>
>>

Re: [akka-user] logback turboFilter and akka LoggingReceive incompatible?

2015-06-21 Thread Sam Halliday
Everything is DEBUG, and this is akka 2.3.11.

On Sunday, 21 June 2015 15:03:48 UTC+1, Patrik Nordwall wrote:
>
> What akka.loglevel are you using?
>
> I guess that you see the changed behavior in 2.4-M1 compared to 2.3.11.
>
> In 2.4 we have added a check in LoggingReceive that it will only be active 
> if the loglevel is DEBUG.
>
> if (context.system.eventStream.logLevel >= Logging.DebugLevel) {
>
> /Patrik
>
>
> On Thu, Jun 18, 2015 at 11:01 PM, Sam Halliday  > wrote:
>
>> Hi all,
>>
>> I'm seeing something very weird.
>>
>> When I enable the logback turboFilter DuplicateMessageFilter:
>>
>>   http://logback.qos.ch/manual/filters.html#DuplicateMessageFilter
>>
>>   
>> https://github.com/qos-ch/logback/blob/master/logback-classic/src/main/java/ch/qos/logback/classic/turbo/DuplicateMessageFilter.java
>>
>> all my LoggingReceive messages go silent.
>>
>> Uncommenting the one line in my config that enables the filter is the 
>> only change and introduces the regression. Latest scala, latest akka, 
>> latest logback.
>>
>> I can workaround it at the moment, but this is extremely concerning 
>> because I have used the duplicate filter in chatty production systems in 
>> the past and to see an incompatibility between two stable libraries is 
>> never good. 
>>
>> Turning on logback debug (when it reads its own config) is looking good, 
>> no problems reported.
>>
>> Best regards,
>> Sam
>>
>> -- 
>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>> >>>>>>>>>> Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com .
>> To post to this group, send email to akka...@googlegroups.com 
>> .
>> Visit this group at http://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
>
> Patrik Nordwall
> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
>
>  



[akka-user] logback turboFilter and akka LoggingReceive incompatible?

2015-06-18 Thread Sam Halliday
Hi all,

I'm seeing something very weird.

When I enable the logback turboFilter DuplicateMessageFilter:

  http://logback.qos.ch/manual/filters.html#DuplicateMessageFilter

  
https://github.com/qos-ch/logback/blob/master/logback-classic/src/main/java/ch/qos/logback/classic/turbo/DuplicateMessageFilter.java

all my LoggingReceive messages go silent.

Uncommenting the one line in my config that enables the filter is the only 
change, and it introduces the regression. Latest Scala, latest Akka, latest 
logback.
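
For reference, the only thing being toggled is the turboFilter from the 
manual linked above. Programmatically it amounts to roughly this (a sketch, 
the parameter values are illustrative):

  import ch.qos.logback.classic.LoggerContext
  import ch.qos.logback.classic.turbo.DuplicateMessageFilter
  import org.slf4j.LoggerFactory

  object DedupeSetup {
    // equivalent of the single <turboFilter> element in logback.xml
    val ctx    = LoggerFactory.getILoggerFactory.asInstanceOf[LoggerContext]
    val dedupe = new DuplicateMessageFilter
    dedupe.setAllowedRepetitions(0) // suppress repeats after the first occurrence
    dedupe.setCacheSize(100)
    dedupe.setContext(ctx)
    dedupe.start()
    ctx.addTurboFilter(dedupe)
  }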

I can work around it at the moment, but this is extremely concerning: I have 
used the duplicate filter in chatty production systems in the past, and 
seeing an incompatibility between two stable libraries is never good. 

Turning on logback debug (when it reads its own config) is looking good, no 
problems reported.

Best regards,
Sam



[akka-user] Re: Caching akka futures - is it ok?

2015-05-28 Thread Sam Halliday
A Future is internally mutable (it will, hopefully, eventually complete), but 
you cannot mutate it yourself.

You'd be best caching the result of the Future, rather than the Future 
itself.
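
Roughly what I mean, as an untested sketch (the cache class and its names 
are mine):

  import scala.collection.concurrent.TrieMap
  import scala.concurrent.{ExecutionContext, Future}
  import scala.util.Success

  // Cache the completed value, not the in-flight Future: a failed or
  // timed-out ask is simply never remembered.
  class ResultCache[K, V](implicit ec: ExecutionContext) {
    private val completed = TrieMap.empty[K, V]

    def getOrFetch(key: K)(fetch: => Future[V]): Future[V] =
      completed.get(key) match {
        case Some(v) => Future.successful(v)
        case None    => fetch.andThen { case Success(v) => completed.putIfAbsent(key, v) }
      }
  }

That way the unexpected timeout you describe below can never end up in the 
cache.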

On Wednesday, 27 May 2015 21:48:52 UTC+1, Moiz Raja wrote:
>
> I have a piece of code where I make a request to an actor using ask and 
> cache the resulting future if that ask was successful. I am seeing problems 
> in this area where when I try to access that "completed" future later on I 
> find (a) that it does not contain a successful future and (b) the resulting 
> timeout exception has a completely unexpected timeout.
>
> This makes me wonder if the ask future that is being returned is something 
> we can cache. Is the future mutable per chance?
>
> Any advise on whether caching is ok or not would be helpful. 
>
> Thanks,
> -Moiz
>



Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io

2015-05-27 Thread Sam Halliday
On Wednesday, 27 May 2015 14:01:46 UTC+1, rkuhn wrote:
>
> Maybe I misunderstand you still, but ConnectionManager is not something 
> that we have in Akka HTTP
>

I am referring to the `manager` field in an implementation of 
`akka.io.IO.Extension`. The limitation that I'm referring to lies with 
spray-can, in that it cannot handle the WebSockets Upgrade request... 
presumably akka-http bypasses this because your HTTP manager is more 
advanced and you are not using spray-can in the first instance.

I should very much like to have access to the new HTTP manager so that I 
can bypass the Streams API for legacy integration with Actors. However, it 
would appear that you're saying there is no way to use akka-io for HTTP 
communication in this way, and that is a great shame. The spray-can/http 
integration with the actor frameworks is fantastic, and it will be sorely 
missed when it is deprecated.
 

> Just because something is a Flow does not make any promises ...
>
>
> This is not true: a Flow has one open input and one open output port and 
> all data elements that flow through these ports will do so governed by the 
> Reactive Streams back-pressure semantics. This means that the Flow has the 
> ability to slow down the Source that is connected to it and it also reacts 
> to slow-down requests from the Sink that it will be connected to.
>

This means nothing if the Source doesn't backpressure properly or the Sink 
just acks everything. I don't really care about what happens in one 
component; I care about the entire system. In your new websockets API, the 
user provides the implementation of the Flow... what I care about is that 
the framework behaves correctly in the Source and Sink it wraps around that Flow.

In particular, I'd like to have confirmation that packets are only read 
from the network when the Source is told to progress, and that the 
backpressure callback to the user-provided Flow is only invoked when 
akka-io has confirmed that it has written the last message to the 
network (i.e. the akka-io Ack). Details of these points are *absolutely 
essential* to the management of the heap and I do not want to simply assume 
that they are true.


With regards to the rest of your comments and suggestions, thanks for that. 
I shall study it further if I have time to undertake writing a wrapper 
layer. In the short term, it looks like the barrier to entry is far too 
high without a convenient Stream/Actor bridge in place, so I will be 
sticking with wandoulabs' and smootoo's convenience classes.


Best regards,
Sam



Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io

2015-05-27 Thread Sam Halliday
Hi Roland,

On Wednesday, 27 May 2015 08:18:09 UTC+1, rkuhn wrote:
> For this venture you need no HTTP documentation

Actually I'd like to go further than akka-http and see documentation that
allows me to use WebSockets with akka-io directly, bypassing the
Streaming API.

The historic limitation has always been that the HTTP
ConnectionManager of akka-io was unable to handle the upgrade
request. Wandoulabs use their own actor to handle HTTP and UHTTP,
but it has annoying side effects. I'm interested to know how
akka-http has managed to implement WebSockets with that
constraint in place (or if that constraint has been lifted).


> The statement “there are no promises around back-pressure”
> indicates that you did not, in fact, understand the full extent
> of what Akka Streams are.

Something being a Flow does not make any promises about how back
pressure is actually implemented in that flow: you have even
pointed out how to create "open hoses", or how to just blow up
internal buffers when downstream doesn't consume fast enough.

I'd like to know if the underlying Source of incoming client
messages on a websocket endpoint will respect the backpressure
from the `handleWebsocketMessages: Flow` that is dealing with
it (i.e. not read from the network unless
`handleWebsocketMessages` is pulling), and conversely if the
underlying Sink back to the client is going to backpressure using
akka-io's Ack mechanism so that `handleWebsocketMessages` will
only be pulled when akka-io gives the green light.


> We’d love to improve the documentation in this regard, but we’d
> need to first figure out where their deficiency is, hence I’m
> talking with you.

I think clarity on the above two points would be a good start. In
addition, and more generally, integration between "hoses"
and "hammers" is extremely important --- unless you intentionally
want to limit akka-streams uptake to green field projects only.
The project I work on has 1.2 million lines of Scala code with
legacy components dating from Scala 2.6. There isn't a snowball's
chance in hell of rewriting it to use akka-streams.


>> What I'm missing is the ability to hook an existing actor
>> system into something that expects a Flow, with back pressure
>> preserved.
>
> As I hopefully explained in my other mail about hoses: your
> Actors would need to implement the full spec, they’d need to be
> watertight.

This is a start. Is there a test framework that can be used to
stress test the implementation of a Stream / Actor bridge?


> How we implement Flows internally should be of no consequence

On the contrary, I feel the implementation is of huge
significance. First, it helps in understanding the expected
performance, and second, it is critical when writing
integration code. It is extremely bizarre that both projects
should be released under the Akka banner, yet be so siloed.


> In order to solve this particular problem you’ll need to
> carefully describe the back-pressure protocol spoken by your
> Actor.

The Actor is expecting to send messages directly to akka-io and
speaks the akka-io Ack protocol using a simple object as the Ack
message. It expects an Ack before it will send a message
upstream (and it greedily consumes data, but that could easily be
changed).

Best regards,
Sam



Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io

2015-05-26 Thread Sam Halliday
Hi Roland,

I've read the documentation several times; I've even given you feedback on
it during an earlier milestone phase. Also, the documentation for
WebSockets in Akka is TODO and TODO. The documentation on the routing
directives is extremely sparse. In particular, there are no promises
around the implementation of back pressure in the new websockets.

What I'm missing is the ability to hook an existing actor system into
something that expects a Flow, with back pressure preserved. I understand
Flow, but I don't understand the implementation in terms of Actors (which,
incidentally, is exactly my primary feedback on the earlier documentation).
You're now confusing me further by saying that Streams are not actors,
because I was told at the time that streams are implemented in terms of
actors.

In case you didn't pick up on it, I'm planning on moving away from
wandoulabs, not integrating with it. This is the key piece, distilled into a
standalone problem.

Best regards, Sam
On 27 May 2015 7:35 am, "Roland Kuhn"  wrote:

> Hi Sam,
>
> it might be better to take a step back before potentially running in the
> wrong direction. First off, Akka HTTP offers a complete solution for
> everything HTTP (including websockets) within an ActorSystem. Before
> deciding to combine this with another tool I recommend that you explore
> first how Akka HTTP works, because it introduces several fundamentally new
> concepts. In particular, when talking about it as “Spray 2.0” it is
> important to note that everything ActorRef-related in Spray has been
> replaced by Streams—a completely different abstraction that is *not* an
> Actor. The whole underpinnings are completely rewritten in a radically
> different fashion, so don’t expect any Spray modules that live “beneath the
> surface” to seamlessly fit onto Akka HTTP.
>
> We could go into the details Wandoulabs’ websocket add-on, but I don’t see
> much value in discussing that before the basics are clear. The other piece
> of information that I’m lacking is why you would want to “retrofit”
> something in this context, it might be better to explain the ends and not
> the means in order to get help.
>
> Regards,
>
> Roland
>
> 23 maj 2015 kl. 12:38 skrev Sam Halliday :
>
> Hi all,
>
> I'm very excited that akka-io now has WebSocket support.
>
> In ENSIME, we're planning on using this wrapper over wandoulab's websockets
>
>   https://github.com/smootoo/simple-spray-websockets
>
> to easily create a REST/WebSockets endpoint with JSON marshalling for a
> sealed family, with backpressure.
>
> Smootoo's wrapper works really well, and I have had the pleasure of using
> it in a corporate environment so I trust it to be stable.
>
>
> For future proofing, it would seem sensible to move to stock akka-io for
> WebSockets, so I'm considering sending a PR to retrofit the wrapper. I have
> a couple of questions about that:
>
> 1. does akka-io's HTTP singleton actor support WebSockets now? That was
> the big caveat about using wandoulabs. It means all kinds of workarounds if
> you want to just use HTTP in the same actor system.
>
> 2. is there a migration guide for wandoulabs to akka-io? Or would it be
> best just to rewrite the wrapper from scratch on top of akka-io?
>
> 3. where is the documentation? This just has a big TODO on it
>
>
> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/websocket-support.html
>
> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/routing-dsl/websocket-support.html
>
> I can't even find any examples. I guess the key thing is the handshaking,
> which would mean rewriting this bit (and the corresponding client side
> handshake)
>
>
> https://github.com/smootoo/simple-spray-websockets/blob/master/src/main/scala/org/suecarter/websocket/WebSocket.scala#L167
>
> Best regards,
> Sam
>
> --
> >>>>>>>>>> Read the docs: http://akka.io/docs/
> >>>>>>>>>> Check the FAQ:
> http://doc.akka.io/docs/akka/current/additional/faq.html
> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
> ---
> You received this message because you are subscribed to the Google Groups
> "Akka User List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to akka-user+unsubscr...@googlegroups.com.
> To post to this group, send email to akka-user@googlegroups.com.
> Visit this group at http://groups.google.com/group/akka-user.
> For more options, visit https://groups.google.com/d/optout.
>
>
>
>
> *Dr. Roland Kuhn*
> *Akka Tech Lead

[akka-user] state of backpressure in websockets?

2015-05-26 Thread Sam Halliday
Hi all,

I have another thread about retrofitting wandoulabs websockets to use 
akka-io, which is proving painful, but I wanted to separate out this aspect 
of the questioning.

Before I invest any more time into it, I'd like to know if the new 
websockets implementation actually implements backpressure on the server 
and client side, for both reading and writing from the socket (there are 
four channels requiring backpressure in a single client/server connection).


Even if the implementation has backpressure at the IO level, it looks like 
the only way to create a Flow from an Actor is via Sink.actorRef (plus some 
other magic with Sources and the DSL that I haven't figured out yet) ... 
and that explicitly says in the documentation

   "there is no back-pressure signal from the destination actor, i.e. if 
the actor is not consuming the messages fast enough the mailbox of the 
actor will grow"

which means that passing off to an actor backend to implement the 
websockets server is ultimately not going to have any backpressure when 
reading off the socket.

I don't know what the situation is for writing to the socket, but certainly 
this is something that my current backend library is able to handle.


So is this reactive, or what?


Best regards,
Sam



Re: [akka-user] Flow from an Actor

2015-05-26 Thread Sam Halliday
To try and formalise this a little more, and to abstract away from 
WebSockets (which will only muddy the water):

Let's say we already have an Actor that looks like this:

sealed trait Incoming
sealed trait Outgoing

class SimpleActor(upstream: ActorRef) extends Actor {
  def receive = {
    case in: Incoming =>
      // work, including some upstream ! outgoing
    case Ack =>
      // ack for the last message upstream
    case other =>
      // work, including some upstream ! outgoing
  }
}

How do I wrap that as a Flow[Incoming, Outgoing, Unit] ?
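
The closest I have found so far is gluing a Sink and a Source together 
around the actor. An untested sketch follows; Sink.actorRefWithAck only 
appears in later Akka Streams releases (the 1.0-RC pieces are named 
differently), the Init/Ack/Complete objects are mine, and the outgoing side 
shown here still has no real backpressure:

  import akka.actor.ActorRef
  import akka.stream.OverflowStrategy
  import akka.stream.scaladsl.{Flow, Keep, Sink, Source}

  case object Init
  case object Ack
  case object Complete

  def actorFlow(ref: ActorRef): Flow[Incoming, Outgoing, ActorRef] =
    Flow.fromSinkAndSourceMat(
      // inbound: the stream sends Init, then one Incoming at a time, and waits
      // for the actor to reply Ack before pushing the next element
      Sink.actorRefWithAck[Incoming](ref, Init, Ack, Complete),
      // outbound: the materialized ActorRef is what you would hand to
      // SimpleActor as `upstream`; this side only buffers, it does not
      // backpressure the actor
      Source.actorRef[Outgoing](bufferSize = 64, overflowStrategy = OverflowStrategy.dropNew)
    )(Keep.right)

So the inbound half can be made honest via the Ack protocol, but the 
outbound half is exactly the unbounded-buffer problem from the other thread.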


On Tuesday, 26 May 2015 14:09:45 UTC+1, Sam Halliday wrote:
>
> Re: asyncyMap. I don't think that is going to work, there is no implied 
> single response to each query (which I gather is what you're suggesting)? 
> And I need some way of receiving new messages from upstream.
>
> The existing Actor is both a sink (i.e. it consumes messages from 
> upstream, not necessarily responding to each one) and a source (i.e. it can 
> send an effectively infinite number of messages). It is using backpressure, 
> but only using its own `Ack` message.
>
> For some context, I'm retrofitting some code that is using this WebSockets 
> layer around wandoulabs, e.g. 
> https://github.com/smootoo/simple-spray-websockets/blob/master/src/test/scala/org/suecarter/websocket/WebSocketSpec.scala#L150
>
> But the newly released akka-io layer expects a Flow.
>
> The `Ack` is being received when messages were sent directly to the I/O 
> layer. Presumably, the backpressure is implemented differently now... 
> although I am not sure how yet. That's the second problem once I can 
> actually get everything hooked up.
>
>
> On Tuesday, 26 May 2015 13:57:43 UTC+1, √ wrote:
>>
>> Not knowing what your actor is trying to do, what about Flow.mapAsync + 
>> ask?
>>
>> -- 
>> Cheers,
>> √
>> On 26 May 2015 14:54, "Sam Halliday"  wrote:
>>
>>> Hi all,
>>>
>>> I need to interface an Actor with an API that requires a Flow.
>>>
>>> The actor can receive a sealed trait family of inputs and will only send 
>>> (a different) sealed family of outputs to upstream, so I suspect that will 
>>> help matters.
>>>
>>> Looking in FlowOps, it looks like I can create a Flow from a partial 
>>> function, but there isn't anything that would just simply take an ActorRef.
>>>
>>> Am I missing something trivial to just upgrade an ActoRef to a Flow? 
>>> (Obviously there is a bunch of extra messages the actor will have to 
>>> handle, such as backpressure messages etc... but assume that's all taken 
>>> care of)
>>>
>>> Best regards,
>>> Sam
>>>
>>> -- 
>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>> >>>>>>>>>> Check the FAQ: 
>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>> >>>>>>>>>> Search the archives: 
>>> https://groups.google.com/group/akka-user
>>> --- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Akka User List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to akka-user+...@googlegroups.com.
>>> To post to this group, send email to akka...@googlegroups.com.
>>> Visit this group at http://groups.google.com/group/akka-user.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>



Re: [akka-user] Flow from an Actor

2015-05-26 Thread Sam Halliday
Re: mapAsync. I don't think that is going to work: there is no implied 
single response to each query (which I gather is what you're suggesting), 
and I need some way of receiving new messages from upstream.

The existing Actor is both a sink (i.e. it consumes messages from upstream, 
not necessarily responding to each one) and a source (i.e. it can send an 
effectively infinite number of messages). It is using backpressure, but 
only using its own `Ack` message.

For some context, I'm retrofitting some code that is using this WebSockets 
layer around wandoulabs, e.g. 
https://github.com/smootoo/simple-spray-websockets/blob/master/src/test/scala/org/suecarter/websocket/WebSocketSpec.scala#L150

But the newly released akka-io layer expects a Flow.

The `Ack` is received when messages are sent directly to the I/O 
layer. Presumably, the backpressure is implemented differently now... 
although I am not sure how yet. That's the second problem once I can 
actually get everything hooked up.


On Tuesday, 26 May 2015 13:57:43 UTC+1, √ wrote:
>
> Not knowing what your actor is trying to do, what about Flow.mapAsync + 
> ask?
>
> -- 
> Cheers,
> √
> On 26 May 2015 14:54, "Sam Halliday" > 
> wrote:
>
>> Hi all,
>>
>> I need to interface an Actor with an API that requires a Flow.
>>
>> The actor can receive a sealed trait family of inputs and will only send 
>> (a different) sealed family of outputs to upstream, so I suspect that will 
>> help matters.
>>
>> Looking in FlowOps, it looks like I can create a Flow from a partial 
>> function, but there isn't anything that would just simply take an ActorRef.
>>
>> Am I missing something trivial to just upgrade an ActoRef to a Flow? 
>> (Obviously there is a bunch of extra messages the actor will have to 
>> handle, such as backpressure messages etc... but assume that's all taken 
>> care of)
>>
>> Best regards,
>> Sam
>>
>> -- 
>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>> >>>>>>>>>> Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com .
>> To post to this group, send email to akka...@googlegroups.com 
>> .
>> Visit this group at http://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>



[akka-user] Flow from an Actor

2015-05-26 Thread Sam Halliday
Hi all,

I need to interface an Actor with an API that requires a Flow.

The actor can receive a sealed trait family of inputs and will only send a 
(different) sealed family of outputs upstream, so I suspect that will 
help matters.

Looking in FlowOps, it looks like I can create a Flow from a partial 
function, but there isn't anything that would just simply take an ActorRef.

Am I missing something trivial to just upgrade an ActorRef to a Flow? 
(Obviously there are a bunch of extra messages the actor will have to 
handle, such as backpressure messages etc... but assume that's all taken 
care of.)

Best regards,
Sam



[akka-user] Re: Migrating from wandoulabs websockets to stock akka-io

2015-05-26 Thread Sam Halliday
Thanks Arnaud!

The key is in this line

  
https://github.com/jrudolph/akka-http-scala-js-websocket-chat/blob/239af857da2f174ea1624a84b0861c42cf4d1f2d/backend/src/main/scala/example/akkawschat/Webservice.scala#L33

so on the server side, the REST endpoint upgrades to a WebSocket `Flow` 
(i.e. akka streams API) via a directive.
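
In other words, something along these lines (a rough sketch against the 
RC-era routing DSL; the directive name and import paths may have shifted 
since, and the echo Flow is just a stand-in for the real marshalling layer):

  import akka.http.scaladsl.model.ws.{Message, TextMessage}
  import akka.http.scaladsl.server.Directives._
  import akka.stream.scaladsl.Flow

  // the user-supplied Flow is where the marshalling layer would have to live
  val echo: Flow[Message, Message, Any] =
    Flow[Message].collect {
      case TextMessage.Strict(txt) => TextMessage("echo: " + txt)
    }

  val route =
    path("ws") {
      handleWebsocketMessages(echo)
    }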

My legacy code is just an Actor, so I'll have to reimplement my marshalling 
layer and so on around this (or maybe marshalling is already handled).

I've had a look at the Akka Streams documentation but it is not clear to me 
how to manually create a Flow from an Actor. I may have to read this a few 
more times, but I don't think it contains the information that I need... 
there doesn't appear to be a `Flow.actorRef`

http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/stream-integrations.html



On Saturday, 23 May 2015 18:49:26 UTC+1, Arnaud Gourlay wrote:
>
> Hi Sam,
>
> I am also really interested in migrating from Wandoulabs to the new Akka 
> WS implementation.
> So far this is the best example I could find 
> https://github.com/jrudolph/akka-http-scala-js-websocket-chat
>
> Hope this helps,
> Arnaud
>
> On Saturday, May 23, 2015 at 12:38:22 PM UTC+2, Sam Halliday wrote:
>>
>> Hi all,
>>
>> I'm very excited that akka-io now has WebSocket support.
>>
>> In ENSIME, we're planning on using this wrapper over wandoulab's 
>> websockets
>>
>>   https://github.com/smootoo/simple-spray-websockets
>>
>> to easily create a REST/WebSockets endpoint with JSON marshalling for a 
>> sealed family, with backpressure.
>>
>> Smootoo's wrapper works really well, and I have had the pleasure of using 
>> it in a corporate environment so I trust it to be stable.
>>
>>
>> For future proofing, it would seem sensible to move to stock akka-io for 
>> WebSockets, so I'm considering sending a PR to retrofit the wrapper. I have 
>> a couple of questions about that:
>>
>> 1. does akka-io's HTTP singleton actor support WebSockets now? That was 
>> the big caveat about using wandoulabs. It means all kinds of workarounds if 
>> you want to just use HTTP in the same actor system.
>>
>> 2. is there a migration guide for wandoulabs to akka-io? Or would it be 
>> best just to rewrite the wrapper from scratch on top of akka-io?
>>
>> 3. where is the documentation? This just has a big TODO on it
>>
>>   
>> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/websocket-support.html
>>   
>> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/routing-dsl/websocket-support.html
>>
>> I can't even find any examples. I guess the key thing is the handshaking, 
>> which would mean rewriting this bit (and the corresponding client side 
>> handshake)
>>
>>   
>> https://github.com/smootoo/simple-spray-websockets/blob/master/src/main/scala/org/suecarter/websocket/WebSocket.scala#L167
>>
>> Best regards,
>> Sam
>>
>



Re: [akka-user] filter dead letters by message

2015-05-26 Thread Sam Halliday
re-raised as a ticket, because of the CLA

https://github.com/akka/akka/issues/17572

On Tuesday, 26 May 2015 11:32:39 UTC+1, Sam Halliday wrote:
>
> PR it is
>
> https://github.com/akka/akka/pull/17571
>
> Note that your scalariform version is old and hence conflicts with 
> anything using a more recent version (including ensime-sbt).
>
>
> On Tuesday, 26 May 2015 10:59:57 UTC+1, Patrik Nordwall wrote:
>>
>>
>>
>> On Tue, May 26, 2015 at 11:31 AM, Sam Halliday  
>> wrote:
>>
>>> Thanks Patrick!
>>>
>>> DeadLetterSuppression is indeed what I need to add. Unfortunately, it's 
>>> on one of your messages. Can you please add it to Tcp.Close?
>>>
>>>
>>> https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/io/Tcp.scala#L189
>>>
>>> Or do you really want a PR for this one line change?
>>>
>>
>> It would be awesome if the community would be able to help us spot these 
>> things. Preferably with a pull request or else please create an issue 
>> <https://github.com/akka/akka/issues/new>.
>>  
>>
>>>
>>>
>>> On Tuesday, 26 May 2015 10:12:47 UTC+1, Patrik Nordwall wrote:
>>>>
>>>> Hi Sam,
>>>>
>>>> You can do that as described here 
>>>> http://doc.akka.io/docs/akka/2.3.11/scala/logging.html#Logging_of_Dead_Letters
>>>>  
>>>> and in the link to the Event Stream from there.
>>>>
>>>> However, we would like to silence logging of messages that are 
>>>> "expected" to go to deadLetters by adding the marker 
>>>> `DeadLetterSuppression` to such messages. It would be great if you (and 
>>>> other users) can open pull requests (and issues) when you find such 
>>>> messages.
>>>>
>>>> Thanks,
>>>> Patrik
>>>>
>>>> On Fri, May 22, 2015 at 6:37 PM, Sam Halliday  
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Some messages that arrive in dead letters are of no concern at all, 
>>>>> such as `Tcp.Close` (which happens a lot when using Akka IO).
>>>>>
>>>>> Is there any way to filter out the logging of these particular dead 
>>>>> letter messages so that they don't clutter up the log?
>>>>>
>>>>> Best regards,
>>>>> Sam
>>>>>
>>>>> -- 
>>>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>>>> >>>>>>>>>> Check the FAQ: 
>>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>> >>>>>>>>>> Search the archives: 
>>>>> https://groups.google.com/group/akka-user
>>>>> --- 
>>>>> You received this message because you are subscribed to the Google 
>>>>> Groups "Akka User List" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>>> an email to akka-user+...@googlegroups.com.
>>>>> To post to this group, send email to akka...@googlegroups.com.
>>>>> Visit this group at http://groups.google.com/group/akka-user.
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>
>>>>
>>>>
>>>> -- 
>>>>
>>>> Patrik Nordwall
>>>> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
>>>> Twitter: @patriknw
>>>>
>>>>   -- 
>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>> >>>>>>>>>> Check the FAQ: 
>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>> >>>>>>>>>> Search the archives: 
>>> https://groups.google.com/group/akka-user
>>> --- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Akka User List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to akka-user+...@googlegroups.com.
>>> To post to this group, send email to akka...@googlegroups.com.
>>> Visit this group at http://groups.google.com/group/akka-user.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
>>
>> -- 
>>
>> Patrik Nordwall
>> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
>> Twitter: @patriknw
>>
>>  



Re: [akka-user] filter dead letters by message

2015-05-26 Thread Sam Halliday
PR it is

https://github.com/akka/akka/pull/17571

Note that your scalariform version is old and hence conflicts with anything 
using a more recent version (including ensime-sbt).


On Tuesday, 26 May 2015 10:59:57 UTC+1, Patrik Nordwall wrote:
>
>
>
> On Tue, May 26, 2015 at 11:31 AM, Sam Halliday  > wrote:
>
>> Thanks Patrick!
>>
>> DeadLetterSuppression is indeed what I need to add. Unfortunately, it's 
>> on one of your messages. Can you please add it to Tcp.Close?
>>
>>
>> https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/io/Tcp.scala#L189
>>
>> Or do you really want a PR for this one line change?
>>
>
> It would be awesome if the community would be able to help us spot these 
> things. Preferably with a pull request or else please create an issue 
> <https://github.com/akka/akka/issues/new>.
>  
>
>>
>>
>> On Tuesday, 26 May 2015 10:12:47 UTC+1, Patrik Nordwall wrote:
>>>
>>> Hi Sam,
>>>
>>> You can do that as described here 
>>> http://doc.akka.io/docs/akka/2.3.11/scala/logging.html#Logging_of_Dead_Letters
>>>  
>>> and in the link to the Event Stream from there.
>>>
>>> However, we would like to silence logging of messages that are 
>>> "expected" to go to deadLetters by adding the marker 
>>> `DeadLetterSuppression` to such messages. It would be great if you (and 
>>> other users) can open pull requests (and issues) when you find such 
>>> messages.
>>>
>>> Thanks,
>>> Patrik
>>>
>>> On Fri, May 22, 2015 at 6:37 PM, Sam Halliday  
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> Some messages that arrive in dead letters are of no concern at all, 
>>>> such as `Tcp.Close` (which happens a lot when using Akka IO).
>>>>
>>>> Is there any way to filter out the logging of these particular dead 
>>>> letter messages so that they don't clutter up the log?
>>>>
>>>> Best regards,
>>>> Sam
>>>>
>>>> -- 
>>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>>> >>>>>>>>>> Check the FAQ: 
>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>> >>>>>>>>>> Search the archives: 
>>>> https://groups.google.com/group/akka-user
>>>> --- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "Akka User List" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to akka-user+...@googlegroups.com.
>>>> To post to this group, send email to akka...@googlegroups.com.
>>>> Visit this group at http://groups.google.com/group/akka-user.
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>>>
>>>
>>> -- 
>>>
>>> Patrik Nordwall
>>> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
>>> Twitter: @patriknw
>>>
>>>   -- 
>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>> >>>>>>>>>> Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com .
>> To post to this group, send email to akka...@googlegroups.com 
>> .
>> Visit this group at http://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
>
> Patrik Nordwall
> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
>
>  



Re: [akka-user] filter dead letters by message

2015-05-26 Thread Sam Halliday
Thanks Patrik!

DeadLetterSuppression is indeed what I need to add. Unfortunately, the 
message in question is one of yours. Can you please add it to Tcp.Close?

https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/io/Tcp.scala#L189

Or do you really want a PR for this one line change?


On Tuesday, 26 May 2015 10:12:47 UTC+1, Patrik Nordwall wrote:
>
> Hi Sam,
>
> You can do that as described here 
> http://doc.akka.io/docs/akka/2.3.11/scala/logging.html#Logging_of_Dead_Letters
>  
> and in the link to the Event Stream from there.
>
> However, we would like to silence logging of messages that are "expected" 
> to go to deadLetters by adding the marker `DeadLetterSuppression` to such 
> messages. It would be great if you (and other users) can open pull requests 
> (and issues) when you find such messages.
>
> Thanks,
> Patrik
>
> On Fri, May 22, 2015 at 6:37 PM, Sam Halliday  > wrote:
>
>> Hi all,
>>
>> Some messages that arrive in dead letters are of no concern at all, such 
>> as `Tcp.Close` (which happens a lot when using Akka IO).
>>
>> Is there any way to filter out the logging of these particular dead 
>> letter messages so that they don't clutter up the log?
>>
>> Best regards,
>> Sam
>>
>> -- 
>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>> >>>>>>>>>> Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com .
>> To post to this group, send email to akka...@googlegroups.com 
>> .
>> Visit this group at http://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
>
> Patrik Nordwall
> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
>
>  



[akka-user] Migrating from wandoulabs websockets to stock akka-io

2015-05-23 Thread Sam Halliday
Hi all,

I'm very excited that akka-io now has WebSocket support.

In ENSIME, we're planning on using this wrapper over wandoulabs' websockets

  https://github.com/smootoo/simple-spray-websockets

to easily create a REST/WebSockets endpoint with JSON marshalling for a 
sealed family, with backpressure.

Smootoo's wrapper works really well, and I have had the pleasure of using 
it in a corporate environment so I trust it to be stable.


For future proofing, it would seem sensible to move to stock akka-io for 
WebSockets, so I'm considering sending a PR to retrofit the wrapper. I have 
a couple of questions about that:

1. does akka-io's HTTP singleton actor support WebSockets now? That was the 
big caveat about using wandoulabs. It means all kinds of workarounds if you 
want to just use HTTP in the same actor system.

2. is there a migration guide from wandoulabs to akka-io? Or would it be 
best just to rewrite the wrapper from scratch on top of akka-io?

3. where is the documentation? This just has a big TODO on it

  
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/websocket-support.html
  
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/routing-dsl/websocket-support.html

I can't even find any examples. I guess the key thing is the handshaking, 
which would mean rewriting this bit (and the corresponding client side 
handshake)

  
https://github.com/smootoo/simple-spray-websockets/blob/master/src/main/scala/org/suecarter/websocket/WebSocket.scala#L167

Best regards,
Sam



[akka-user] filter dead letters by message

2015-05-22 Thread Sam Halliday
Hi all,

Some messages that arrive in dead letters are of no concern at all, such as 
`Tcp.Close` (which happens a lot when using Akka IO).

Is there any way to filter out the logging of these particular dead letter 
messages so that they don't clutter up the log?
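
The best I've come up with myself is switching off the built-in logging and 
subscribing my own listener that skips the noisy ones. A rough sketch (the 
listener class and its name are mine; it assumes akka.log-dead-letters = off 
in the config):

  import akka.actor.{Actor, ActorLogging, DeadLetter, Props}
  import akka.io.Tcp

  class QuietDeadLetterListener extends Actor with ActorLogging {
    override def preStart(): Unit =
      context.system.eventStream.subscribe(self, classOf[DeadLetter])

    def receive = {
      case DeadLetter(Tcp.Close, _, _) => // expected noise from Akka IO, ignore
      case DeadLetter(msg, from, to)   =>
        log.info("dead letter from {} to {}: {}", from, to, msg)
    }
  }

  // system.actorOf(Props[QuietDeadLetterListener], "quiet-dead-letters")

But a marker on the messages themselves would obviously be cleaner.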

Best regards,
Sam



Re: [akka-user] kernel parameters needed to use Akka IO

2015-03-25 Thread Sam Halliday
Thanks Patrik,

Yeah, very weird. I also tried to reproduce it with big ByteStrings but was 
unable to do so ... even on the failing machine. Yet it failed very 
consistently in my application.

The intention was always to use backpressure, and hopefully that will 
reduce the probability of the error occurring again.

Best regards,
Sam


On Wednesday, 25 March 2015 13:02:31 UTC, Patrik Nordwall wrote:
>
>
>
> On Thu, Mar 19, 2015 at 12:42 PM, Sam Halliday  > wrote:
>
>> Hi all,
>>
>> We are using Spray IO (with the wandoulabs WebSockets layer) on really 
>> old RHEL5 boxes in our QA environments.
>>
>> Bizarrely, our server beta release was working fine on one box, but 
>> failing to write messages on another, despite the kernels and software 
>> versions being identical.
>>
>> Clients were able to connect to the server, but as soon as the server 
>> started to write to the socket, we got this sort of thing:
>>
>> 19 Mar 15 10:56:55.542 
>> HttpServerConnectionakka://MDES/user/IO-UHTTP/listener-0/0 [ 
>> MDES-akka.actor.default-dispatcher-4] WARN  - CommandFailed for Tcp.Write 
>> text frame:  ...
>> 19 Mar 15 10:56:55.543 
>> HttpServerConnectionakka://MDES/user/IO-UHTTP/listener-0/0 [ 
>> MDES-akka.actor.default-dispatcher-4] WARN  - event pipeline: dropped 
>> CommandFailed(Write(ByteString(),NoAck(null)))
>>
>>
>> The boxes are running "Red Hat Enterprise Linux Server release 5.8 
>> (Tikanga)" with 2.6.18-308.el5 on x86_64 cores. We're using scala 2.11.5, 
>> Java 1.6.0_40 and Akka 2.3.8 / Spray-IO 1.3.2.
>>
>> We spotted that the kernel parameters were different on the boxes, this 
>> being the diff:
>>
>> net.core.rmem_default = 262144
>> net.core.rmem_max = 16777216
>> net.core.wmem_default = 262144
>> net.core.wmem_max = 16777216
>> net.ipv4.tcp_rmem = 4096 4194304 16777216
>> net.ipv4.tcp_sack = 1
>> net.ipv4.tcp_timestamps = 1
>> net.ipv4.tcp_window_scaling = 1
>> net.ipv4.tcp_wmem = 4096 4194304 16777216
>>
>> and by using those parameters on all boxes the problems went away.
>>
>> However, we have no control over setting these parameters on PROD boxes 
>> so we need a workaround that works without them.
>>
>> But this is extremely concerning, why was the failure happening because 
>> of kernel parameters? Is this a bug in NIO, Spray IO, or Spray-WebSockets? 
>> Wandoulabs aren't doing anything unusual as you can see 
>> https://github.com/wandoulabs/spray-websocket/blob/master/spray-websocket/src/main/scala/spray/can/server/UpgradableHttpListener.scala
>>
>> Most importantly, we need a workaround... does anybody have any 
>> suggestions?
>>
>> The current theory is that the default kernel buffer size is too low to 
>> accept the outbound WebSocket frames. 
>>
>
> That sounds like an explanation. 
>
> I did some testing with sending large ByteString with akka.io.Tcp. It 
> splits it up in chunks of the size of the configured direct-buffer-size, 
> but no CommandFailed.
>
> I'm afraid I don't know why it fails with your kernel settings. You might 
> get a better answer at the Spray mailing list.
>
> Regards,
> Patrik
>
>  
>
>> On the failing boxes (which we can't change), this is
>>
>> $ cat /proc/sys/net/ipv4/tcp_wmem 
>> 4096 16384 4194304
>>
>> and our messages are a few kb each of JSON.
>>
>> Best regards,
>> Sam
>>
>>  -- 
>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>> >>>>>>>>>> Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com .
>> To post to this group, send email to akka...@googlegroups.com 
>> .
>> Visit this group at http://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
>
> Patrik Nordwall
> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
>
>  



[akka-user] Spray IO chatty ConnectionHandler

2015-03-20 Thread Sam Halliday
Hi all,

In Spray IO, the ConnectionHandler.baseCommandPipeline is extremely chatty 
when using backpressure.

  //# final-stages
  def baseCommandPipeline(tcpConnection: ActorRef): Pipeline[Command] = {
    case x @ (_: Tcp.WriteCommand | _: Tcp.CloseCommand) ⇒ tcpConnection ! x
    case Pipeline.Tell(receiver, msg, sender) ⇒ receiver.tell(msg, sender)
    case x @ (Tcp.SuspendReading | Tcp.ResumeReading | Tcp.ResumeWriting) ⇒ tcpConnection ! x
    case _: Droppable ⇒ // don't warn
    case cmd ⇒ log.warning("command pipeline: dropped {}", cmd)
  }

but receiving a user-land Ack seems a perfectly reasonable thing to expect.

Am I seeing a real error, or is this class just really chatty? (Given that 
unhandled messages are usually logged at debug, going to WARN is very 
chatty indeed!)

Do my Acks maybe need to implement Droppable until this is all rolled into 
Akka HTTP?

Best regards,
Sam



[akka-user] Re: kernel parameters needed to use Akka IO

2015-03-19 Thread Sam Halliday
On Thursday, 19 March 2015 11:42:21 UTC, Sam Halliday wrote:
>
> The boxes are running "Red Hat Enterprise Linux Server release 5.8 
> (Tikanga)" with 2.6.18-308.el5 on x86_64 cores. We're using scala 2.11.5, 
> Java 1.6.0_40 and Akka 2.3.8 / Spray-IO 1.3.2.
>

Sorry I meant Java 1.8.0_40. 



[akka-user] kernel parameters needed to use Akka IO

2015-03-19 Thread Sam Halliday
Hi all,

We are using Spray IO (with the wandoulabs WebSockets layer) on really old 
RHEL5 boxes in our QA environments.

Bizarrely, our server beta release was working fine on one box, but failing 
to write messages on another, despite the kernels and software versions 
being identical.

Clients were able to connect to the server, but as soon as the server 
started to write to the socket, we got this sort of thing:

19 Mar 15 10:56:55.542 HttpServerConnectionakka://MDES/user/IO-UHTTP/listener-0/0 [MDES-akka.actor.default-dispatcher-4] WARN - CommandFailed for Tcp.Write text frame: ...
19 Mar 15 10:56:55.543 HttpServerConnectionakka://MDES/user/IO-UHTTP/listener-0/0 [MDES-akka.actor.default-dispatcher-4] WARN - event pipeline: dropped CommandFailed(Write(ByteString(),NoAck(null)))


The boxes are running "Red Hat Enterprise Linux Server release 5.8 
(Tikanga)" with 2.6.18-308.el5 on x86_64 cores. We're using scala 2.11.5, 
Java 1.6.0_40 and Akka 2.3.8 / Spray-IO 1.3.2.

We spotted that the kernel parameters were different on the boxes, this 
being the diff:

net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 4194304 16777216
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096 4194304 16777216

and by using those parameters on all boxes the problems went away.

However, we have no control over setting these parameters on PROD boxes so 
we need a workaround that works without them.

But this is extremely concerning: why was the failure happening because of 
kernel parameters? Is this a bug in NIO, Spray IO, or Spray-WebSockets? 
Wandoulabs aren't doing anything unusual, as you can see at 
https://github.com/wandoulabs/spray-websocket/blob/master/spray-websocket/src/main/scala/spray/can/server/UpgradableHttpListener.scala

Most importantly, we need a workaround... does anybody have any suggestions?

The current theory is that the default kernel buffer size is too low to 
accept the outbound WebSocket frames. On the failing boxes (which we can't 
change), this is

$ cat /proc/sys/net/ipv4/tcp_wmem 
4096 16384 4194304

and our messages are a few kb each of JSON.
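
The only workaround I can think of so far is to switch to explicitly 
acknowledged writes and buffer while a write is in flight, so a full send 
buffer shows up as backpressure rather than dropped frames. A rough sketch, 
with illustrative names (WriteHandler, Ack) that are not from the wandoulabs 
code:

import akka.actor.{ Actor, ActorRef }
import akka.io.Tcp
import akka.util.ByteString
import scala.collection.immutable.Queue

class WriteHandler(connection: ActorRef) extends Actor {
  case object Ack extends Tcp.Event

  private var pending = Queue.empty[ByteString]
  private var writing = false

  def receive: Receive = {
    case data: ByteString if !writing =>
      connection ! Tcp.Write(data, Ack)
      writing = true
    case data: ByteString =>
      pending = pending.enqueue(data) // buffer while a write is in flight
    case Ack =>
      pending.dequeueOption match {
        case Some((next, rest)) =>
          connection ! Tcp.Write(next, Ack)
          pending = rest
        case None =>
          writing = false
      }
    case Tcp.CommandFailed(w: Tcp.Write) =>
      // the send buffer was full: requeue the data and ask to be told
      // when writing is possible again
      pending = w.data +: pending
      connection ! Tcp.ResumeWriting
    case Tcp.WritingResumed =>
      pending.dequeueOption.foreach { case (next, rest) =>
        connection ! Tcp.Write(next, Ack)
        pending = rest
      }
  }
}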

Best regards,
Sam



Re: [akka-user] Stop TestActorRefs that fail?

2015-03-02 Thread Sam Halliday
Lacking the ability to use TestActorRef, I had to use this horrible thing. 
It would be nice to be able to set the supervisor strategy of the TestKit 
equivalent of the Guardian to Stopping.

import akka.actor.{ Actor, ActorRef, Props, SupervisorStrategy }
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

// primarily for use in testing, where the default strategy is to
// Restart and we typically want to Stop, and we can't use
// TestActorRef because we have a Stash.
def actorOfWithSupervisor(
  props: Props,
  grandchildName: Option[String] = None
)(
  strategy: SupervisorStrategy,
  supervisorName: Option[String] = None
): ActorRef = {
  val Magic = "supervised"
  val supervisorProps = Props(new Actor {
    var grandchild: ActorRef = _
    override def preStart(): Unit = {
      grandchild = grandchildName match {
        case Some(name) => context.actorOf(props, name)
        case None       => context.actorOf(props)
      }
    }

    def receive = {
      case Magic => sender ! grandchild
    }
    override def supervisorStrategy = strategy
  })
  val supervisor = supervisorName match {
    case Some(name) => system.actorOf(supervisorProps, name)
    case None       => system.actorOf(supervisorProps)
  }
  import akka.pattern.ask
  implicit val AskTimeout = Timeout(1.minute)
  val grandchild = (supervisor ? Magic).mapTo[ActorRef]
  Await.result(grandchild, Duration.Inf)
}
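
If akka.actor.guardian-supervisor-strategy does what I think it does, the 
test system itself could instead be created with a stopping guardian. A 
sketch, assuming that setting is honoured by this Akka version:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object StoppingTestSystem {
  // swaps the user guardian's Restart-on-failure default for the stopping
  // strategy, so failing top-level (test) actors are stopped, not restarted
  private val config = ConfigFactory
    .parseString("akka.actor.guardian-supervisor-strategy = \"akka.actor.StoppingSupervisorStrategy\"")
    .withFallback(ConfigFactory.load())

  def apply(name: String): ActorSystem = ActorSystem(name, config)
}

The TestKit would then be constructed with StoppingTestSystem("my-spec") 
instead of a plain ActorSystem.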




On Friday, 27 February 2015 14:47:49 UTC, Sam Halliday wrote:
>
> Aah, yes, good spot. Let's also add some type safety:
>
> /**
>  * `TestActorRef`s are restarted by default on exceptions, but this
>  * just stops them.
>  *
>  * https://groups.google.com/forum/#!topic/akka-user/0Ene7WaDyng
>  */
> trait StoppingTestActorRefs {
>   this: TestKit =>
>
>   private class StoppingSupervisor extends Actor {
> def receive = Actor.emptyBehavior
> override def supervisorStrategy = SupervisorStrategy.stoppingStrategy
>   }
>   private val supervisor = system.actorOf(Props[StoppingSupervisor])
>   private def randomName = UUID.randomUUID().toString.replace("-","")
>
>   def StoppingTestActorRef[T <: Actor : ClassTag](props: Props) =
> TestActorRef[T](props, supervisor, randomName)
> }
>
>
> Now it's not even a one liner change in the tests to get the behaviour I 
> want :-)
>
>
> On Friday, 27 February 2015 13:08:05 UTC, rkuhn wrote:
>>
>> Hi Sam,
>>
>> yes, that should work as intended; you can shave two lines off by using 
>> SupervisorStrategy.stoppingStrategy.
>>
>> Regards,
>>
>> Roland
>>
>> 27 feb 2015 kl. 11:37 skrev Sam Halliday :
>>
>> My hack:
>>
>> /**
>>  * `TestActorRef`s are restarted by default on exceptions, but this
>>  * just stops them.
>>  *
>>  * https://groups.google.com/forum/#!topic/akka-user/0Ene7WaDyng
>>  */
>> class StoppingSupervisor extends Actor {
>>   def receive = Actor.emptyBehavior
>>   import SupervisorStrategy._
>>   override def supervisorStrategy: SupervisorStrategy = 
>> OneForOneStrategy()({
>> case _: Exception => Stop
>>   })
>> }
>> trait StoppingTestActorRefs {
>>   this: TestKit =>
>>
>>   private val supervisor = system.actorOf(Props[StoppingSupervisor])
>>   private def randomName = UUID.randomUUID().toString.replace("-","")
>>
>>   def StoppingTestActorRef(props: Props) =
>> TestActorRef.apply(props, supervisor, randomName)
>>
>> }
>>
>>
>> On Thursday, 26 February 2015 16:24:44 UTC, Sam Halliday wrote:
>>>
>>> Hi,
>>>
>>> I am testing an Actor and mocking out one of its dependencies. When the 
>>> test fails, e.g. an unexpected call to the mock (which causes an exception) 
>>> the Actor is restarted and all I end up with is several GB of logging 
>>> because it gets into an infinite restart loop.
>>>
>>> Is there any *simple* way (e.g. an extra parameter somewhere) that lets 
>>> me change the supervisor strategy of TestActorRefs to Stop instead of 
>>> Restart? (I don't really want to have to wrap it in a monitor as that would 
>>> be a pain).
>>>
>>> Best regards,
>>> Sam
>>>
>>

[akka-user] Re: unstashAll() not queuing messages until actor death

2015-03-02 Thread Sam Halliday
Grr, so it seems that TestActorRef really is incompatible with Stash.

  http://stackoverflow.com/questions/18335127

This is a real shame.
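
For now the fallback seems to be exercising the actor through a real 
dispatcher and asserting on a TestProbe, which keeps the deque-based mailbox 
that Stash needs. A self-contained sketch (Republisher and the message types 
are stand-ins, not the real code):

import akka.actor.{ Actor, ActorRef, ActorSystem, Props, Stash }
import akka.testkit.TestProbe

object StashProbeSketch extends App {
  case class PublishItem(payload: String)
  case object CaughtUp

  class Republisher(downstream: ActorRef) extends Actor with Stash {
    def receive: Receive = catchUp
    def catchUp: Receive = {
      case _: PublishItem => stash()
      case CaughtUp =>
        context.become(republishing)
        unstashAll()
    }
    def republishing: Receive = {
      case item: PublishItem => downstream ! item
    }
  }

  implicit val system: ActorSystem = ActorSystem("stash-probe-sketch")
  val downstream = TestProbe()
  val republisher = system.actorOf(Props(new Republisher(downstream.ref)))

  republisher ! PublishItem("backlog") // stashed while catching up
  republisher ! CaughtUp               // flips behaviour and unstashes
  downstream.expectMsg(PublishItem("backlog"))
  system.shutdown()
}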

On Monday, 2 March 2015 12:35:22 UTC, Sam Halliday wrote:
>
> Wow, I think I figured this out... and it's quite scary.
>
> Stash needs an unbounded mailbox. I didn't quite pick up on this because 
> somebody in our team had made the default dispatcher an unbounded mailbox 
> at some point and now there is no going back.
>
> Then when I wanted to use TestActorRef, it blew up when I tried to create 
> my actor because it uses an unbounded mailbox, so I had to give it a 
> bounded mailbox to pass the test.
>
> *That seems to have disabled Stash without any warning* for this Actor.
>
> There seems to be several funky things going on here:
>
> 1. why doesn't initialisation blow up when I have an Actor with Stash used 
> with an unbounded mailbox?
> 2. why is TestActorRef enforcing that unbounded mailboxes cannot be used? 
> --- doesn't this mean that TestActorRef cannot be used to test an Actor 
> with Stash?
>
>
>
> On Monday, 2 March 2015 12:13:40 UTC, Sam Halliday wrote:
>>
>> Looks like I'm seeing the same thing as
>>
>>   https://groups.google.com/d/msg/akka-user/Vb2dQtZX6DI/lZtEvg8DwkkJ
>>
>> but I see the "Awaken" message immediately ... which means the 
>> `unstashAll` simply hasn't added anything onto the actor queue (until after 
>> the actor dies?)
>>
>> On Monday, 2 March 2015 12:05:52 UTC, Sam Halliday wrote:
>>>
>>> Hi all,
>>>
>>> I have an Actor which is using Stash and is working through two 
>>> different Receive behaviours with context.become. Messages are stashed 
>>> until we're ready to receive them.
>>>
>>> I've used context.become(), stash() and unstashAll() several times, so 
>>> I'm pretty familiar with them, but I'm seeing a bizarre behaviour in this 
>>> Actor and I was wondering if I'm using it wrong in a subtle way.
>>>
>>> In the part of the Actor code that is causing the problem, I am doing 
>>> this:
>>>
>>>   def catchUp: Receive = LoggingReceive {
>>> case item: PublishItem =>
>>>   stash()
>>> case event: Event =>
>>>   context.become(republishing)
>>>   unstashAll()
>>>   }
>>>
>>>   def republishing: Receive = LoggingReceive {
>>> case event: PublishItem =>
>>>   downstream ! event
>>> case e: Event => // ignore leftover messages from the dying loader
>>>   }
>>>
>>>
>>> In my test I send a bunch of PublishItem's, then send the Event that 
>>> changes the behaviour. Then I assert that 'downstream' receives all the 
>>> backlogged PublishItems.
>>>
>>> However, what I'm actually seeing are "handled event"s for the 
>>> PublishItem and Event into the catchUp behaviour. Then I see the test 
>>> timing out waiting for the PublishItem to appear in downstream and then 
>>> (the really weird bit) the actor is stopped and AFTER this actor stops, all 
>>> the PublishItems show up in the dead letters mailbox.
>>>
>>> Any ideas?
>>>
>>> As a workaround, I am managing my own stash as a Queue, but this sucks 
>>> because my stashed messages arrive after other messages that are already 
>>> queued and that breaks the assumed logic of the whole actor.
>>>
>>> I'm using akka 2.3.9 on scala 2.11.5
>>>
>>> Best regards,
>>> Sam
>>>
>>>



[akka-user] Re: unstashAll() not queuing messages until actor death

2015-03-02 Thread Sam Halliday
Wow, I think I figured this out... and it's quite scary.

Stash needs an unbounded mailbox. I didn't quite pick up on this because 
somebody in our team had made the default dispatcher an unbounded mailbox 
at some point and now there is no going back.

Then when I wanted to use TestActorRef, it blew up when I tried to create 
my actor because it uses an unbounded mailbox, so I had to give it a 
bounded mailbox to pass the test.

*That seems to have disabled Stash without any warning* for this Actor.

There seems to be several funky things going on here:

1. why doesn't initialisation blow up when I have an Actor with Stash used 
with an unbounded mailbox?
2. why is TestActorRef enforcing that unbounded mailboxes cannot be used? 
--- doesn't this mean that TestActorRef cannot be used to test an Actor 
with Stash?



On Monday, 2 March 2015 12:13:40 UTC, Sam Halliday wrote:
>
> Looks like I'm seeing the same thing as
>
>   https://groups.google.com/d/msg/akka-user/Vb2dQtZX6DI/lZtEvg8DwkkJ
>
> but I see the "Awaken" message immediately ... which means the 
> `unstashAll` simply hasn't added anything onto the actor queue (until after 
> the actor dies?)
>
> On Monday, 2 March 2015 12:05:52 UTC, Sam Halliday wrote:
>>
>> Hi all,
>>
>> I have an Actor which is using Stash and is working through two different 
>> Receive behaviours with context.become. Messages are stashed until we're 
>> ready to receive them.
>>
>> I've used context.become(), stash() and unstashAll() several times, so 
>> I'm pretty familiar with them, but I'm seeing a bizarre behaviour in this 
>> Actor and I was wondering if I'm using it wrong in a subtle way.
>>
>> In the part of the Actor code that is causing the problem, I am doing 
>> this:
>>
>>   def catchUp: Receive = LoggingReceive {
>> case item: PublishItem =>
>>   stash()
>> case event: Event =>
>>   context.become(republishing)
>>   unstashAll()
>>   }
>>
>>   def republishing: Receive = LoggingReceive {
>> case event: PublishItem =>
>>   downstream ! event
>> case e: Event => // ignore leftover messages from the dying loader
>>   }
>>
>>
>> In my test I send a bunch of PublishItem's, then send the Event that 
>> changes the behaviour. Then I assert that 'downstream' receives all the 
>> backlogged PublishItems.
>>
>> However, what I'm actually seeing are "handled event"s for the 
>> PublishItem and Event into the catchUp behaviour. Then I see the test 
>> timing out waiting for the PublishItem to appear in downstream and then 
>> (the really weird bit) the actor is stopped and AFTER this actor stops, all 
>> the PublishItems show up in the dead letters mailbox.
>>
>> Any ideas?
>>
>> As a workaround, I am managing my own stash as a Queue, but this sucks 
>> because my stashed messages arrive after other messages that are already 
>> queued and that breaks the assumed logic of the whole actor.
>>
>> I'm using akka 2.3.9 on scala 2.11.5
>>
>> Best regards,
>> Sam
>>
>>



[akka-user] Re: unstashAll() not queuing messages until actor death

2015-03-02 Thread Sam Halliday
Looks like I'm seeing the same thing as

  https://groups.google.com/d/msg/akka-user/Vb2dQtZX6DI/lZtEvg8DwkkJ

but I see the "Awaken" message immediately ... which means the `unstashAll` 
simply hasn't added anything onto the actor queue (until after the actor 
dies?)

On Monday, 2 March 2015 12:05:52 UTC, Sam Halliday wrote:
>
> Hi all,
>
> I have an Actor which is using Stash and is working through two different 
> Receive behaviours with context.become. Messages are stashed until we're 
> ready to receive them.
>
> I've used context.become(), stash() and unstashAll() several times, so I'm 
> pretty familiar with them, but I'm seeing a bizarre behaviour in this Actor 
> and I was wondering if I'm using it wrong in a subtle way.
>
> In the part of the Actor code that is causing the problem, I am doing this:
>
>   def catchUp: Receive = LoggingReceive {
> case item: PublishItem =>
>   stash()
> case event: Event =>
>   context.become(republishing)
>   unstashAll()
>   }
>
>   def republishing: Receive = LoggingReceive {
> case event: PublishItem =>
>   downstream ! event
> case e: Event => // ignore leftover messages from the dying loader
>   }
>
>
> In my test I send a bunch of PublishItem's, then send the Event that 
> changes the behaviour. Then I assert that 'downstream' receives all the 
> backlogged PublishItems.
>
> However, what I'm actually seeing are "handled event"s for the PublishItem 
> and Event into the catchUp behaviour. Then I see the test timing out 
> waiting for the PublishItem to appear in downstream and then (the really 
> weird bit) the actor is stopped and AFTER this actor stops, all the 
> PublishItems show up in the dead letters mailbox.
>
> Any ideas?
>
> As a workaround, I am managing my own stash as a Queue, but this sucks 
> because my stashed messages arrive after other messages that are already 
> queued and that breaks the assumed logic of the whole actor.
>
> I'm using akka 2.3.9 on scala 2.11.5
>
> Best regards,
> Sam
>
>



[akka-user] unstashAll() not queuing messages until actor death

2015-03-02 Thread Sam Halliday
Hi all,

I have an Actor which is using Stash and is working through two different 
Receive behaviours with context.become. Messages are stashed until we're 
ready to receive them.

I've used context.become(), stash() and unstashAll() several times, so I'm 
pretty familiar with them, but I'm seeing a bizarre behaviour in this Actor 
and I was wondering if I'm using it wrong in a subtle way.

In the part of the Actor code that is causing the problem, I am doing this:

  def catchUp: Receive = LoggingReceive {
    case item: PublishItem =>
      stash()
    case event: Event =>
      context.become(republishing)
      unstashAll()
  }

  def republishing: Receive = LoggingReceive {
    case event: PublishItem =>
      downstream ! event
    case e: Event => // ignore leftover messages from the dying loader
  }


In my test I send a bunch of PublishItem's, then send the Event that 
changes the behaviour. Then I assert that 'downstream' receives all the 
backlogged PublishItems.

However, what I'm actually seeing are "handled event"s for the PublishItem 
and Event into the catchUp behaviour. Then I see the test timing out 
waiting for the PublishItem to appear in downstream and then (the really 
weird bit) the actor is stopped and AFTER this actor stops, all the 
PublishItems show up in the dead letters mailbox.

Any ideas?

As a workaround, I am managing my own stash as a Queue, but this sucks 
because my stashed messages arrive after other messages that are already 
queued and that breaks the assumed logic of the whole actor.

I'm using akka 2.3.9 on scala 2.11.5

Best regards,
Sam



Re: [akka-user] Stop TestActorRefs that fail?

2015-02-27 Thread Sam Halliday
Aah, yes, good spot. Let's also add some type safety:

/**
 * `TestActorRef`s are restarted by default on exceptions, but this
 * just stops them.
 *
 * https://groups.google.com/forum/#!topic/akka-user/0Ene7WaDyng
 */
trait StoppingTestActorRefs {
  this: TestKit =>

  private class StoppingSupervisor extends Actor {
    def receive = Actor.emptyBehavior
    override def supervisorStrategy = SupervisorStrategy.stoppingStrategy
  }
  private val supervisor = system.actorOf(Props[StoppingSupervisor])
  private def randomName = UUID.randomUUID().toString.replace("-","")

  def StoppingTestActorRef[T <: Actor : ClassTag](props: Props) =
    TestActorRef[T](props, supervisor, randomName)
}


Now it's not even a one liner change in the tests to get the behaviour I 
want :-)


On Friday, 27 February 2015 13:08:05 UTC, rkuhn wrote:
>
> Hi Sam,
>
> yes, that should work as intended; you can shave two lines off by using 
> SupervisorStrategy.stoppingStrategy.
>
> Regards,
>
> Roland
>
> 27 feb 2015 kl. 11:37 skrev Sam Halliday  >:
>
> My hack:
>
> /**
>  * `TestActorRef`s are restarted by default on exceptions, but this
>  * just stops them.
>  *
>  * https://groups.google.com/forum/#!topic/akka-user/0Ene7WaDyng
>  */
> class StoppingSupervisor extends Actor {
>   def receive = Actor.emptyBehavior
>   import SupervisorStrategy._
>   override def supervisorStrategy: SupervisorStrategy = 
> OneForOneStrategy()({
> case _: Exception => Stop
>   })
> }
> trait StoppingTestActorRefs {
>   this: TestKit =>
>
>   private val supervisor = system.actorOf(Props[StoppingSupervisor])
>   private def randomName = UUID.randomUUID().toString.replace("-","")
>
>   def StoppingTestActorRef(props: Props) =
> TestActorRef.apply(props, supervisor, randomName)
>
> }
>
>
> On Thursday, 26 February 2015 16:24:44 UTC, Sam Halliday wrote:
>>
>> Hi,
>>
>> I am testing an Actor and mocking out one of its dependencies. When the 
>> test fails, e.g. an unexpected call to the mock (which causes an exception) 
>> the Actor is restarted and all I end up with is several GB of logging 
>> because it gets into an infinite restart loop.
>>
>> Is there any *simple* way (e.g. an extra parameter somewhere) that lets 
>> me change the supervisor strategy of TestActorRefs to Stop instead of 
>> Restart? (I don't really want to have to wrap it in a monitor as that would 
>> be a pain).
>>
>> Best regards,
>> Sam
>>
>
>
>
>
>
> *Dr. Roland Kuhn*
> *Akka Tech Lead*
> Typesafe <http://typesafe.com/> – Reactive apps on the JVM.
> twitter: @rolandkuhn
> <http://twitter.com/#!/rolandkuhn>
>  
>



[akka-user] Re: Stop TestActorRefs that fail?

2015-02-27 Thread Sam Halliday
My hack:

/**
 * `TestActorRef`s are restarted by default on exceptions, but this
 * just stops them.
 *
 * https://groups.google.com/forum/#!topic/akka-user/0Ene7WaDyng
 */
class StoppingSupervisor extends Actor {
  def receive = Actor.emptyBehavior
  import SupervisorStrategy._
  override def supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy()({
      case _: Exception => Stop
    })
}
trait StoppingTestActorRefs {
  this: TestKit =>

  private val supervisor = system.actorOf(Props[StoppingSupervisor])
  private def randomName = UUID.randomUUID().toString.replace("-","")

  def StoppingTestActorRef(props: Props) =
    TestActorRef.apply(props, supervisor, randomName)

}


On Thursday, 26 February 2015 16:24:44 UTC, Sam Halliday wrote:
>
> Hi,
>
> I am testing an Actor and mocking out one of its dependencies. When the 
> test fails, e.g. an unexpected call to the mock (which causes an exception) 
> the Actor is restarted and all I end up with is several GB of logging 
> because it gets into an infinite restart loop.
>
> Is there any *simple* way (e.g. an extra parameter somewhere) that lets me 
> change the supervisor strategy of TestActorRefs to Stop instead of Restart? 
> (I don't really want to have to wrap it in a monitor as that would be a 
> pain).
>
> Best regards,
> Sam
>



[akka-user] Stop TestActorRefs that fail?

2015-02-26 Thread Sam Halliday
Hi,

I am testing an Actor and mocking out one of its dependencies. When the 
test fails, e.g. an unexpected call to the mock (which causes an exception) 
the Actor is restarted and all I end up with is several GB of logging 
because it gets into an infinite restart loop.

Is there any *simple* way (e.g. an extra parameter somewhere) that lets me 
change the supervisor strategy of TestActorRefs to Stop instead of Restart? 
(I don't really want to have to wrap it in a monitor as that would be a 
pain).

Best regards,
Sam



[akka-user] Re: backpressure on spray-can (and wandoulab websockets)

2015-02-20 Thread Sam Halliday
To follow up on this, at least for TextFrames, I managed to trick the 
wandoulabs websockets layer into sending an Ack.

Basically, instead of using their recommended API (in the server worker)

  val Pong = TextFrame("PONG")
  ...
  send(Pong)

I am bypassing the frame rendering logic and doing this

serverConnection ! Tcp.Write(FrameRender.render(Pong, maskingKey), Ack)

that gives me back an Ack on every write.
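
Spelled out, what I ended up with in the worker has roughly this shape 
(AckedWrites is my own sketch, not anything from wandoulabs; render stands 
for the FrameRender.render call above, and serverConnection is the worker's 
own):

import akka.actor.{ Actor, ActorRef }
import akka.io.Tcp
import akka.util.ByteString
import scala.collection.immutable.Queue

// mixed into the server worker; serverConnection is the worker's connection
// ref and render wraps whatever produces the wire bytes (above, that is
// FrameRender.render(frame, maskingKey))
trait AckedWrites[F] { this: Actor =>
  def serverConnection: ActorRef
  def render(frame: F): ByteString

  case object Ack extends Tcp.Event

  private var queued = Queue.empty[F]
  private var awaitingAck = false

  // only one Tcp.Write outstanding at a time, so the connection (and
  // ultimately the kernel send buffer) can push back on us
  def sendWithAck(frame: F): Unit =
    if (awaitingAck) queued = queued.enqueue(frame)
    else {
      serverConnection ! Tcp.Write(render(frame), Ack)
      awaitingAck = true
    }

  // call this from the worker's receive on `case Ack =>`
  def frameAcked(): Unit =
    queued.dequeueOption match {
      case Some((next, rest)) =>
        queued = rest
        serverConnection ! Tcp.Write(render(next), Ack)
      case None => awaitingAck = false
    }
}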

It seems a bit of a shame that wandoulabs didn't consider backpressure as a 
fundamental feature - for me it's the whole point of using Akka IO.

I raised a ticket with them [5] but I really hope websockets will be part 
of akka-streams/http 1.1+ as I suspect it'll be implemented somewhat 
differently :-)

[5] https://github.com/wandoulabs/spray-websocket/issues/68

Best regards,
Sam

On Friday, 20 February 2015 17:09:53 UTC, Sam Halliday wrote:
>
> Hi all,
>
> I am trying to enable ACK-based backpressure on a websocket server.
>
> The Akka IO backpressure is really fantastic and easy to understand[1].
>
> The Spray Can docs mention something about automatic backpressure[2], but 
> I can't find any more information about what this means. I presume this is 
> only relevant for writing out HTTP responses, but I'd like to know more 
> about this nonetheless.
>
> The wandoulabs websockets extension to spray-can pretty much drops down to 
> raw TCP once the WebSocket Upgrade request is processed, but their 
> ConnectionManager doesn't take ByteStrings, it takes "FrameCommand extends 
> Tcp.Command" messages. And it's therefore not possible to wrap these with a 
> Tcp.Write. If one follows the usages of FrameCommand, one ends up with 
> FrameRendering[4] which seems to append the Tcp.Write under the hood 
> without an Ack. Therefore it would appear that in order to get 
> backpressure, one needs to trick the middleware into accepting a Tcp.Write 
> (and who knows where the Ack comes back to).
>
> A couple of questions here:
>
> 1. is there any more information on the spray-can backpressure? 
> (presumably this would just be for the HTTP stuff anyway)
> 2. has anybody managed to get backpressure working in wandoulabs 
> websockets?
>
>
>
> [1] 
> http://doc.akka.io/docs/akka/snapshot/scala/io-tcp.html#throttling-reads-and-writes
> [2] http://spray.io/documentation/1.2.2/spray-can/configuration/
> [3] https://github.com/wandoulabs/spray-websocket
> [4] 
> https://github.com/wandoulabs/spray-websocket/blob/master/spray-websocket/src/main/scala/spray/can/websocket/FrameRendering.scala#L38
>
> Best regards,
> Sam
>



[akka-user] backpressure on spray-can (and wandoulab websockets)

2015-02-20 Thread Sam Halliday
Hi all,

I am trying to enable ACK-based backpressure on a websocket server.

The Akka IO backpressure is really fantastic and easy to understand[1].

The Spray Can docs mention something about automatic backpressure[2], but I 
can't find any more information about what this means. I presume this is 
only relevant for writing out HTTP responses, but I'd like to know more 
about this nonetheless.

The wandoulabs websockets extension to spray-can pretty much drops down to 
raw TCP once the WebSocket Upgrade request is processed, but their 
ConnectionManager doesn't take ByteStrings, it takes "FrameCommand extends 
Tcp.Command" messages. And it's therefore not possible to wrap these with a 
Tcp.Write. If one follows the usages of FrameCommand, one ends up with 
FrameRendering[4] which seems to append the Tcp.Write under the hood 
without an Ack. Therefore it would appear that in order to get 
backpressure, one needs to trick the middleware into accepting a Tcp.Write 
(and who knows where the Ack comes back to).

A couple of questions here:

1. is there any more information on the spray-can backpressure? (presumably 
this would just be for the HTTP stuff anyway)
2. has anybody managed to get backpressure working in wandoulabs websockets?



[1] 
http://doc.akka.io/docs/akka/snapshot/scala/io-tcp.html#throttling-reads-and-writes
[2] http://spray.io/documentation/1.2.2/spray-can/configuration/
[3] https://github.com/wandoulabs/spray-websocket
[4] 
https://github.com/wandoulabs/spray-websocket/blob/master/spray-websocket/src/main/scala/spray/can/websocket/FrameRendering.scala#L38

Best regards,
Sam



Re: [akka-user] way to get ActorContext from ActorSystem?

2015-02-19 Thread Sam Halliday
On Thursday, 19 February 2015 16:54:47 UTC, Heiko Seeberger wrote:
>
> What about using an ActorRefFactory?
>

Yeah, I thought about that but I also need an ActorSystem (it's using 
akka.io) and typically when you need both an ActorSystem and an 
ActorRefFactory the implicit resolution gets a bit nasty on the caller side.
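
For reference, the ActorRefFactory shape being suggested would look roughly 
like this (WorkerA and WorkerB are placeholders for my real actors); it 
covers the spawning side, but not the ActorSystem that akka.io needs:

import akka.actor.{ Actor, ActorRef, ActorRefFactory, Props }

object StartAllTheThings {
  class WorkerA extends Actor { def receive = Actor.emptyBehavior } // placeholders
  class WorkerB extends Actor { def receive = Actor.emptyBehavior }

  // ActorSystem and ActorContext both extend ActorRefFactory, so the same
  // method works under a supervisor (context) and, for tests and examples,
  // with the system itself, where the actors end up under the user guardian.
  def apply(factory: ActorRefFactory): Seq[ActorRef] = Seq(
    factory.actorOf(Props[WorkerA], "worker-a"),
    factory.actorOf(Props[WorkerB], "worker-b")
  )
}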

 

> --
>
> *Heiko Seeberger*
> Home: heikoseeberger.de
> Twitter: @hseeberger <https://twitter.com/hseeberger>
> Public key: keybase.io/hseeberger
>  
> On 19 Feb 2015, at 17:49, Sam Halliday > 
> wrote:
>
> Hi all,
>
> I wrote a little convenience method that starts up a bunch of actors, and 
> I intentionally take in an ActorContext because this is typically used with 
> a supervisor strategy.
>
> However, when I'm testing it (and also writing examples for people to 
> easily understand the API), it would be incredibly convenient if the 
> ActorContext that spawns the actors was the Guardian.
>
> Is there any way to do this, or must I write a little dumb supervisor 
> (such as below) to do this? I tried searching in TestKit but didn't come up 
> with anything.
>
>class DumbSupervisorActor extends Actor {
> override def preStart(): Unit = {
>   startAllTheThings()
> }
> def receive = Actor.emptyBehavior
>   }
>
> Best regards,
> Sam
>
>
>
>



[akka-user] way to get ActorContext from ActorSystem?

2015-02-19 Thread Sam Halliday
Hi all,

I wrote a little convenience method that starts up a bunch of actors, and I 
intentionally take in an ActorContext because this is typically used with a 
supervisor strategy.

However, when I'm testing it (and also writing examples for people to 
easily understand the API), it would be incredibly convenient if the 
ActorContext that spawns the actors was the Guardian.

Is there any way to do this, or must I write a little dumb supervisor (such 
as below) to do this? I tried searching in TestKit but didn't come up with 
anything.

   class DumbSupervisorActor extends Actor {
     override def preStart(): Unit = {
       startAllTheThings()
     }
     def receive = Actor.emptyBehavior
   }

Best regards,
Sam



[akka-user] confused about state of HTTPS

2015-02-19 Thread Sam Halliday
Hi all,

I read here
http://spray.io/documentation/1.1-SNAPSHOT/spray-can/http-server/#ssl-support
that there is an implicit ServerSSLEngineProvider on Http.Bind,

but the latest HTTP docs
http://doc.akka.io/docs/akka-stream-and-http-experimental/current/scala/http/https.html
say HTTPS "is not yet supported" (not just that the docs have to be written).

Which am I to believe?


Best regards,

Sam



Re: [akka-user] Re: sent/received messages excluding the deathwatch heartbeats?

2015-01-22 Thread Sam Halliday
Thanks Patrik,

That's always an option with logback, but I'd rather not have to go there.

Grep is not an option in this case, as the logs are coming through a
Jenkins console and are very noisy; I'd like to be able to just read
them off. The custom event handler sounds like it might be a good
option, but I'd also ask you to consider making the logging of
heartbeat messages optional, as the value of seeing them is really
rather low compared to being able to see our domain messages.

Best regards,
Sam

On 21 January 2015 at 21:37, Patrik Nordwall  wrote:
> You could also use slf4j logger and define the filter in the logback config
> (or whatever impl you prefer).
>
> An alternative; use grep/egrep to filter out the relevant information from
> the log file after the fact.
>
> /Patrik
>
> 21 jan 2015 kl. 17:29 skrev Martynas Mickevičius
> :
>
> Hi Sam,
>
> you can use custom event listener to display the log messages that you are
> interested in.
>
> Here you can find a simple example demonstrating that. And there is a
> documentation section explaining this.
>
> On Mon, Jan 19, 2015 at 11:02 AM, Sam Halliday 
> wrote:
>>
>> Is this something that would be suitable for an RFE?
>>
>> More generally it would be good to have an option to only log userland
>> messages in remoting.
>>
>>
>> On Friday, 16 January 2015 17:41:55 UTC, Sam Halliday wrote:
>>>
>>> Hi all,
>>>
>>> I'm trying to debug a remoting issue and I need to look at the logs of
>>> the messages that are sent.
>>>
>>> However, the logs are completely filled up with the send/receive of
>>> deathwatch messages.
>>>
>>> Without turning off deathwatch, is there any way to exclude it from the
>>> logs?
>>>
>>> Latest akka stable, logging config looks like
>>>
>>> akka {
>>>   remote {
>>> log-sent-messages = on
>>> log-received-messages = on
>>> log-frame-size-exceeding = 1000b
>>>
>>> log-remote-lifecycle-events = off
>>>
>>> watch-failure-detector {
>>>   # Our timeouts can be so high, the PHI model basically doesn't
>>>   # know what to do, so just use a single timeout.
>>>   # http://doc.akka.io/docs/akka/snapshot/scala/remoting.html
>>>   acceptable-heartbeat-pause = 60 s
>>> }
>>>   }
>>>
>>>   actor {
>>> debug {
>>>   receive = on
>>>   #autoreceive = on
>>>   #lifecycle = on
>>>   #fsm = on
>>>   #event-stream = on
>>>   unhandled = on
>>> }
>>> }
>>>
>
>
>
>
> --
> Martynas Mickevičius
> Typesafe – Reactive Apps on the JVM
>

Re: [akka-user] Feedback on Akka Streams 1.0-M2 Documentation

2015-01-22 Thread Sam Halliday
On Wednesday, 21 January 2015 08:21:15 UTC, drewhk wrote:
>> I believe it may be possible to use the current 1.0-M2 to address
>> my bugbear by using the Actor integration to write an actor that
>> has N instances behind a router, but it feels hacky to have to
>> use an Actor at all. What is really missing is a Junction that
>> multiplies the next Flow into N parallel parts that run on
>> separate threads.
>
> 
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-cookbook.html#Balancing_jobs_to_a_fixed_pool_of_workers

I actually missed this when reading the docs... it's a gem buried
in a sea of examples! :-) Is there anything like this in the "top
down" style documentation?

A convenience method to set up this sort of thing is exactly what
I mean. I should imagine that fanning out a Flow for
embarrassingly parallel processing is a common enough pattern that
one would want to be able to do this in a one-liner. You note
something about work in this area later on (quoted out of order):

> In the future we will allow users to add explicit markers where
> the materializer needs to cut up chains/graphs into concurrent
> entities.

This sounds very good. Is there a ticket I can subscribe to for
updates? Is there a working name for the materializer so that I
know what to look out for?


> Also, you can try mapAsync or mapAsyncUnordered for similar
> tasks.

It would be good to read some discussion about these that goes
further than the API docs. Do they just create a Future and
therefore have no way to tell a fast producer to slow down? How
does one achieve pushback from these? Pushback on evaluation of
the result is essential, not on the ability to create/schedule
the futures. I would very much like to see some documentation that
explains where this has an advantage over a plain ol' Scala Stream
with a map{Future{...}}.
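
My current mental model, sketched against the signature in newer snapshots
where mapAsync takes an explicit parallelism bound (so it may not match
1.0-M2 exactly):

import scala.concurrent.Future
import akka.NotUsed
import akka.stream.scaladsl.Source

object MapAsyncSketch {
  // stand-in for a slow external call
  def lookup(id: Int): Future[String] = Future.successful(s"record-$id")

  // demand stops propagating upstream once 4 Futures are in flight, so a
  // fast producer is slowed by unfinished results, not by Future creation
  val throttledByResults: Source[String, NotUsed] =
    Source(1 to 1000).mapAsync(parallelism = 4)(lookup)
}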


>> In general, I felt that the documentation was missing any
>> commentary on concurrency and parallelisation. I was left
>> wondering what thread everything was happening on.
>
> ... as the default all stream processing element is backed by
>  an actor ...

The very fact that each component is backed by an Actor is news
to me. This wasn't at all obvious from the docs and actually
the "integration with actors" section made me think that streams
must be implemented completely differently if it needs an
integration layer! Actually, the "actor integration" really
means "low level streaming actors", doesn't it? I would strongly
recommend making this a lot clearer as it helps to de-mystify the
implementation.

Now knowing that each vertex in the graph is backed by an actor,
I wonder if in "balancing work to fixed pools" the Balance is
just adding a router actor with a balance strategy? The
convenience method I suggested above could simply create a router
to multiple instances of the same actor with a parallelism bound.
I'm not entirely sure why one would need a Merge strategy for
that, although the option to preserve input order at the output
would be good for some circumstances (which could apply pushback
in preference to growing the queue of out-of-order results).
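
In other words, the convenience I'm after is roughly this balance/merge
fan-out, written against the newer GraphDSL names rather than 1.0-M2, so
treat the exact API as an assumption:

import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{ Balance, Flow, GraphDSL, Merge }

object ParallelizeSketch {
  // a Flow whose work is spread over `parallelism` copies of `worker`, each
  // copy running on its own actor thanks to .async
  def parallelize[In, Out](parallelism: Int)(worker: Flow[In, Out, NotUsed]): Flow[In, Out, NotUsed] =
    Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._
      val balance = b.add(Balance[In](parallelism))
      val merge   = b.add(Merge[Out](parallelism))
      for (i <- 0 until parallelism)
        balance.out(i) ~> worker.async ~> merge.in(i)
      FlowShape(balance.in, merge.out)
    })
}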

In addition, this revelation helps me to understand the
performance considerations of using akka-streams. Knowing this, I
would only reach for it where I would already have considered
hand-crafted flow control in akka (from a performance point of
view) before streams was released. The main advantage of
akka-streams is therefore that it has dramatically lowered the
barrier to entry for writing Actor code with flow control.



Thanks for this explanation, Endre. I hope to see even more
innovation, and improved documentation, in the coming releases.

Best regards,
Sam 



Re: [akka-user] Re: sent/received messages excluding the deathwatch heartbeats?

2015-01-22 Thread Sam Halliday
Thanks Martynas,

It's a shame that this would be needed. It feels very much like the 
heartbeats should be considered part of the system layer when it comes to 
logging, and therefore already filtered. Seeing heartbeats when debugging 
makes it incredibly difficult to see anything.

My concern with a custom logger like this is that it will quickly get out 
of sync with the stock one and I might be missing out on various 
performance/bug fixes as akka is released. That said, the ability to see 
genuine messages in the logs when debugging is invaluable, so it might be 
worth it.
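
If I do go down that road, I imagine it would be little more than a filter 
in front of the stock logger, something like the sketch below (com.example 
is a placeholder, extending DefaultLogger this way is my own assumption, and 
matching on the message text is obviously fragile):

import akka.event.Logging
import akka.event.Logging.Debug

// drops Debug events that mention the remoting heartbeats before they are
// printed; everything else is handed to the stock DefaultLogger behaviour
class NoHeartbeatLogger extends Logging.DefaultLogger {
  override def receive: Receive = {
    case d: Debug if d.message.toString.contains("Heartbeat") => // drop
    case other => super.receive(other)
  }
}

// registered via configuration:
//   akka.loggers = ["com.example.NoHeartbeatLogger"]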

On Wednesday, 21 January 2015 16:29:07 UTC, Martynas Mickevičius wrote:
>
> Hi Sam,
>
> you can use custom event listener to display the log messages that you are 
> interested in.
>
> Here 
> <https://github.com/2m/akka-remote-sandbox/blob/03b4eaaa1778fe3d44b7889b3d0c6a63c342b1a1/src/main/scala/LogMessages.scala>
>  you 
> can find a simple example demonstrating that. And there is a documentation 
> <http://doc.akka.io/docs/akka/2.3.9/scala/logging.html#Loggers> section 
> explaining this. 
>
> On Mon, Jan 19, 2015 at 11:02 AM, Sam Halliday  > wrote:
>
>> Is this something that would be suitable for an RFE?
>>
>> More generally it would be good to have an option to only log userland 
>> messages in remoting.
>>
>>
>> On Friday, 16 January 2015 17:41:55 UTC, Sam Halliday wrote:
>>>
>>> Hi all,
>>>
>>> I'm trying to debug a remoting issue and I need to look at the logs of 
>>> the messages that are sent.
>>>
>>> However, the logs are completely filled up with the send/receive of 
>>> deathwatch messages.
>>>
>>> Without turning off deathwatch, is there any way to exclude it from the 
>>> logs?
>>>
>>> Latest akka stable, logging config looks like
>>>
>>> akka {
>>>   remote {
>>> log-sent-messages = on
>>> log-received-messages = on
>>> log-frame-size-exceeding = 1000b
>>>
>>> log-remote-lifecycle-events = off
>>>
>>> watch-failure-detector {
>>>   # Our timeouts can be so high, the PHI model basically doesn't
>>>   # know what to do, so just use a single timeout. 
>>>   # http://doc.akka.io/docs/akka/snapshot/scala/remoting.html
>>>   acceptable-heartbeat-pause = 60 s
>>> }
>>>   }
>>>
>>>   actor {
>>> debug {
>>>   receive = on
>>>   #autoreceive = on
>>>   #lifecycle = on
>>>   #fsm = on
>>>   #event-stream = on
>>>   unhandled = on
>>> }
>>> }
>>>
>>>  -- 
>>
>
>
>
> -- 
> Martynas Mickevičius
> Typesafe <http://typesafe.com/> – Reactive 
> <http://www.reactivemanifesto.org/> Apps on the JVM
>  



Re: [akka-user] Re: Feedback on Akka Streams 1.0-M2 Documentation

2015-01-21 Thread Sam Halliday
It's the wording "with merge being the preferred strategy". From your email
it is clear that merge is *not* the strategy used in Akka streams, so
perhaps best to drop this sentence as it confuses more than clarifies.
Instead, it would be instructive to note that a Source is returned and
perhaps talk about the strategy that is used.
On 21 Jan 2015 09:09, "Endre Varga"  wrote:

>
>
> On Wed, Jan 21, 2015 at 10:06 AM, Sam Halliday 
> wrote:
>
>> There was some discussion about merging Seq in the docs.
>>
> If that is so, can you point to that part? We might need to rewrite it to
> clarify what it is about and what it is not. If you find it can you file a
> ticket?
>
> Thanks,
> -Endre
>
>
>
>> Anyway, it is a side point and I don't want to get derailed from my
>> original questions so I will read your other responses later today.
>> On 21 Jan 2015 08:50, "Endre Varga"  wrote:
>>
>>>
>>>
>>> On Wed, Jan 21, 2015 at 9:45 AM, Sam Halliday 
>>> wrote:
>>>
>>>> Err, Endre, yes. And when merging Seq[T] in a mapConcat we would all
>>>> benefit greatly from faster merging, Seq[Seq[T]] => Seq[T]
>>>>
>>> What do you mean? I don't understand. What mapConcat does is that it
>>> takes a Source[T] and a function T => Seq[U] and gives a Source[U]
>>> (similarly with Flow). Source is not a Seq, it is a completely different
>>> beast.
>>>
>>>
>>>
>>>>  On 21 Jan 2015 08:03, "Endre Varga"  wrote:
>>>>
>>>>> Hi Sam,
>>>>>
>>>>> On Tue, Jan 20, 2015 at 9:56 PM, Sam Halliday 
>>>>> wrote:
>>>>>
>>>>>> One more comment on the streams API. It is really cool that you've
>>>>>> thought about using mapConcat instead of flatMap to enable optimised 
>>>>>> merge
>>>>>> operations. I just wanted to draw your attention to a clojure project 
>>>>>> that
>>>>>> does super-fast merging of immutable Tree/Trie structures:
>>>>>> https://github.com/JulesGosnell/seqspert
>>>>>>
>>>>>
>>>>> I feel a misunderstanding here. Akka Streams is not about data
>>>>> structures, it is about streams. I mean real streams, like live TCP
>>>>> incoming bytes. Unfortunately Java also chose the name Stream for another
>>>>> concept, but that is more targeted for collections. In Akka Streams on the
>>>>> other hand streams backed by collections is the rare case, not the common
>>>>> one (although many examples use collections as sources since they are easy
>>>>> to show).
>>>>>
>>>>>  -Endre
>>>>>
>>>>>
>>>>>> The work is definitely portable to the Scala collection types, as
>>>>>> they are based on the Clojure implementations.
>>>>>>
>>>>>>
>>>>>> On Tuesday, 20 January 2015 19:04:45 UTC, Sam Halliday wrote:
>>>>>>>
>>>>>>> Dear Akka Team,
>>>>>>>
>>>>>>> In response to Bjorn's prompting[1], I have read all the Akka
>>>>>>> Streams 1.0-M2 documentation[2].
>>>>>>>
>>>>>>> I am very impressed, and excited! There has clearly been a lot of
>>>>>>> thought put into this framework and I am looking forward to using
>>>>>>> it in anger over the coming years.
>>>>>>>
>>>>>>> Bjorn asked if I felt any examples were missing, and sadly my
>>>>>>> original request (that I've been going on about for years,
>>>>>>> sorry!) is indeed missing. It is the case of a fast producer and
>>>>>>> a slow consumer that is ideal for parallelisation.
>>>>>>>
>>>>>>> I believe it may be possible to use the current 1.0-M2 to address
>>>>>>> my bugbear by using the Actor integration to write an actor that
>>>>>>> has N instances behind a router, but it feels hacky to have to
>>>>>>> use an Actor at all. What is really missing is a Junction that
>>>>>>> multiplies the next Flow into N parallel parts that run on
>>>>>>> separate threads.
>>>>>>>
>>>>>>>
>>>>>>> In general, I felt that 

Re: [akka-user] Re: Feedback on Akka Streams 1.0-M2 Documentation

2015-01-21 Thread Sam Halliday
There was some discussion about merging Seq in the docs. Anyway, it is a
side point and I don't want to get derailed from my original questions so I
will read your other responses later today.
On 21 Jan 2015 08:50, "Endre Varga"  wrote:

>
>
> On Wed, Jan 21, 2015 at 9:45 AM, Sam Halliday 
> wrote:
>
>> Err, Endre, yes. And when merging Seq[T] in a mapConcat we would all
>> benefit greatly from faster merging, Seq[Seq[T]] => Seq[T]
>>
> What do you mean? I don't understand. What mapConcat does is that it takes
> a Source[T] and a function T => Seq[U] and gives a Source[U] (similarly
> with Flow). Source is not a Seq, it is a completely different beast.
>
>
>
>>  On 21 Jan 2015 08:03, "Endre Varga"  wrote:
>>
>>> Hi Sam,
>>>
>>> On Tue, Jan 20, 2015 at 9:56 PM, Sam Halliday 
>>> wrote:
>>>
>>>> One more comment on the streams API. It is really cool that you've
>>>> thought about using mapConcat instead of flatMap to enable optimised merge
>>>> operations. I just wanted to draw your attention to a clojure project that
>>>> does super-fast merging of immutable Tree/Trie structures:
>>>> https://github.com/JulesGosnell/seqspert
>>>>
>>>
>>> I feel a misunderstanding here. Akka Streams is not about data
>>> structures, it is about streams. I mean real streams, like live TCP
>>> incoming bytes. Unfortunately Java also chose the name Stream for another
>>> concept, but that is more targeted for collections. In Akka Streams on the
>>> other hand streams backed by collections is the rare case, not the common
>>> one (although many examples use collections as sources since they are easy
>>> to show).
>>>
>>>  -Endre
>>>
>>>
>>>> The work is definitely portable to the Scala collection types, as they
>>>> are based on the Clojure implementations.
>>>>
>>>>
>>>> On Tuesday, 20 January 2015 19:04:45 UTC, Sam Halliday wrote:
>>>>>
>>>>> Dear Akka Team,
>>>>>
>>>>> In response to Bjorn's prompting[1], I have read all the Akka
>>>>> Streams 1.0-M2 documentation[2].
>>>>>
>>>>> I am very impressed, and excited! There has clearly been a lot of
>>>>> thought put into this framework and I am looking forward to using
>>>>> it in anger over the coming years.
>>>>>
>>>>> Bjorn asked if I felt any examples were missing, and sadly my
>>>>> original request (that I've been going on about for years,
>>>>> sorry!) is indeed missing. It is the case of a fast producer and
>>>>> a slow consumer that is ideal for parallelisation.
>>>>>
>>>>> I believe it may be possible to use the current 1.0-M2 to address
>>>>> my bugbear by using the Actor integration to write an actor that
>>>>> has N instances behind a router, but it feels hacky to have to
>>>>> use an Actor at all. What is really missing is a Junction that
>>>>> multiplies the next Flow into N parallel parts that run on
>>>>> separate threads.
>>>>>
>>>>>
>>>>> In general, I felt that the documentation was missing any
>>>>> commentary on concurrency and parallelisation. I was left
>>>>> wondering what thread everything was happening on. Some initial
>>>>> questions I have in this area:
>>>>>
>>>>> 1. Is everything actually executed in the same thread? What about
>>>>>when you have a Junction?
>>>>>
>>>>> 2. Is it possible to be doing work to populate the Source's cache
>>>>>while work is being executed in a Flow or Sink?
>>>>>
>>>>> It would be good to have a section in the documentation that
>>>>> discusses this in more detail.
>>>>>
>>>>>
>>>>> And, very importantly, it would be good to have the feature of
>>>>> being able to split a Flow into N parallel parts! I recently
>>>>>>> learnt how to do this in ScalazStreams but I'd much rather be
>>>>> able to do it in Akka Streams as I find everything else about the
>>>>> architecture to be so much easier to understand (plus integration
>>>>> with Akka Actors is just tremendous).
>>>>>
>>>>> PS: I'm also very excited by Slick 3.0 which appears t

Re: [akka-user] Re: Feedback on Akka Streams 1.0-M2 Documentation

2015-01-21 Thread Sam Halliday
Err, Endre, yes. And when merging Seq[T] in a mapConcat we would all
benefit greatly from faster merging, Seq[Seq[T]] => Seq[T]
On 21 Jan 2015 08:03, "Endre Varga"  wrote:

> Hi Sam,
>
> On Tue, Jan 20, 2015 at 9:56 PM, Sam Halliday 
> wrote:
>
>> One more comment on the streams API. It is really cool that you've
>> thought about using mapConcat instead of flatMap to enable optimised merge
>> operations. I just wanted to draw your attention to a clojure project that
>> does super-fast merging of immutable Tree/Trie structures:
>> https://github.com/JulesGosnell/seqspert
>>
>
> I feel a misunderstanding here. Akka Streams is not about data structures,
> it is about streams. I mean real streams, like live TCP incoming bytes.
> Unfortunately Java also chose the name Stream for another concept, but that
> is more targeted for collections. In Akka Streams on the other hand streams
> backed by collections is the rare case, not the common one (although many
> examples use collections as sources since they are easy to show).
>
>  -Endre
>
>
>> The work is definitely portable to the Scala collection types, as they
>> are based on the Clojure implementations.
>>
>>
>> On Tuesday, 20 January 2015 19:04:45 UTC, Sam Halliday wrote:
>>>
>>> Dear Akka Team,
>>>
>>> In response to Bjorn's prompting[1], I have read all the Akka
>>> Streams 1.0-M2 documentation[2].
>>>
>>> I am very impressed, and excited! There has clearly been a lot of
>>> thought put into this framework and I am looking forward to using
>>> it in anger over the coming years.
>>>
>>> Bjorn asked if I felt any examples were missing, and sadly my
>>> original request (that I've been going on about for years,
>>> sorry!) is indeed missing. It is the case of a fast producer and
>>> a slow consumer that is ideal for parallelisation.
>>>
>>> I believe it may be possible to use the current 1.0-M2 to address
>>> my bugbear by using the Actor integration to write an actor that
>>> has N instances behind a router, but it feels hacky to have to
>>> use an Actor at all. What is really missing is a Junction that
>>> multiplies the next Flow into N parallel parts that run on
>>> separate threads.
>>>
>>>
>>> In general, I felt that the documentation was missing any
>>> commentary on concurrency and parallelisation. I was left
>>> wondering what thread everything was happening on. Some initial
>>> questions I have in this area:
>>>
>>> 1. Is everything actually executed in the same thread? What about
>>>when you have a Junction?
>>>
>>> 2. Is it possible to be doing work to populate the Source's cache
>>>while work is being executed in a Flow or Sink?
>>>
>>> It would be good to have a section in the documentation that
>>> discusses this in more detail.
>>>
>>>
>>> And, very importantly, it would be good to have the feature of
>>> being able to split a Flow into N parallel parts! I recently
>>> learnt how to do this in ScalazStreams but I'd much rather be
>>> able to do it in Akka Streams as I find everything else about the
>>> architecture to be so much easier to understand (plus integration
>>> with Akka Actors is just tremendous).
>>>
>>> PS: I'm also very excited by Slick 3.0 which appears to be aiming toward
>>> Reactive Streams and, I assume, integration with Akka Streams. e.g. produce
>>> a Source[Entity] from a SELECT with pushback on extremely large result sets.
>>>
>>>
>>> [1] https://groups.google.com/d/msg/akka-user/1TlAy-oqOk8/xvJpyVMWytsJ
>>> [2] http://doc.akka.io/docs/akka-stream-and-http-experimental/
>>> 1.0-M2/scala.html
>>> [3] https://github.com/fommil/rx-playground/blob/master/src/
>>> main/scala/com/github/fommil/rx/scratch.scala#L204
>>>
>>> Best regards,
>>> Sam (fommil)
>>>
>>>  --
>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>> >>>>>>>>>> Check the FAQ:
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email t

[akka-user] Re: Feedback on Akka Streams 1.0-M2 Documentation

2015-01-20 Thread Sam Halliday
One more comment on the streams API. It is really cool that you've thought 
about using mapConcat instead of flatMap to enable optimised merge 
operations. I just wanted to draw your attention to a Clojure project that 
does super-fast merging of immutable Tree/Trie structures:
 https://github.com/JulesGosnell/seqspert

The work is definitely portable to the Scala collection types, as they are 
based on the Clojure implementations.


On Tuesday, 20 January 2015 19:04:45 UTC, Sam Halliday wrote:
>
> Dear Akka Team,
>
> In response to Bjorn's prompting[1], I have read all the Akka
> Streams 1.0-M2 documentation[2].
>
> I am very impressed, and excited! There has clearly been a lot of
> thought put into this framework and I am looking forward to using
> it in anger over the coming years.
>
> Bjorn asked if I felt any examples were missing, and sadly my
> original request (that I've been going on about for years,
> sorry!) is indeed missing. It is the case of a fast producer and
> a slow consumer that is ideal for parallelisation.
>
> I believe it may be possible to use the current 1.0-M2 to address
> my bugbear by using the Actor integration to write an actor that
> has N instances behind a router, but it feels hacky to have to
> use an Actor at all. What is really missing is a Junction that
> multiplies the next Flow into N parallel parts that run on
> separate threads.
>
>
> In general, I felt that the documentation was missing any
> commentary on concurrency and parallelisation. I was left
> wondering what thread everything was happening on. Some initial
> questions I have in this area:
>
> 1. Is everything actually executed in the same thread? What about
>when you have a Junction?
>
> 2. Is it possible to be doing work to populate the Source's cache
>while work is being executed in a Flow or Sink?
>
> It would be good to have a section in the documentation that
> discusses this in more detail.
>
>
> And, very importantly, it would be good to have the feature of
> being able to split a Flow into N parallel parts! I recently
> learnt how to do this in ScalazStreams but I'd much rather be
> able to do it in Akka Streams as I find everything else about the
> architecture to be so much easier to understand (plus integration
> with Akka Actors is just tremendous).
>
> PS: I'm also very excited by Slick 3.0 which appears to be aiming toward 
> Reactive Streams and, I assume, integration with Akka Streams. e.g. produce 
> a Source[Entity] from a SELECT with pushback on extremely large result sets.
>
>
> [1] https://groups.google.com/d/msg/akka-user/1TlAy-oqOk8/xvJpyVMWytsJ
> [2] 
> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala.html
> [3] 
> https://github.com/fommil/rx-playground/blob/master/src/main/scala/com/github/fommil/rx/scratch.scala#L204
>
> Best regards,
> Sam (fommil)
>
>



[akka-user] Feedback on Akka Streams 1.0-M2 Documentation

2015-01-20 Thread Sam Halliday
Dear Akka Team,

In response to Bjorn's prompting[1], I have read all the Akka
Streams 1.0-M2 documentation[2].

I am very impressed, and excited! There has clearly been a lot of
thought put into this framework and I am looking forward to using
it in anger over the coming years.

Bjorn asked if I felt any examples were missing, and sadly my
original request (that I've been going on about for years,
sorry!) is indeed missing. It is the case of a fast producer and
a slow consumer that is ideal for parallelisation.

I believe it may be possible to use the current 1.0-M2 to address
my bugbear by using the Actor integration to write an actor that
has N instances behind a router, but it feels hacky to have to
use an Actor at all. What is really missing is a Junction that
multiplies the next Flow into N parallel parts that run on
separate threads.


In general, I felt that the documentation was missing any
commentary on concurrency and parallelisation. I was left
wondering what thread everything was happening on. Some initial
questions I have in this area:

1. Is everything actually executed in the same thread? What about
   when you have a Junction?

2. Is it possible to be doing work to populate the Source's cache
   while work is being executed in a Flow or Sink?

It would be good to have a section in the documentation that
discusses this in more detail.


And, very importantly, it would be good to have the feature of
being able to split a Flow into N parallel parts! I recently
learnt how to do this in ScalazStreams but I'd much rather be 
able to do it in Akka Streams as I find everything else about the
architecture to be so much easier to understand (plus integration
with Akka Actors is just tremendous).
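
For readers arriving from search: this junction later became expressible with the graph DSL's Balance and Merge stages. A minimal sketch against a post-1.0 Akka Streams API (the names and the .async boundary are from later releases, so treat it as illustrative rather than the 1.0-M2 answer):

  import akka.NotUsed
  import akka.stream.FlowShape
  import akka.stream.scaladsl.{Balance, Flow, GraphDSL, Merge}

  // Fan a flow out into n parallel copies and merge the results back together.
  // Each copy is marked .async so it runs in its own actor, i.e. on its own thread.
  def parallelise[In, Out](worker: Flow[In, Out, Any], n: Int): Flow[In, Out, NotUsed] =
    Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._
      val balance = b.add(Balance[In](n))
      val merge   = b.add(Merge[Out](n))
      for (_ <- 0 until n) balance ~> worker.async ~> merge
      FlowShape(balance.in, merge.out)
    })

Where the per-element work can be expressed as a Future, mapAsync(n) on a plain Flow covers the same fast-producer/slow-consumer case more simply.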

PS: I'm also very excited by Slick 3.0 which appears to be aiming toward 
Reactive Streams and, I assume, integration with Akka Streams. e.g. produce 
a Source[Entity] from a SELECT with pushback on extremely large result sets.


[1] https://groups.google.com/d/msg/akka-user/1TlAy-oqOk8/xvJpyVMWytsJ
[2] 
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala.html
[3] 
https://github.com/fommil/rx-playground/blob/master/src/main/scala/com/github/fommil/rx/scratch.scala#L204

Best regards,
Sam (fommil)



[akka-user] Re: sent/received messages excluding the deathwatch heartbeats?

2015-01-19 Thread Sam Halliday
Is this something that would be suitable for an RFE?

More generally it would be good to have an option to only log userland 
messages in remoting.

On Friday, 16 January 2015 17:41:55 UTC, Sam Halliday wrote:
>
> Hi all,
>
> I'm trying to debug a remoting issue and I need to look at the logs of the 
> messages that are sent.
>
> However, the logs are completely filled up with the send/receive of 
> deathwatch messages.
>
> Without turning off deathwatch, is there any way to exclude it from the 
> logs?
>
> Latest akka stable, logging config looks like
>
> akka {
>   remote {
>     log-sent-messages = on
>     log-received-messages = on
>     log-frame-size-exceeding = 1000b
>
>     log-remote-lifecycle-events = off
>
>     watch-failure-detector {
>       # Our timeouts can be so high, the PHI model basically doesn't
>       # know what to do, so just use a single timeout.
>       # http://doc.akka.io/docs/akka/snapshot/scala/remoting.html
>       acceptable-heartbeat-pause = 60 s
>     }
>   }
>
>   actor {
>     debug {
>       receive = on
>       #autoreceive = on
>       #lifecycle = on
>       #fsm = on
>       #event-stream = on
>       unhandled = on
>     }
>   }
> }
>
>



[akka-user] sent/received messages excluding the deathwatch heartbeats?

2015-01-16 Thread Sam Halliday
Hi all,

I'm trying to debug a remoting issue and I need to look at the logs of the 
messages that are sent.

However, the logs are completely filled up with the send/receive of 
deathwatch messages.

Without turning off deathwatch, is there any way to exclude it from the 
logs?

Latest akka stable, logging config looks like

akka {
  remote {
    log-sent-messages = on
    log-received-messages = on
    log-frame-size-exceeding = 1000b

    log-remote-lifecycle-events = off

    watch-failure-detector {
      # Our timeouts can be so high, the PHI model basically doesn't
      # know what to do, so just use a single timeout.
      # http://doc.akka.io/docs/akka/snapshot/scala/remoting.html
      acceptable-heartbeat-pause = 60 s
    }
  }

  actor {
    debug {
      receive = on
      #autoreceive = on
      #lifecycle = on
      #fsm = on
      #event-stream = on
      unhandled = on
    }
  }
}



Re: [akka-user] akka streams infinite data source example

2014-12-10 Thread Sam Halliday
Hi Patrick,

I'm guessing things have moved on since April when I first started this 
thread.

Does the latest release of akka streams handle my example?

Best regards,
Sam

On Wednesday, 23 April 2014 10:01:30 UTC+1, Patrik Nordwall wrote:
>
>
>
>
> On Tue, Apr 22, 2014 at 9:32 PM, Sam Halliday  > wrote:
>
>> Thanks Patrik,
>>
>> Well it sounds like everything is heading in the right direction, if
>> it is not already there yet. I'm very excited to see what will be in
>> the next milestone :-D
>>
>
> Thanks for trying it out and for your feedback.
> /Patrik
>  
>
>>
>> On 22 April 2014 12:08, Patrik Nordwall > > wrote:
>> >
>> >
>> >
>> > On Mon, Apr 21, 2014 at 3:59 PM, Sam Halliday > >
>> > wrote:
>> >>>
>> >>> On Monday, 21 April 2014 12:32:57 UTC+1, Patrik Nordwall wrote:
>> >>>>
>> >>>> I intend to read the documentation fully, but I was a little
>> >>>> disappointed that the activator examples did not have a simple 
>> example with
>> >>>> an (effectively) infinite data source that can only be polled in 
>> serial,
>> >>>> with parallel (but controllably finite) consumers.
>> >>>>
>> >>>> Isn't that demonstrated with the random number generator source, and 
>> its
>> >>>> slow consumers?
>> >>>
>> >>> I missed that one. How many consumers are there at any given moment?
>> >>>
>> >>> It has one consumer but two filter steps that can execute pipelined. 
>> You
>> >>> can attach several consumers with toProducer, and then start several 
>> flows
>> >>> from that. Backpressure works with multiple consumers also.
>> >>
>> >>
>> >>
>> >> OK great. I did actually see this example and that's not what I mean. 
>> I'd
>> >> really like to be able to specify (e.g. in runtime config files) how 
>> many
>> >> maximum threads can be running in the "filter(rnd => isPrime(rnd))" 
>> block.
>> >>
>> >> Say we want to do the filtering in parallel, using 2 cores. Imagine the
>> >> first random number that we get is really big and takes a few seconds 
>> to
>> >> check if it is prime. The second number is "3" and we instantly accept
>> >> it; it would be preferable if this result were held back until the 
>> first
>> >> answer became available, but the free core still goes on to check the 
>> third
>> >> number.
>> >>
>> >> Alternatively, I can imagine situations where order does not matter at
>> >> all. This is all considered in the Observable pattern, so I should 
>> imagine
>> >> you have also included it :-)
>> >>
>> >> Does that make sense? Would it be tricky to update the primes example 
>> in
>> >> this way?
>> >
>> >
>> > This should be possible when we have all operators in place. Please 
>> create a
>> > ticket, so that it's not forgotten.
>> >
>> >>
>> >>
>> >> Adding a second flow is a very different thing. I think I'd need to 
>> read
>> >> the docs (and source code) in a lot more detail before understanding 
>> the
>> >> consequences for a particular Producer (e.g. does it replay from the 
>> start,
>> >> is it sending the same results to all flows, are all flows getting the 
>> same
>> >> order of events, etc). This is of less interest to me at the moment, 
>> but I
>> >> can see it being very important.
>> >>
>> >>
>> >>
>> >>>
>> >>> But Iterator[T] is a little too ill-defined near the end of the stream
>> >>> (that's why I created my own Producer in the RxJava playground). For
>> >>> example, does it block on hasNext or on next if it knows there are 
>> more
>> >>> elements that are not yet available, or does it close up and go home?
>> >>> Traditional Java APIs (such as Queues) would actually return early if 
>> a
>> >>> queue was exhausted, instead of waiting for a poison pill. In any 
>> case, if
>> >>> Flow can handle an Iterator that blocks (e.g. for I/O), it's probably 
>> good
>> >>> enough for most situations.
>> >>>
>> >&

[akka-user] akka and websockets

2014-12-10 Thread Sam Halliday
Hi all,

At the scalax talk on Monday, websocket support was noted to be the most 
requested feature. I actually couldn't find a ticket for it... where should 
I be `+1`ing?

e.g. for akka-io and akka-streams... but just for actor messages would also 
be great.

Best regards, Sam



[akka-user] equivalent of scalaz-stream nondeterminism.njoin

2014-10-07 Thread Sam Halliday
Hi all,

I've asked this question in various forms over the years, and I'm pleased 
to find out that scalaz-stream actually answers it:

https://github.com/fommil/rx-playground/blob/master/src/main/scala/com/github/fommil/rx/scratch.scala#L202

Does akka-stream have an equivalent to the nondeterminism.njoin? (i.e. 
ability to apply backpressure to a producer whilst doing as much 
consumption in parallel as possible).
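
For later readers: the closest built-in in post-1.0 Akka Streams is arguably mapAsyncUnordered, which bounds the number of in-flight elements (so the producer is backpressured) while emitting results as soon as they finish. A sketch, with a stand-in slowWork function:

  import akka.actor.ActorSystem
  import akka.stream.ActorMaterializer
  import akka.stream.scaladsl.{Sink, Source}
  import scala.concurrent.Future

  object NJoinSketch extends App {
    implicit val system = ActorSystem("njoin-sketch")
    implicit val materializer = ActorMaterializer()
    import system.dispatcher

    def slowWork(n: Int): Int = { Thread.sleep(50); n * n } // stand-in consumer

    // At most 4 elements are in flight at once; results are emitted in
    // completion order, and demand propagates upstream so the producer
    // is throttled rather than overrun.
    Source(1 to 100)
      .mapAsyncUnordered(parallelism = 4)(n => Future(slowWork(n)))
      .runWith(Sink.foreach(println))
  }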

Best regards,
Sam



[akka-user] remote address of an actor

2014-04-29 Thread Sam Halliday
Hi all,

I am migrating code (that uses akka 2.0.5), which requires an ActorPath for 
any given actor that will work remotely. This is toString'ed and passed to 
a new java process (on an arbitrary remote machine) that can call home.

This code used to work:

  def remotePath(ref: ActorRef)(implicit system: ActorSystem): ActorPath =
    ActorPath.fromString(ref.path.toStringWithAddress(externalAddress(system)))

  private def externalAddress(system: ActorSystem): Address =
    system.asInstanceOf[ExtendedActorSystem]
      .provider.asInstanceOf[RemoteActorRefProvider]
      .transport.address

but with more recent versions of Akka, it doesn't work anymore because of 
visibility issues. It was always a hack anyway.

I am setting the akka.remote.netty.port and akka.remote.netty.hostname 
during configuration, so I should know what all the components of the 
ActorPath are for my own actor systems, but that seems messy because it 
means I have to construct Address explicitly and hard-wire the "protocol" 
parameter.

What is the preferred way to do this in the latest stable release of akka?
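
For later readers: one commonly suggested replacement is a small extension that exposes the provider's default address. A minimal sketch, assuming ActorRefProvider.getDefaultAddress is available in the Akka version in use; the extension names are only illustrative.

  import akka.actor._

  // Exposes the provider's default (remote) address without reaching into
  // RemoteActorRefProvider internals.
  class RemoteAddressExtensionImpl(system: ExtendedActorSystem) extends Extension {
    def address: Address = system.provider.getDefaultAddress
  }

  object RemoteAddressExtension extends ExtensionId[RemoteAddressExtensionImpl]
      with ExtensionIdProvider {
    override def lookup = RemoteAddressExtension
    override def createExtension(system: ExtendedActorSystem) =
      new RemoteAddressExtensionImpl(system)
  }

  // usage: a path a remote JVM can use to call home
  def remotePath(ref: ActorRef)(implicit system: ActorSystem): ActorPath =
    ActorPath.fromString(ref.path.toStringWithAddress(RemoteAddressExtension(system).address))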

Best regards,
Sam



Re: [akka-user] akka streams infinite data source example

2014-04-22 Thread Sam Halliday
Thanks Patrik,

Well it sounds like everything is heading in the right direction, if
it is not already there yet. I'm very excited to see what will be in
the next milestone :-D

On 22 April 2014 12:08, Patrik Nordwall  wrote:
>
>
>
> On Mon, Apr 21, 2014 at 3:59 PM, Sam Halliday 
> wrote:
>>>
>>> On Monday, 21 April 2014 12:32:57 UTC+1, Patrik Nordwall wrote:
>>>>
>>>> I intend to read the documentation fully, but I was a little
>>>> disappointed that the activator examples did not have a simple example with
>>>> an (effectively) infinite data source that can only be polled in serial,
>>>> with parallel (but controllably finite) consumers.
>>>>
>>>> Isn't that demonstrated with the random number generator source, and its
>>>> slow consumers?
>>>
>>> I missed that one. How many consumers are there at any given moment?
>>>
>>> It has one consumer but two filter steps that can execute pipelined. You
>>> can attach several consumers with toProducer, and then start several flows
>>> from that. Backpressure works with multiple consumers also.
>>
>>
>>
>> OK great. I did actually see this example and that's not what I mean. I'd
>> really like to be able to specify (e.g. in runtime config files) how many
>> maximum threads can be running in the "filter(rnd => isPrime(rnd))" block.
>>
>> Say we want to do the filtering in parallel, using 2 cores. Imagine the
>> first random number that we get is really big and takes a few seconds to
>> check if it is prime. The second number is "3" and we instantly accept
>> it; it would be preferable if this result were held back until the first
>> answer became available, but the free core still goes on to check the third
>> number.
>>
>> Alternatively, I can imagine situations where order does not matter at
>> all. This is all considered in the Observable pattern, so I should imagine
>> you have also included it :-)
>>
>> Does that make sense? Would it be tricky to update the primes example in
>> this way?
>
>
> This should be possible when we have all operators in place. Please create a
> ticket, so that it's not forgotten.
>
>>
>>
>> Adding a second flow is a very different thing. I think I'd need to read
>> the docs (and source code) in a lot more detail before understanding the
>> consequences for a particular Producer (e.g. does it replay from the start,
>> is it sending the same results to all flows, are all flows getting the same
>> order of events, etc). This is of less interest to me at the moment, but I
>> can see it being very important.
>>
>>
>>
>>>
>>> But Iterator[T] is a little too ill-defined near the end of the stream
>>> (that's why I created my own Producer in the RxJava playground). For
>>> example, does it block on hasNext or on next if it knows there are more
>>> elements that are not yet available, or does it close up and go home?
>>> Traditional Java APIs (such as Queues) would actually return early if a
>>> queue was exhausted, instead of waiting for a poison pill. In any case, if
>>> Flow can handle an Iterator that blocks (e.g. for I/O), it's probably good
>>> enough for most situations.
>>>
>>> Ah, I see what you mean. Blocking hasNext/next doesn't sound attractive
>>> to me. That should probably be another Producer, that can do the polling.
>>>
>>
>>
>> Now I'm confused whether Producer is a pull or push based source... in the
>> examples, I was getting the impression that it was very much a pull based
>> API (and would therefore have to block on some level, if data is not
>> available yet). Is it also a pusher?
>
>
> Yes. It gets a request from downstream consumer of X more elements. That
> doesn't mean that it replies immediately with X elements, but it
> asynchronously pushes up to X elements whenever it has them. Exactly how it
> "waits" for the elements itself is up to the implementation of the Producer.
>
>>
>>
>> The brain scanner project is an example of a pusher source... throttling
>> it doesn't make any sense unless it is acceptable to throw results away
>> (i.e. not collect them in time). So, yes, you are absolutely correct that a
>> blocking Iterator is not good here.
>
>
> Sure, it could be implemented as an actor that periodically poll the device,
> but it must still not send more elements downstream than what was requested.
> If it can'

Re: [akka-user] akka streams infinite data source example

2014-04-21 Thread Sam Halliday

>
> On Monday, 21 April 2014 12:32:57 UTC+1, Patrik Nordwall wrote:
>>
>> I intend to read the documentation fully, but I was a little disappointed 
>> that the activator examples did not have a simple example with an 
>> (effectively) infinite data source that can only be polled in serial, with 
>> parallel (but controllably finite) consumers.
>>
>> Isn't that demonstrated with the random number generator source, and its 
>> slow consumers?
>>
> I missed that one. How many consumers are there at any given moment?
>
> It has one consumer but two filter steps that can execute pipelined. You 
> can attach several consumers with toProducer, and then start several flows 
> from that. Backpressure works with multiple consumers also.
>


OK great. I did actually see this example and that's not what I mean. I'd 
really like to be able to specify (e.g. in runtime config files) how many 
maximum threads can be running in the "filter(rnd => isPrime(rnd))" block.

Say we want to do the filtering in parallel, using 2 cores. Imagine the 
first random number that we get is really big and takes a few seconds to 
check if it is prime. The second number is "3" and we instantly accept 
it; it would be preferable if this result were held back until the first 
answer became available, but the free core still goes on to check the third 
number.

Alternatively, I can imagine situations where order does not matter at all. 
This is all considered in the Observable pattern, so I should imagine you 
have also included it :-)

Does that make sense? Would it be tricky to update the primes example in 
this way?
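
(For later readers: the ordered behaviour described above — hold "3" back until the big number ahead of it finishes — maps onto mapAsync in post-1.0 Akka Streams, with mapAsyncUnordered as the order-does-not-matter variant. A sketch of the primes example along those lines; isPrime and the random source are only illustrative.)

  import akka.actor.ActorSystem
  import akka.stream.ActorMaterializer
  import akka.stream.scaladsl.{Sink, Source}
  import scala.concurrent.Future

  object ParallelPrimes extends App {
    implicit val system = ActorSystem("parallel-primes")
    implicit val materializer = ActorMaterializer()
    import system.dispatcher

    def isPrime(n: BigInt): Boolean = n.isProbablePrime(100)

    // Up to 2 primality checks run concurrently, but mapAsync emits results in
    // the original order, so the quick "3" waits for the slow number ahead of it.
    Source.fromIterator(() => Iterator.continually(BigInt(256, scala.util.Random)))
      .mapAsync(parallelism = 2)(n => Future(n -> isPrime(n)))
      .collect { case (n, true) => n }
      .take(10)
      .runWith(Sink.foreach(println))
  }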

Adding a second flow is a very different thing. I think I'd need to read 
the docs (and source code) in a lot more detail before understanding the 
consequences for a particular Producer (e.g. does it replay from the start, 
is it sending the same results to all flows, are all flows getting the same 
order of events, etc). This is of less interest to me at the moment, but I 
can see it being very important.


 

> But Iterator[T] is a little too ill-defined near the end of the stream 
> (that's why I created my own Producer in the RxJava playground). For 
> example, does it block on hasNext or on next if it knows there are more 
> elements that are not yet available, or does it close up and go home? 
> Traditional Java APIs (such as Queues) would actually return early if a 
> queue was exhausted, instead of waiting for a poison pill. In any case, if 
> Flow can handle an Iterator that blocks (e.g. for I/O), it's probably good 
> enough for most situations.
>
> Ah, I see what you mean. Blocking hasNext/next doesn't sound attractive to 
> me. That should probably be another Producer, that can do the polling. 
>
>

Now I'm confused whether Producer is a pull or push based source... in the 
examples, I was getting the impression that it was very much a pull based 
API (and would therefore have to block on some level, if data is not 
available yet). Is it also a pusher?

The brain scanner project is an example of a pusher source... throttling it 
doesn't make any sense unless it is acceptable to throw results away (i.e. 
not collect them in time). So, yes, you are absolutely correct that a 
blocking Iterator is not good here.

However, for datasources (e.g. reading from a really big query result over 
a SQL connection), the "next" or "hasNext" in an equivalent Iterator may 
very well block and there is no way to get around this. Indeed, you will 
have the same problem with Source.fromFile(...).readLines, exacerbated if 
the file is on a really slow hard drive (or a network drive).

Best regards,
Sam



Re: [akka-user] akka streams infinite data source example

2014-04-21 Thread Sam Halliday

On Monday, 21 April 2014 12:32:57 UTC+1, Patrik Nordwall wrote:
>
> I intend to read the documentation fully, but I was a little disappointed 
> that the activator examples did not have a simple example with an 
> (effectively) infinite data source that can only be polled in serial, with 
> parallel (but controllably finite) consumers.
>
>
> Isn't that demonstrated with the random number generator source, and its 
> slow consumers?
>

I missed that one. How many consumers are there at any given moment?

Is it in here 
somewhere? 
https://github.com/typesafehub/activator-akka-stream-scala/tree/master/src/main/scala/sample/stream

My example is trying to simulate real world examples of:

* parsing loads of data coming from a single data source (e.g. indexing a 
multi-TB database with Lucene, running in under 1GB)
* parallel finite element calculations, where there are a lot more elements 
than bytes of RAM so they have to be batched (and with minimal object churn)

 

> BasicTransformation defines the input text in code (to make it simple), 
> but the iterator next() is not called more than what can be consumed 
> downstream.
>
> Isn't the log file sample more similar to your text file input? It does 
> not read the whole file (if it was large) into memory.
>

Right, so you pass an Iterator[String] to the flow. Yes, that looks good, 
sorry I missed it.

But Iterator[T] is a little too ill-defined near the end of the stream 
(that's why I created my own Producer in the RxJava playground). For 
example, does it block on hasNext or on next if it knows there are more 
elements that are not yet available, or does it close up and go home? 
Traditional Java APIs (such as Queues) would actually return early if a 
queue was exhausted, instead of waiting for a poison pill. In any case, if 
Flow can handle an Iterator that blocks (e.g. for I/O), it's probably good 
enough for most situations.

It would be even better if it knew how often to poll the data source... for 
example I have an EEG (brain scanner) library which has to poll the device 
at 57Hz. If it does it too quickly, there are inadequacies in the 
underlying hardware which result in busy spinning (yes, it's insane, and it 
really does eat the whole CPU)... but if I don't poll quickly enough then 
data can be lost. Relevant code (and my non-stream hack) 
here: 
https://github.com/fommil/emokit-java/blob/master/src/main/java/com/github/fommil/emokit/EmotivHid.java#L84
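
(For what it is worth, a later Akka Streams way to express rate-limited polling is Source.tick. A sketch only, with a stand-in readPacket() in place of the real EmotivHid code; note that tick drops ticks when downstream is busy, so a slow consumer trades lost samples for bounded memory.)

  import akka.actor.ActorSystem
  import akka.stream.ActorMaterializer
  import akka.stream.scaladsl.{Sink, Source}
  import scala.concurrent.duration._

  object PolledDevice extends App {
    implicit val system = ActorSystem("device-poll")
    implicit val materializer = ActorMaterializer()

    def readPacket(): Array[Byte] = Array.fill(32)(0: Byte) // stand-in for the blocking device read

    // Emit a tick roughly 57 times a second when there is demand; ticks with no
    // demand are dropped, so the device is neither busy-spun nor queued behind.
    Source.tick(initialDelay = 0.millis, interval = 17.millis, tick = ())
      .map(_ => readPacket())
      .runWith(Sink.foreach(p => println(s"packet of ${p.length} bytes")))
  }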

Best regards,
Sam



[akka-user] akka streams infinite data source example

2014-04-21 Thread Sam Halliday
Hi all,

I am very excited by akka streams -- it aims to solve a problem that I see 
time and time again. Every time I post to this list it feels like the 
solution is always "wait until Akka Streams is released...". Finally, it is 
here!

I intend to read the documentation fully, but I was a little disappointed 
that the activator examples did not have a simple example with an 
(effectively) infinite data source that can only be polled in serial, with 
parallel (but controllably finite) consumers.

Is there any chance of an example along these lines?

A month or so ago, I asked the same of the RxJava community and it turned 
out that it was a work-in-progress... so I created this little example 
comparing various approaches (I didn't write an Akka Actor implementation 
because it is quite obvious that it would just OOM):

  
https://github.com/fommil/rx-playground/blob/master/src/main/scala/com/github/fommil/rx/scratch.scala

The `ProducerObservableParser` reads in a CSV file one line at a time (the 
file is far too big to hold in memory), and then processes N rows in 
parallel, only reading more lines as the consumers finish each row. There 
is never more than a bounded number of rows in memory at any given point in 
time.

The RxJava POC Observable is here

  
https://github.com/fommil/rx-playground/blob/master/src/main/scala/com/github/fommil/rx/producers.scala


But what is the equivalent Akka Streams code? The BasicTransformation 
example reads in the whole text before "flowing" it, and I couldn't see 
anything where the consuming was happening in parallel.
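
(For later readers: with the shipped Akka Streams this shape is expressible without a custom Producer. A sketch under the assumption of a post-1.0 API (FileIO, Framing) and a stand-in parseRow; the point is that the file is only read as fast as the parallel stage asks for lines, so memory stays bounded.)

  import java.nio.file.Paths
  import akka.actor.ActorSystem
  import akka.stream.ActorMaterializer
  import akka.stream.scaladsl.{FileIO, Framing, Sink}
  import akka.util.ByteString
  import scala.concurrent.Future

  object BoundedCsv extends App {
    implicit val system = ActorSystem("bounded-csv")
    implicit val materializer = ActorMaterializer()
    import system.dispatcher

    def parseRow(line: String): Int = line.split(',').length // stand-in per-row work

    // Chunks are read on demand, framed into lines, and at most 8 rows are
    // parsed concurrently; demand only flows upstream when a slot frees up.
    FileIO.fromPath(Paths.get("huge.csv"))
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 4096, allowTruncation = true))
      .map(_.utf8String)
      .mapAsync(parallelism = 8)(line => Future(parseRow(line)))
      .runWith(Sink.ignore)
  }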

Best regards,
Sam



Re: [akka-user] mutually exclusive actor refs?

2014-02-24 Thread Sam Halliday
Thanks Roland,

Incidentally, is it possible to create an Actor that "intercepts" all 
messages sent to its children? It would appear to be a useful mechanism for 
implementing Patrik's suggestion.

e.g.

  MutualExclusionActor lives at /user/mutual
  with children Actors A and B

but any messages sent to /user/mutual/A actually go to the 
MutualExclusionActor and it knows that the intended destination is Actor A.
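
(A minimal sketch of Patrik's buffering suggestion from further down the thread, using Stash. The Read/Write/Finished protocol is made up for illustration, and the /user/mutual/A interception asked about above is not attempted: callers send to the parent, which forwards and stashes as needed.)

  import akka.actor.{Actor, Props, Stash}

  case class Read(msg: Any)   // hypothetical protocol: wrap requests...
  case class Write(msg: Any)
  case object Finished        // ...and have children report completion to the parent

  // The parent owns A (reader) and B (writer), forwards work to them, and
  // stashes whatever must wait, so the two sides are exclusive without a lock.
  class MutualExclusionActor(readerProps: Props, writerProps: Props) extends Actor with Stash {
    private val reader = context.actorOf(readerProps, "A")
    private val writer = context.actorOf(writerProps, "B")
    private var pendingReads = 0

    def receive: Receive = idle

    private def idle: Receive = {
      case Read(m)  => pendingReads += 1; reader ! m; context.become(reading)
      case Write(m) => writer ! m; context.become(writing)
    }

    private def reading: Receive = {
      case Read(m)  => pendingReads += 1; reader ! m
      case Finished => pendingReads -= 1; if (pendingReads == 0) { unstashAll(); context.become(idle) }
      case _: Write => stash() // the writer waits until every read reports Finished
    }

    private def writing: Receive = {
      case Finished => unstashAll(); context.become(idle)
      case _        => stash() // everything waits for the single writer
    }
  }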


On Friday, 21 February 2014 16:16:13 UTC, rkuhn wrote:
>
> Pro tip #1: do not view the Mailbox as anything that carries semantics or 
> helps in your application-level concerns. A Mailbox is just an internal 
> implementation detail of how messages get into the Actor, and the Actor is 
> what you should think about.
>
> Pro tip #2: don’t do distributed locks (or transactions for that matter), 
> it does not scale. Coordination and synchronization are extremely costly 
> and should be considered as such; for the purposes of this tip a modern 
> multi-core CPU counts as distributed. Letting all messages flow through an 
> actor is of course a form of coordination, but modeling it as such 
> expresses only the core concern with no extra overhead—using a Lock within 
> an Actor hurts the performance of everything that happens to execute on the 
> same Dispatcher.
>
> Regards,
>
> Roland
>
> 21 feb 2014 kl. 16:57 skrev Sam Halliday 
> >:
>
> :-P it's pretty trivial. I don't actually have a need for it at the 
> moment, it is just a thought experiment. I would probably want to reuse the 
> mailbox queue ordering rules, though. That will require some jiggery pokery 
> in the config. It would be interesting to profile a high load read/write 
> Actor with and without a sorted queue, and with and without inherited queue 
> ordering in the buffer.
>
> It just irks me that it couldn't be implemented at a framework level as a 
> reusable "exclusive actors" feature, which seems like it might be useful in 
> many situations and could possibly be implemented without the need for lots 
> of messages going back and forth.
>
> On Friday, 21 February 2014 15:48:32 UTC, Patrik Nordwall wrote:
>>
>>
>>
>>
>> On Fri, Feb 21, 2014 at 4:38 PM, Sam Halliday wrote:
>>
>>> It's not difficult in the way that it has been proposed. But I would 
>>> like to be able to abstract it in a clean way that doesn't steal 
>>> responsibility (from the mailbox).
>>>
>>
>> The only responsibility of the exclusive actor would be to keep track of 
>> this.
>>  
>>
>>>
>>> Also, this approach demands that the message queue be ordered since "I'm 
>>> Finished" messages are of very high priority.
>>>
>>
>> Not needed. It is not going to block until it receives enough Finished. 
>> It receives all messages and puts everything but Finished in an internal 
>> buffer. You can even use Stash for this. Describing it in more detail would 
>> take away all the fun from you. Give it a try.
>>
>> Cheers,
>> Patrik
>>
>>  
>>
>>>
>>>
>>> On Friday, 21 February 2014 15:20:57 UTC, Alec Zorab wrote:
>>>
>>>> Surely it's just standard application logic? The parent enforces the 
>>>> exclusivity just like any other application logic. What's the difficulty?
>>>>
>>>>
>>>> On 21 February 2014 14:47, Sam Halliday  wrote:
>>>>
>>>>> Sorry, how to know when the children have finished is not the right 
>>>>> question. What I really mean is how can the exclusivity be enforced 
>>>>> without 
>>>>> introducing a lock in the delegating actor in order to allow waiting 
>>>>> until 
>>>>> the children actors are all available?
>>>>>
>>>>> To introduce a lock like this would seem to me to be something that is 
>>>>> best achieved in the framework itself... would it be difficult to do such 
>>>>> a 
>>>>> thing?
>>>>>
>>>>>
>>>>>
>>>>> On Friday, 21 February 2014 14:37:15 UTC, Sam Halliday wrote:
>>>>>>
>>>>>> On Friday, 21 February 2014 14:28:44 UTC, Patrik Nordwall wrote:
>>>>>>
>>>>>>> On Fri, Feb 21, 2014 at 2:50 PM, Sam Halliday 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi all,
>>>>>>>>
>>>>>>>> Is there any concept of mutually exc

Re: [akka-user] mutually exclusive actor refs?

2014-02-21 Thread Sam Halliday
:-P it's pretty trivial. I don't actually have a need for it at the moment, 
it is just a thought experiment. I would probably want to reuse the mailbox 
queue ordering rules, though. That will require some jiggery pokery in the 
config. It would be interesting to profile a high load read/write Actor 
with and without a sorted queue, and with and without inherited queue 
ordering in the buffer.

It just irks me that it couldn't be implemented at a framework level as a 
reusable "exclusive actors" feature, which seems like it might be useful in 
many situations and could possibly be implemented without the need for lots 
of messages going back and forth.

On Friday, 21 February 2014 15:48:32 UTC, Patrik Nordwall wrote:
>
>
>
>
> On Fri, Feb 21, 2014 at 4:38 PM, Sam Halliday 
> 
> > wrote:
>
>> It's not difficult in the way that it has been proposed. But I would like 
>> to be able to abstract it in a clean way that doesn't steal responsibility 
>> (from the mailbox).
>>
>
> The only responsibility of the exclusive actor would be to keep track of 
> this.
>  
>
>>
>> Also, this approach demands that the message queue be ordered since "I'm 
>> Finished" messages are of very high priority.
>>
>
> Not needed. It is not going to block until it receives enough Finished. It 
> receives all messages and puts everything but Finished in an internal 
> buffer. You can even use Stash for this. Describing it in more detail would 
> take away all the fun from you. Give it a try.
>
> Cheers,
> Patrik
>
>  
>
>>
>>
>> On Friday, 21 February 2014 15:20:57 UTC, Alec Zorab wrote:
>>
>>> Surely it's just standard application logic? The parent enforces the 
>>> exclusivity just like any other application logic. What's the difficulty?
>>>
>>>
>>> On 21 February 2014 14:47, Sam Halliday  wrote:
>>>
>>>> Sorry, how to know when the children have finished is not the right 
>>>> question. What I really mean is how can the exclusivity be enforced 
>>>> without 
>>>> introducing a lock in the delegating actor in order to allow waiting until 
>>>> the children actors are all available?
>>>>
>>>> To introduce a lock like this would seem to me to be something that is 
>>>> best achieved in the framework itself... would it be difficult to do such 
>>>> a 
>>>> thing?
>>>>
>>>>
>>>>
>>>> On Friday, 21 February 2014 14:37:15 UTC, Sam Halliday wrote:
>>>>>
>>>>> On Friday, 21 February 2014 14:28:44 UTC, Patrik Nordwall wrote:
>>>>>
>>>>>> On Fri, Feb 21, 2014 at 2:50 PM, Sam Halliday wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> Is there any concept of mutually exclusive actor paths?
>>>>>>>
>>>>>>
>>>>>> No
>>>>>> You could build it within the exclusive actor, if you send all 
>>>>>> messages via that actor, including replies (or signals of completion).
>>>>>>
>>>>>>
>>>>> But how would I be able to tell that my child actors were actually 
>>>>> finished processing the messages?
>>>>>
>>>>>  
>>>>>
>>>>>>
>>>>>>
>>>>>>  
>>>>>>
>>>>>  
>>>>>>> What I mean by this is that
>>>>>>>
>>>>>>>   /user/exclusive/A
>>>>>>>   /user/exclusive/B
>>>>>>>
>>>>>>> are mutually exclusive (where A may be a round robin router of many 
>>>>>>> As) between A and B.
>>>>>>>
>>>>>>> This would enable concepts such as read/write actors where e.g. A 
>>>>>>> could be the reader and B the writer, with many As allowed at once 
>>>>>>> (with no 
>>>>>>> B message processing) and when B does any processing it stops messages 
>>>>>>> being sent to the As.
>>>>>>>
>>>>>>>
>>>>>>>  -- 
>>>>>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>>>>>> >>>>>>>>>> Check the FAQ: http://akka.io/faq/
>>>>>>> >>>>>>>>>> Search the arc

Re: [akka-user] mutually exclusive actor refs?

2014-02-21 Thread Sam Halliday
It's not difficult in the way that it has been proposed. But I would like 
to be able to abstract it in a clean way that doesn't steal responsibility 
(from the mailbox).

Also, this approach demands that the message queue be ordered since "I'm 
Finished" messages are of very high priority.

On Friday, 21 February 2014 15:20:57 UTC, Alec Zorab wrote:
>
> Surely it's just standard application logic? The parent enforces the 
> exclusivity just like any other application logic. What's the difficulty?
>
>
> On 21 February 2014 14:47, Sam Halliday 
> > wrote:
>
>> Sorry, how to know when the children have finished is not the right 
>> question. What I really mean is how can the exclusivity be enforced without 
>> introducing a lock in the delegating actor in order to allow waiting until 
>> the children actors are all available?
>>
>> To introduce a lock like this would seem to me to be something that is 
>> best achieved in the framework itself... would it be difficult to do such a 
>> thing?
>>
>>
>>
>> On Friday, 21 February 2014 14:37:15 UTC, Sam Halliday wrote:
>>>
>>> On Friday, 21 February 2014 14:28:44 UTC, Patrik Nordwall wrote:
>>>
>>>> On Fri, Feb 21, 2014 at 2:50 PM, Sam Halliday wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Is there any concept of mutually exclusive actor paths?
>>>>>
>>>>
>>>> No
>>>> You could build it within the exclusive actor, if you send all messages 
>>>> via that actor, including replies (or signals of completion).
>>>>
>>>>
>>> But how would I be able to tell that my child actors were actually 
>>> finished processing the messages?
>>>
>>>  
>>>
>>>>
>>>>
>>>>  
>>>>
>>>
>>>>> What I mean by this is that
>>>>>
>>>>>   /user/exclusive/A
>>>>>   /user/exclusive/B
>>>>>
>>>>> are mutually exclusive (where A may be a round robin router of many 
>>>>> As) between A and B.
>>>>>
>>>>> This would enable concepts such as read/write actors where e.g. A 
>>>>> could be the reader and B the writer, with many As allowed at once (with 
>>>>> no 
>>>>> B message processing) and when B does any processing it stops messages 
>>>>> being sent to the As.
>>>>>
>>>>>
>>>>>  -- 
>>>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>>>> >>>>>>>>>> Check the FAQ: http://akka.io/faq/
>>>>> >>>>>>>>>> Search the archives: https://groups.google.com/
>>>>> group/akka-user
>>>>> --- 
>>>>> You received this message because you are subscribed to the Google 
>>>>> Groups "Akka User List" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>>> an email to akka-user+...@googlegroups.com.
>>>>> To post to this group, send email to akka...@googlegroups.com.
>>>>> Visit this group at http://groups.google.com/group/akka-user.
>>>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>>>
>>>>
>>>>
>>>>
>>>> -- 
>>>>
>>>> Patrik Nordwall
>>>> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
>>>> Twitter: @patriknw
>>>>
>>>>   -- 
>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>> >>>>>>>>>> Check the FAQ: http://akka.io/faq/
>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com .
>> To post to this group, send email to akka...@googlegroups.com
>> .
>> Visit this group at http://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/groups/opt_out.
>>
>
>



Re: [akka-user] mutually exclusive actor refs?

2014-02-21 Thread Sam Halliday
Aah, I see what you mean. For me that seems too much like the actor is 
doing the job of the mailbox (and therefore potentially loses ordering 
rules). But what is worse: graying the lines of responsibility or a lock? 
Hmm.

Any chance something like this could be implemented as a feature in Akka? 
Would it even be possible to implement it as an akka-contrib or would it be 
too closely linked with the dispatcher/mailbox algorithms?

On Friday, 21 February 2014 15:10:38 UTC, Patrik Nordwall wrote:
>
> No locks. It would buffer incoming messages until they can be delegated.
>
> /Patrik
>
> 21 feb 2014 kl. 15:47 skrev Sam Halliday 
> >:
>
> Sorry, how to know when the children have finished is not the right 
> question. What I really mean is how can the exclusivity be enforced without 
> introducing a lock in the delegating actor in order to allow waiting until 
> the children actors are all available?
>
> To introduce a lock like this would seem to me to be something that is 
> best achieved in the framework itself... would it be difficult to do such a 
> thing?
>
>
> On Friday, 21 February 2014 14:37:15 UTC, Sam Halliday wrote:
>>
>> On Friday, 21 February 2014 14:28:44 UTC, Patrik Nordwall wrote:
>>
>>> On Fri, Feb 21, 2014 at 2:50 PM, Sam Halliday wrote:
>>>
>>>> Hi all,
>>>>
>>>> Is there any concept of mutually exclusive actor paths?
>>>>
>>>
>>> No
>>> You could build it within the exclusive actor, if you send all messages 
>>> via that actor, including replies (or signals of completion).
>>>
>>>
>> But how would I be able to tell that my child actors were actually 
>> finished processing the messages?
>>
>>  
>>
>>>
>>>
>>>  
>>>
>>
>>>> What I mean by this is that
>>>>
>>>>   /user/exclusive/A
>>>>   /user/exclusive/B
>>>>
>>>> are mutually exclusive (where A may be a round robin router of many As) 
>>>> between A and B.
>>>>
>>>> This would enable concepts such as read/write actors where e.g. A could 
>>>> be the reader and B the writer, with many As allowed at once (with no B 
>>>> message processing) and when B does any processing it stops messages being 
>>>> sent to the As.
>>>>
>>>>
>>>>  -- 
>>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>>> >>>>>>>>>> Check the FAQ: http://akka.io/faq/
>>>> >>>>>>>>>> Search the archives: 
>>>> https://groups.google.com/group/akka-user
>>>> --- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "Akka User List" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to akka-user+...@googlegroups.com.
>>>> To post to this group, send email to akka...@googlegroups.com.
>>>> Visit this group at http://groups.google.com/group/akka-user.
>>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>>
>>>
>>>
>>>
>>> -- 
>>>
>>> Patrik Nordwall
>>> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
>>> Twitter: @patriknw
>>>
>>>   -- 
> >>>>>>>>>> Read the docs: http://akka.io/docs/
> >>>>>>>>>> Check the FAQ: http://akka.io/faq/
> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
> --- 
> You received this message because you are subscribed to the Google Groups 
> "Akka User List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to akka-user+...@googlegroups.com .
> To post to this group, send email to akka...@googlegroups.com
> .
> Visit this group at http://groups.google.com/group/akka-user.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>



Re: [akka-user] mutually exclusive actor refs?

2014-02-21 Thread Sam Halliday


On Friday, 21 February 2014 15:10:38 UTC, Patrik Nordwall wrote:
>
> No locks. It would buffer incoming messages until they can be delegated.
>
> /Patrik
>
> 21 feb 2014 kl. 15:47 skrev Sam Halliday 
> >:
>
> Sorry, how to know when the children have finished is not the right 
> question. What I really mean is how can the exclusivity be enforced without 
> introducing a lock in the delegating actor in order to allow waiting until 
> the children actors are all available?
>
> To introduce a lock like this would seem to me to be something that is 
> best achieved in the framework itself... would it be difficult to do such a 
> thing?
>
>
> On Friday, 21 February 2014 14:37:15 UTC, Sam Halliday wrote:
>>
>> On Friday, 21 February 2014 14:28:44 UTC, Patrik Nordwall wrote:
>>
>>> On Fri, Feb 21, 2014 at 2:50 PM, Sam Halliday wrote:
>>>
>>>> Hi all,
>>>>
>>>> Is there any concept of mutually exclusive actor paths?
>>>>
>>>
>>> No
>>> You could build it within the exclusive actor, if you send all messages 
>>> via that actor, including replies (or signals of completion).
>>>
>>>
>> But how would I be able to tell that my child actors were actually 
>> finished processing the messages?
>>
>>  
>>
>>>
>>>
>>>  
>>>
>>
>>>> What I mean by this is that
>>>>
>>>>   /user/exclusive/A
>>>>   /user/exclusive/B
>>>>
>>>> are mutually exclusive (where A may be a round robin router of many As) 
>>>> between A and B.
>>>>
>>>> This would enable concepts such as read/write actors where e.g. A could 
>>>> be the reader and B the writer, with many As allowed at once (with no B 
>>>> message processing) and when B does any processing it stops messages being 
>>>> sent to the As.
>>>>
>>>>



Re: [akka-user] mutually exclusive actor refs?

2014-02-21 Thread Sam Halliday
Sorry, asking how to know when the children have finished is not the right 
question. What I really mean is: how can the exclusivity be enforced without 
introducing a lock in the delegating actor, so that it can wait until all of 
the child actors are available?

Introducing a lock like this seems to me like something best achieved in the 
framework itself... would it be difficult to do such a thing?
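
(Patrik's answer, earlier in this digest, is that no lock is needed: the
delegating actor can just switch behaviour and buffer. A minimal sketch of
that idea, assuming hypothetical Work/Done messages and an Akka version
where mixing in the Stash trait works with the default mailbox selection;
the Done ack is the "signal of completion" mentioned in the replies:)

  import akka.actor.{ Actor, ActorRef, Stash }

  // Hypothetical messages, purely for illustration.
  case class Work(payload: Any)
  case object Done // a child sends this back when it has finished

  // The delegating actor never blocks: while the children are busy it
  // switches behaviour and stashes anything new, releasing the stash once
  // every outstanding Done has arrived.
  class Delegator(children: Seq[ActorRef]) extends Actor with Stash {
    def receive = available

    def available: Receive = {
      case w: Work =>
        children.foreach(_ ! w)                 // hand the work out
        context.become(waiting(children.size))  // wait for every child to ack
    }

    def waiting(outstanding: Int): Receive = {
      case Done if outstanding == 1 =>
        unstashAll()                            // children free again
        context.become(available)
      case Done =>
        context.become(waiting(outstanding - 1))
      case _ =>
        stash()                                 // buffer instead of blocking
    }
  }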


On Friday, 21 February 2014 14:37:15 UTC, Sam Halliday wrote:
>
> On Friday, 21 February 2014 14:28:44 UTC, Patrik Nordwall wrote:
>
>> On Fri, Feb 21, 2014 at 2:50 PM, Sam Halliday wrote:
>>
>>> Hi all,
>>>
>>> Is there any concept of mutually exclusive actor paths?
>>>
>>
>> No
>> You could build it within the exclusive actor, if you send all messages 
>> via that actor, including replies (or signals of completion).
>>
>>
> But how would I be able to tell that my child actors were actually 
> finished processing the messages?
>
>  
>
>>
>>
>>  
>>
>
>>> What I mean by this is that
>>>
>>>   /user/exclusive/A
>>>   /user/exclusive/B
>>>
>>> are mutually exclusive (where A may be a round robin router of many As) 
>>> between A and B.
>>>
>>> This would enable concepts such as read/write actors where e.g. A could 
>>> be the reader and B the writer, with many As allowed at once (with no B 
>>> message processing) and when B does any processing it stops messages being 
>>> sent to the As.
>>>
>>>



Re: [akka-user] mutually exclusive actor refs?

2014-02-21 Thread Sam Halliday
On Friday, 21 February 2014 14:28:44 UTC, Patrik Nordwall wrote:

> On Fri, Feb 21, 2014 at 2:50 PM, Sam Halliday wrote:
>
>> Hi all,
>>
>> Is there any concept of mutually exclusive actor paths?
>>
>
> No
> You could build it within the exclusive actor, if you send all messages 
> via that actor, including replies (or signals of completion).
>
>
But how would I be able to tell that my child actors were actually finished 
processing the messages?

 

>
>
>  
>

>> What I mean by this is that
>>
>>   /user/exclusive/A
>>   /user/exclusive/B
>>
>> are mutually exclusive (where A may be a round robin router of many As) 
>> between A and B.
>>
>> This would enable concepts such as read/write actors where e.g. A could 
>> be the reader and B the writer, with many As allowed at once (with no B 
>> message processing) and when B does any processing it stops messages being 
>> sent to the As.
>>
>>



[akka-user] mutually exclusive actor refs?

2014-02-21 Thread Sam Halliday
Hi all,

Is there any concept of mutually exclusive actor paths?

What I mean by this is that

  /user/exclusive/A
  /user/exclusive/B

are mutually exclusive with respect to each other (where A may be a 
round-robin router of many As).

This would enable concepts such as read/write actors, where e.g. A could be 
the reader and B the writer: many As are allowed to process at once (while 
B processes nothing), and while B is processing, no messages are sent to 
the As at all.
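
(A rough sketch of the approach suggested in the replies: route every Read
and Write through one coordinating parent, which lets many reads run at
once, drains them before a single write, and buffers anything it cannot run
yet. The Read/Write/ChildDone names and the overall shape are illustrative
assumptions, not something from this thread:)

  import akka.actor.{ Actor, ActorRef }

  // Hypothetical protocol, purely for illustration.
  case class Read(query: Any)
  case class Write(update: Any)
  case object ChildDone // a reader or the writer reports completion

  // readers would typically be a round-robin router of As, writer a single B.
  class ReadWriteCoordinator(readers: ActorRef, writer: ActorRef) extends Actor {

    def receive = reading(inFlightReads = 0, pending = Vector.empty[Any])

    // Reads run freely; a Write forces the outstanding reads to drain first.
    def reading(inFlightReads: Int, pending: Vector[Any]): Receive = {
      case ChildDone =>
        onReadFinished(inFlightReads - 1, pending)
      case r: Read if pending.isEmpty =>           // no write waiting
        readers ! r
        context.become(reading(inFlightReads + 1, pending))
      case w: Write if pending.isEmpty && inFlightReads == 0 =>
        writer ! w                                 // nothing in flight: write now
        context.become(writing(Vector.empty))
      case other =>                                // a write is waiting, or just
        context.become(reading(inFlightReads, pending :+ other)) // arrived: defer
    }

    // The single write is running: everything else waits in the buffer.
    def writing(pending: Vector[Any]): Receive = {
      case ChildDone =>
        context.become(reading(0, Vector.empty))
        pending.foreach(self ! _)                  // replay deferred messages
      case other =>
        context.become(writing(pending :+ other))
    }

    // The head of the buffer, if any, is always the write that forced the drain.
    private def onReadFinished(inFlightReads: Int, pending: Vector[Any]): Unit =
      if (inFlightReads == 0 && pending.nonEmpty) {
        writer ! pending.head
        context.become(writing(pending.tail))
      } else
        context.become(reading(inFlightReads, pending))
  }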




[akka-user] unhandled messages in akka 2.0.5

2014-01-21 Thread Sam Halliday
Hi all,

I'm constrained to Scala 2.9.2 and Akka 2.0.5 on my project, and I am 
confused about how unhandled messages are intended to be logged.

It seems that in order to get any form of logging, the receive method must 
be wrapped in LoggingReceive. That then shows handled and unhandled 
messages in the logs, thanks to akka.actor.debug.receive. This is a bit of 
a pain: I'd rather not have to use a wrapper in the code to enable logging; 
I'd expect it to be purely a configuration setting.
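
(For reference, a receive wrapped in LoggingReceive looks roughly like the
sketch below in later Akka versions; the exact signature of the wrapper
differed slightly in 2.0.x, and it only logs when loglevel = DEBUG and
akka.actor.debug.receive = on are both set:)

  import akka.actor.Actor
  import akka.event.LoggingReceive

  // Illustrative actor: with debug logging enabled as above, every message
  // hitting this receive is logged as handled or unhandled.
  class PingActor extends Actor {
    def receive = LoggingReceive {
      case "ping" => sender() ! "pong"
    }
  }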

But even without the LoggingReceive, the unhandled messages are still 
published via

  context.system.eventStream.publish(UnhandledMessage(message, sender, self))

from the default "unhandled" implementation in Actor.

But this never seems to be logged. Changing akka.actor.debug.unhandled does 
nothing (maybe that setting was added in 2.1?), and neither does changing 
akka.actor.debug.event-stream.

Given that catching unhandled messages is a critical part of the dev / test 
cycle, I'm really very keen to be able to log these messages.

I've resorted to overriding the unhandled method and using the SLF4J logger 
in there, but what is the "correct" way to do this?
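
(One alternative to overriding unhandled in every actor: subscribe a small
listener to the event stream for UnhandledMessage and log from there. This
is only a sketch, with made-up names; it assumes the standard
akka.actor.UnhandledMessage event that 2.0.5 already publishes, as noted
above:)

  import akka.actor.{ Actor, ActorSystem, Props, UnhandledMessage }
  import akka.event.Logging

  // Subscribes to the event stream and logs every UnhandledMessage that any
  // actor in the system publishes.
  class UnhandledMessageLogger extends Actor {
    val log = Logging(context.system, this)

    override def preStart(): Unit =
      context.system.eventStream.subscribe(self, classOf[UnhandledMessage])

    def receive = {
      case UnhandledMessage(msg, from, to) =>
        log.warning("unhandled message {} from {} to {}", msg, from, to)
    }
  }

  // wired up once at startup, e.g.:
  //   val system = ActorSystem("demo")
  //   system.actorOf(Props[UnhandledMessageLogger], "unhandled-logger")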

Best regards,
Sam
