If you find yourself with the same issue, turn it off and see if that helps.
On Saturday, August 6, 2016 at 11:56:55 AM UTC-5, tigerfoot wrote:
>
> Hi guys,
>
> Yeah, I knew the first thing y'all would need is a reproducer, but sadly
> it's wired into a lot of heavy complexity, and.
and see if I can at least come up with some way to strip
it down enough to be practically sharable and hopefully still reproduce the
problem.
On Saturday, August 6, 2016 at 1:58:16 AM UTC-5, drewhk wrote:
>
> Hi Greg,
>
> On Sat, Aug 6, 2016 at 1:11 AM, tigerfoot <gzo...@gmail.com
I should add that when it fails it just locks the stream: nothing further
is processed. No errors or other output is produced.
On Friday, August 5, 2016 at 6:11:34 PM UTC-5, tigerfoot wrote:
>
> I'm having a nasty issue I hope someone can help me with.
>
> I have some stre
I'm having a nasty issue I hope someone can help me with.
I have some stream code like this:
val contentAssembly = Flow[CRec].map { crec =>
  println("HERE!")
  val x = expression.render(crec.value).asInstanceOf[Message[OutputWrapper]]
  println("X is " + x)
  (crec,
I'll open a bug. Thanks for the tip about the async.
I'm using a flow here because it's a model for the much more complex flow
I'll be using in my actual project.
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ:
>> http://doc.akka.io/docs/akka/current/additional/faq.html
Thanks for this tip--I'll check these out! I know my timing methodology is
brain-damaged, but I am applying it consistently so I can still get some
rough comparative scope.
My current clue is the pre-M1 code's manual commit. Kafka is apparently
religious (and insistent) on maintaining
Using Streams 2.4.2. Streams isn't the problem. I can do a "hello world"
stream w/o reactive-kafka and get 6-figure results. It's fast!
Hello,
I'm using a simple consumer pulled right out of the examples in the github
repo:
var count = 0
val graph = GraphDSL.create(Consumer[Array[Byte], String](provider)) {
  implicit b => kafka =>
    import GraphDSL.Implicits._
    type In = ConsumerRecord[Array[Byte], String]
This is an Akka streams example but I have a simple project here:
https://github.com/gzoller/LateRabbit
In addition to providing a RabbitMQ source for streams, this project allows
you to late-acknowledge messages you pull off the queue, which is a nice
feature for streams.
On Thursday,
>
> Endre,
>
Thanks for this great explanation--makes complete sense. It's really
exciting to see this generation of Akka functionality come together!
Greg
Hello,
I'm running some simple HTTP performance measurements. I've created a
trivial "ping" endpoint (blindly returns "pong") in Akka HTTP and hit it
like this:
val now = System.currentTimeMillis()
(1 to 500).foreach(i => {
HttpUtil.send(HttpRequest(uri =
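The truncated snippet above sketches a simple measurement loop. A self-contained version of that idea is below, with the HTTP call replaced by a placeholder `work()` (a name of my choosing, not from the original code) so it stands alone; a warmup pass is included because JIT effects dominate short runs:

```scala
// Minimal timing-harness sketch: run `work` n times and return elapsed
// millis. The warmup pass is discarded, which takes some of the
// "brain-damaged methodology" sting out of rough comparative numbers.
object Timing {
  def time(n: Int)(work: () => Unit): Long = {
    val start = System.currentTimeMillis()
    (1 to n).foreach(_ => work())
    System.currentTimeMillis() - start
  }

  def measure(n: Int)(work: () => Unit): Long = {
    time(n)(work) // warmup pass, result discarded
    time(n)(work) // measured pass
  }
}
```

For real benchmarking something like JMH is the better tool, but for a rough comparative sense this is enough.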
Ah! I get it. (Any sleeps are just for testing to exaggerate timing
behavior so I can see how things will work; there will be no sleeps in the
final version. ;-) )
For anyone reading this thread in the future, here's what finally worked
for me:
case class Mixer(coll:scala.collection.mutable.ListBuffer[Int])
I am trying to write a flow that accumulates a collection of data (Int in
this example) and emits a List[Int] when either 1) the collection reaches a
certain size or 2) a timer tick is received.
I've implemented the code below. I have a println in the process() method
to know what's going on.
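The size-or-timer flush behavior described above can be sketched independently of Akka Streams. The `Batcher` below is a hypothetical stand-in (the names are mine, not from the original code): the stream stage would call `add` per element and `flush` on each timer tick.

```scala
import scala.collection.mutable.ListBuffer

// Sketch of the batching logic: accumulate Ints, emit a List[Int]
// when either the size threshold is reached or flush() is called
// (which is what a timer tick would do).
final class Batcher(maxSize: Int, emit: List[Int] => Unit) {
  private val buf = ListBuffer.empty[Int]

  def add(i: Int): Unit = {
    buf += i
    if (buf.size >= maxSize) flush()
  }

  def flush(): Unit =
    if (buf.nonEmpty) {
      emit(buf.toList)
      buf.clear()
    }
}
```

For what it's worth, Akka Streams also provides `groupedWithin(n, d)`, which batches by size or elapsed time in a single built-in combinator and may remove the need for a custom stage entirely.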
Hello,
I need a way to toggle a flow on/off (i.e. pause a flow). I created this
simple thing in the DSL:
val throttle = Flow[Int].map(i => {while(!toggle) Thread.sleep(100); i})
This works as I need it when I use it in a FlowGraph, but I hate the sleep.
Before I go off the tracks with
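One way to avoid the sleep loop, sketched outside of any Akka machinery with a hypothetical `Gate` class of my own naming: block on a plain JVM monitor instead of polling, so the paused thread parks until `resume()` is called. The map stage would call `gate.await()` where the snippet above spins on `Thread.sleep`.

```scala
// A pause/resume gate with no busy-waiting: await() returns
// immediately while open, and blocks on the monitor while paused.
final class Gate {
  private var open = true
  def pause(): Unit  = synchronized { open = false }
  def resume(): Unit = synchronized { open = true; notifyAll() }
  def await(): Unit  = synchronized { while (!open) wait() }
}
```

Note this still blocks a thread, so inside a stream stage it has the same caveats as the sleep version; a materialized-value switch (what later Akka Streams versions expose as a KillSwitch-style control) is the more stream-idiomatic direction.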
Hello,
I have a "switch" code shown below in Akka stream 1.0. I'll be totally
honest, I don't understand 100% of what's going on here--I reworked it from
a sample found someplace. Is there an example somewhere of what an
equivalent structure would look like in 2.0-M1?
Thanks for any hints!
One other thought... Any idea when Akka Stream/HTTP might be released
on Akka 2.4?
The one "gotta-have" feature for me in 2.4 is bind-hostname/bind-port,
which I need for getting my Akka app working in Docker properly. Works
great!
But my app is also stream-based. :-~
So I can probably
On Friday, October 23, 2015 at 6:42:15 PM UTC-5, tigerfoot wrote:
>
> I'm getting some "ambiguous reference" errors in my code relative to two
> calling signatures for akka.pattern.ask.
> My project imports Akka 2.4.0 only.
>
> Upon research Akka 2.3.x uses one of the sig
Hello,
I'm trying to run a test of a stream from RabbitMQ like this:
val z = RabbitSource[String](
  rc,
  LateQueue(outQ, durable = true, exclusive = false, autoDelete = false),
  1)
  .map(_.ack)
  .runWith(TestSink.probe[String])
  .expectSubscription()
println(z)
I want to pull things off the queue
I'm getting some "ambiguous reference" errors in my code relative to two
calling signatures for akka.pattern.ask.
My project imports Akka 2.4.0 only.
Upon research Akka 2.3.x uses one of the signatures and 2.4.0 uses a new
signature, so I'm guessing that the 2.3.x Akka is coming in as a
Thanks, Endre!
I'm referring to the outside case, and thanks for the issue link...I'll watch
it.
If I want to shut down a server node that's running (for example) a FlowGraph,
it'd be great to be able to send it a message that causes the Source to
stop pulling new input and then a
I have a stream (FlowGraph) that I want to gracefully stop, meaning I want
to stop accepting new input from the Source, wait until I'm sure everything
going through the pipe has either reached the Sink or been otherwise
disposed of (i.e. errored), then shut everything down.
Is there a
Hello,
I've got the following code that produces the error: too many arguments for
method formFields: (pdm:
akka.http.scaladsl.server.directives.FormFieldDirectives.FieldMagnet)pdm.Out
object Json {
val sj = ScalaJack()
val vc = VisitorContext(hintMap = Map("default" -> "type"))
def
Paul,
Try looking at my repo here: https://github.com/gzoller/docker-exp
Check out the cluster branch. It shows how to use the new dual-binding
features in Akka 2.4 to get it working w/Docker.
In my example you can either pass in host/port info for Akka, or it has some
facilities to infer this
Hello,
I'm trying to write a custom Unmarshaller for my Scalajack json parser. So
far it looks like this:
trait ScalajackSupport {
val sj = ScalaJack()
implicit def sjJsUnmarshaller[T](implicit mat:
Materializer,tt:TypeTag[T]): FromEntityUnmarshaller[T] = {
>
> I just noticed something else strange... If I print out the typetag (tt),
> I see this: TT: TypeTag[akka.http.scaladsl.common.StrictForm]
Why isn't that a typetag for my given type Friend in the call
friend.as[Friend]?
Would this work? In the code below would I get one channel/thread? Is
there one route instance per thread under the covers of Akka HTTP?
val connection = ... // create connection out here
val routes = {
val channel = connection.createChannel()
path("v1" / "hello") {
get {
Hello...
I'm reading through the RabbitMQ docs and it says they recommend having one
channel per thread to avoid concurrency issues.
My question is: Where in the Scala DSL for akka-http would I create a
Channel object that would be accessible inside the routing directives?
I suppose I
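The usual way to get channel-per-thread is a ThreadLocal, sketched below with hypothetical `Connection`/`Channel` traits standing in for the RabbitMQ client classes (the names mirror the client's but this is not its API). Any code path, including a route handler, can then grab its own thread's channel without sharing one across threads.

```scala
// Channel-per-thread pattern: each thread that calls get() lazily
// creates and caches its own Channel from the shared Connection.
trait Channel { def publish(msg: String): Unit }
trait Connection { def createChannel(): Channel }

final class PerThreadChannels(conn: Connection) {
  private val channel = new ThreadLocal[Channel] {
    override def initialValue(): Channel = conn.createChannel()
  }
  def get(): Channel = channel.get()
}
```

One caveat: route blocks in akka-http may run on different dispatcher threads per request, so with this pattern you can end up with one channel per dispatcher thread rather than one overall; that is usually fine for the concurrency-safety goal stated in the RabbitMQ docs.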
when the REST API is constantly hit by several processes and each process
triggers a message to some other node.
best
Carsten
On Monday, August 10, 2015 at 10:00:18 PM UTC+2, tigerfoot wrote:
Hello,
I'm trying to understand the best way to interact with a cluster. For
the sake
Hello,
Are there any methods or tools that help detect cluster fragmentation?
Once fragmentation is detected what's the best practice to correct it?
Thanks,
Greg
Actually that's not quite right... it's not nearly that simple. Platforms
like Mesos need to auto-assign the ports, so while, yes, 2.4 gives
dual-binding capabilities, this is only useful when you know the ports!
So how do you do that? That's much more complex.
I've got a hello-world remote
Just a quick word of enthusiastic support for any enhancements to upcoming
Akka releases that support Docker/container-friendly features!
I see that's the focus of 'Rollins', which is great news.
Docker and Akka seem like a great potential combination, and it is--with a
lot of work, trial and
Hello,
I had a working demo of Akka remoting working in a Docker container. I ran
my server in Docker and was able to communicate with it from an external
program. My application.conf looked like this:
akka {
loglevel = ERROR
stdout-loglevel = ERROR
loggers = [akka.event.slf4j.Slf4jLogger]
Any updates on SSL on the client? It wasn't clear to me from the tickets.
I'm having the same issue.
Thanks!
Greg
On Friday, November 7, 2014 at 9:56:16 AM UTC-6, Martynas Mickevičius wrote:
Hello Zirou,
I am afraid this is still not possible, even with the just-released 0.10.
You can
2.3.x. (if not, then this is a bug)
-Endre
On Wed, Jan 7, 2015 at 4:40 AM, tigerfoot <gzo...@gmail.com> wrote:
Hello,
I've got this code:
def httpGet( uri:String )(implicit s:ActorSystem) = {
implicit val materializer = FlowMaterializer()
var r:HttpResponse = null
val req
Hello,
I've got this code:
def httpGet( uri:String )(implicit s:ActorSystem) = {
  implicit val materializer = FlowMaterializer()
  var r:HttpResponse = null
  val req = HttpRequest(HttpMethods.GET, Uri(uri))
  val host:String = req.uri.authority.host.toString
  val port:Int = req.uri.effectivePort
  val
2.4 release!
On Monday, December 29, 2014 2:03:33 PM UTC-6, tigerfoot wrote:
Hello,
I'm trying to figure out how to get 2.4-SNAPSHOT going with Docker such
that 1) I can access akka+http outside the running Docker container (i.e.
from the host), and 2) I can enable clustering/discovery.
I
Hello,
I'm trying to figure out how to get 2.4-SNAPSHOT going with Docker such
that 1) I can access akka+http outside the running Docker container (i.e.
from the host), and 2) I can enable clustering/discovery.
I see from discussion threads and docs that the new bind-hostname and
bind-port
Thanks for the clarification... the new API is very different. I need to
let this whole flows thing sink in a bit, but I kinda see where it's going.
Granted this is just one use case but I am a little concerned about all
the machinery needed to perform something pretty simple, a GET
Interested to hear from anyone who has compared udp vs tcp for netty with akka
clustering. What trade-offs did you see? Realizing that networks are
certainly different, did you see any issues with nodes falling out of cluster
with udp? Any particular benefits?
Hello,
I have a system that does the following: read/parse from a huge (150+GB)
file of data, run sanity/cleaning operations, and store the cleaned records
in my DB.
I've got this configured using actors such that each time I read an object
from the input file I send a message to the cleaner
, tigerfoot wrote:
Hello,
I have a system that does the following: read/parse from a huge
(150+GB) file of data, run sanity/cleaning operations, and store the
cleaned records in my DB.
I've got this configured using actors such that each time I read an object
from the input file I send
Precisely what I needed. Worked great. Thank you!
Hello,
I have a remote actor that does work in its receive block that can return a
future. For example:
class FooActor() extends Actor {
  def receive = {
    case inMsg: Msg => sender ! doSomeWork(inMsg) // doSomeWork returns Future[Seq[String]]
  }
}
Would this work remotely? I tried using it
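The usual pitfall here is that `sender ! doSomeWork(inMsg)` sends the Future object itself (which is meaningless on the remote side), not its eventual value; the standard Akka answer is the `pipeTo` pattern, which replies only once the Future completes. The plain-Scala sketch below models the idea with the "reply" as a callback (names are mine, not from the original code):

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FutureReply {
  // Stand-in for the actor's work method.
  def doSomeWork(msg: String): Future[Seq[String]] =
    Future(Seq(msg, msg.toUpperCase))

  // Reply with the completed *value*, never the Future itself --
  // in an actor this would be: doSomeWork(msg).pipeTo(sender()).
  def handle(msg: String)(reply: Seq[String] => Unit): Unit =
    doSomeWork(msg).foreach(reply)
}
```

A detail worth remembering with `pipeTo`: capture `sender()` into a val before the Future completes, since `sender()` is only valid while the original message is being processed.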
Hello,
I have a job that's scheduled with Scheduler and I save the Cancellable.
The job isn't important, but it just sends a message to an Actor.
After scheduling, it runs for a while and at some point I call cancel() on
the Cancellable and need to ensure the job is actually done and won't
How expensive is creating an ActorSelection? Is it something I should
create and cache, or are they cheap enough I can create them as-needed?
Hello,
I'm using sbt-multi-jvm for testing (very cool!) with scalatest. It's
working generally great, but sometimes I get the following error:
[JVM-1] *** RUN ABORTED ***
[JVM-1] akka.pattern.AskTimeoutException: Ask timed out on
[Actor[akka://test/user/IO-HTTP#-2027621091]] after [1000 ms]
Hello,
I'm using Slf4j (logback) and always get the following output on startup:
[test-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger -
Slf4jLogger started
In my akka config I have these set:
akka.loglevel = ERROR
akka.stdout-loglevel = ERROR
In my logback.xml file I have
Hello,
I'm having some trouble getting 2 Akka nodes to communicate, so I'm trying
to see my URI info.
My receiver node's config:
akka {
log-dead-letters-during-shutdown = off
loglevel = ERROR
stdout-loglevel = ERROR
loggers = [akka.event.slf4j.Slf4jLogger]
actor {
provider =
Can a node be configured to belong to multiple (2) Akka clusters at the
same time?
I want to have one cluster of private back-channel communication and a
second (possibly with different/overlapping membership) for
public/application-level communication.
Hello,
We're running a distributed app on AWS using an Akka cluster v2.2.3.
A typical config for one of the nodes is below:
akka {
loglevel = WARNING
stdout-loglevel = WARNING
loggers = [akka.event.slf4j.Slf4jLogger]
actor {
provider = akka.cluster.ClusterActorRefProvider
}
cluster {
seed-nodes
Hello,
Are there any limitations on passing value classes in Akka?
I have this class:
case class ServerStats (
httpUrl : String,
instanceName : String,
envName : String,
version : String,
configHits : Long,
upSince : PosixDate,
// events :
need to investigate your serializer.
On Thu, Feb 13, 2014 at 9:00 PM, tigerfoot <gzo...@gmail.com> wrote:
Hello,
Are there any limitations on passing value classes in Akka?
I have this class:
case class ServerStats (
httpUrl : String,
instanceName : String
That was it! Thanks!
On Thursday, January 30, 2014 4:11:14 PM UTC-6, tigerfoot wrote:
Hello,
I'm getting the exception below from 2.3.0-RC1, triggered by this code:
val cfg = ConfigFactory.parseString(s"""
  akka.remote.netty.tcp.port = $port
  spray {
    can {
      server
Hello,
I'm getting the exception below from 2.3.0-RC1, triggered by this code:
val cfg = ConfigFactory.parseString(s"""
  akka.remote.netty.tcp.port = $port
  spray {
    can {
      server {
        request-timeout = 120s
      }
      client {
        request-timeout = 120s
      }
    }