Re: [akka-user] Serial processing of messages by actor with Futures

2018-01-08 Thread Daniel Stoner
Akka Persistence may very well provide you some helpful ideas and insights. 
From a pure perspective it sounds as though you very much do want things 
to be blocking and serial. Are you aware that you can specify the execution 
context that actors run in, and can in fact limit it so that many threads 
do not get used?
https://doc.akka.io/docs/akka/current/dispatchers.html
And:
https://doc.akka.io/docs/akka/current/futures.html

The above urls may give you some quick ways to achieve what you want.
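As a sketch of the dispatcher approach from those docs (the dispatcher name and pool size are illustrative, not from this thread), a fixed-size thread-pool dispatcher can be declared in application.conf so the blocking actors can never consume more than a bounded number of threads:

```hocon
# Hypothetical dispatcher for actors that block on DB calls.
# Caps the number of threads these actors can consume.
blocking-db-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
  # Hand the thread back after each message for fairness.
  throughput = 1
}
```

With classic Akka an actor is then started with `props.withDispatcher("blocking-db-dispatcher")`; in Akka Typed the analogous mechanism is `DispatcherSelector.fromConfig("blocking-db-dispatcher")` when spawning.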

I won't suggest that such a blocking situation sounds like a great idea, and 
it could be that rethinking the way you do something else would remove the 
need for such a piece in your application, but I certainly won't preclude 
the possibility that this pattern is required.
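One non-blocking way to get per-id serial execution with plain JDK futures (no Akka involved; the class and method names here are illustrative, not from this thread) is to keep the tail of each id's future chain in a map and append every new task to it:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Serializes async tasks per key: each submitted task starts only after the
// previous task for the same key has completed - the actor-mailbox guarantee,
// re-created with CompletableFuture chaining instead of blocking.
public class PerKeySerializer {
    // Tail of each key's chain. Note: entries are never removed, so this
    // grows with the number of distinct keys (fine for ~1000 ids).
    private final Map<String, CompletableFuture<?>> tails = new ConcurrentHashMap<>();

    public <T> CompletableFuture<T> submit(String key, Supplier<CompletableFuture<T>> task) {
        CompletableFuture<T> result = new CompletableFuture<>();
        tails.compute(key, (k, tail) -> {
            CompletableFuture<?> prev =
                    (tail == null) ? CompletableFuture.completedFuture(null) : tail;
            return prev.handle((v, e) -> null)             // ignore the previous outcome
                       .thenCompose(ignored -> task.get()) // start the next task serially
                       .whenComplete((v, e) -> {
                           if (e != null) result.completeExceptionally(e);
                           else result.complete(v);
                       });
        });
        return result;
    }
}
```

A GetState arriving while a SetState's DB future is in flight would simply be submitted under the same id and run after it completes.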

Thanks,
Dan

On Friday, 5 January 2018 09:46:21 UTC, Martin Major wrote:
>
> Thank you, I don't have much experience with Akka Persistence, but I'll 
> definitely give it a shot. I have always used Akka Streams for true streaming 
> things like an HTTP stream or a Kafka client, but it is an interesting idea 
> to use it for things like this. I will surely try that.
>
> Thank you very much!
> Martin
>
>
> Dne čtvrtek 4. ledna 2018 15:31:12 UTC+1 Brian Maso napsal(a):
>>
>> I would consider Akka Persistence, or using Akka Streams with the 
>> Flow.mapAsync function. Either one provides a way to handle your situation 
>> without unlimited threads. Using raw actors alone, it will just be a lot of 
>> work, cleverness and wheel re-invention.
>>
>> Brian Maso
>>
>> On Thu, Jan 4, 2018 at 2:30 AM, Martin Major  wrote:
>>
>>> Hello,
>>>
>>> I have an application in Akka Typed where I have an instance of StateActor for 
>>> each id (around 1000 instances). Each StateActor accepts 2 messages: 
>>> GetState() and SetState(state).
>>>
>>> SetState saves its state to db and if that was successful, saves a copy 
>>> to local actor cache.
>>> GetState responds with state from local cache if it is already loaded or 
>>> loads the state from database, stores it locally and responds to caller.
>>>
>>> My DB api offers asynchronous access:
>>>
>>> val stateFuture: Future[State] = db.load(id)
>>> val storeFuture: Future[Boolean] = db.save(id, state) // boolean whether 
>>> store was successful
>>>
>>> My problem is that I need to ensure that messages within one actor are 
>>> processed serially. Thus GetState() can answer only after the Future in a 
>>> previous SetState() has completed.
>>>
>>> The easy solution is to use Await.result() on each Future and change the 
>>> asynchronous code to blocking. This has the big disadvantage that it would 
>>> use many threads (up to the number of actors).
>>>
>>> Another solution is to stash messages that come in while I'm doing 
>>> asynchronous calls. When an asynchronous call ends I'll first process messages 
>>> from my stash. The disadvantage of this solution is that I'd lose the mailbox 
>>> on actor restart.
>>>
>>> The last solution I came up with is to create a chained list of Futures that 
>>> will do all the work. Every incoming message is simply appended to the chain 
>>> and processed when everything before it is ready. The disadvantage of this 
>>> solution is that I'm escaping from the actor world into the Futures world, 
>>> and in case of restart or termination of the actor the chain of Futures can 
>>> still run.
>>>
>>> It would be great if I had the opportunity to signal from the actor whether I 
>>> want to process the next message or not, but IMHO that is not possible.
>>>
>>> What is the recommended way to achieve serial processing of messages 
>>> when the code contains Futures?
>>>
>>> Thank you very much,
>>> Martin
>>>
>>> -- 
>>> >> Read the docs: http://akka.io/docs/
>>> >> Check the FAQ: 
>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>> >> Search the archives: 
>>> https://groups.google.com/group/akka-user
>>> --- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Akka User List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to akka-user+...@googlegroups.com.
>>> To post to this group, send email to akka...@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/akka-user.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
-- 


Notice:  This email is confidential and may contain copyright material of 
members of the Ocado Group. Opinions and views expressed in this message 
may not necessarily reflect the opinions and views of the members of the 
Ocado Group. 

 

If you are not the intended recipient, please notify us immediately and 
delete all copies of this message. Please note that it is your 
responsibility to scan this message for viruses. 

 

Fetch and Sizzle are trading names of Speciality Stores Limited and Fabled 
is a trading name of Marie Claire Beauty Limited, both members of the Ocado 
Group.

 

References to the “Ocado Group” are to Ocado Group plc (registered in 

Re: [akka-user] Akka persistence in production - recommended journals?

2017-12-11 Thread Daniel Stoner
FYI - there seem to be community (free) split brain resolvers too!

Check out this blog I spotted in the Lagom community (Lagom runs Akka behind
the scenes, and this blog in particular focuses on it):
https://medium.com/stashaway-engineering/running-a-lagom-microservice-on-akka-cluster-with-split-brain-resolver-2a1c301659bd

It covers usage of:
https://github.com/mbilski/akka-reasonable-downing

Which, on the face of it, seems a relatively sensible and simple approach if
you have a statically sized cluster.
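For the statically sized case mentioned here, the standard knob is Akka's stock application.conf setting (the value is illustrative for a 3-node cluster):

```hocon
akka.cluster {
  # With a fixed cluster size of 3, a member will not be moved to 'Up'
  # until 3 members have joined - a minority half of a split cannot
  # reach this threshold and form a working cluster on its own.
  min-nr-of-members = 3
}
```

This only helps with the static-size assumption; dynamically sized clusters need a real downing strategy as discussed below.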

On 5 December 2017 at 19:43, Dobrin Ivanov <dobrin.iva...@gmail.com> wrote:

> Hi Daniel,
>
> Thanks for sharing this information, lots of interesting stuff!
> Seems that I need to dig more .. as usual :)
>
> Just saw this presentation https://www.youtube.com/watch?v=qfchx7y6c3c
> and it seems that there is a paid "Static Quorum" split brain strategy that
> looks very similar to your suggestion of "min number of clustered instances in
> application.conf == your cluster size - but this only works if you have
> statically sized clusters."
>
>
>
> On Thursday, November 30, 2017 at 3:52:57 PM UTC+2, Daniel Stoner wrote:
>>
>> Hi Dobrin,
>>
>> Cluster Sharding can indeed do most of the things you suggest - and we
>> use it for a much neater fit than Cluster Singletons. ClusterSingletons
>> should be for situations where you want a single 'DatabaseAccessActor' for
>> a whole application. ClusterSharding is better when you need lots of
>> 'ProductActors' but ensuring you only have 1 instance of ProductActor per
>> given Product. You do this via the 'shardId' - and on a single cluster you
>> can guarantee only a single Actor exists for a given shardId.
>>
>> In terms of whether your 'shards' will move around the cluster sensibly -
>> I couldn't comment factually. I've not personally seen a shard move around
>> - for instance get re-distributed in the situation more nodes are started
>> in your cluster - except if an actor fails and is restarted somewhere else.
>> It is something we will have to investigate one day - and I am pretty sure
>> I saw a very old research paper about getting Akka to do it - but no idea
>> if this became reality. If all else fails just put in some deterministic
>> arbitrary suicide in your actors (Once every 10k events) and redistribution
>> will happen naturally and at predictable moments anyway :)
>>
>> I would however draw attention to your mention of:
>> "even during split brain"
>>
>> As you have to be aware that your perception of 'I have 1 cluster and it
>> has a split brain' differs massively from the perspective of a node on one
>> half of that brain, which is 'I'm in a seemingly small cluster'.
>> It is absolutely possible during a split brain that is poorly handled -
>> that both sides of the split believe themselves to be fully functioning
>> clusters and your singletons get duplicated. This is what I refer to in
>> terms of finding out your persistence got ruined and you can no longer
>> deploy to new nodes (Akka persists cluster sharding information in an event
>> sourcing style constantly saying 'ShardX live on nodeY' and the likes - On
>> startup of a node this information is read so as to understand if another
>> node already owns a shardId or not and build the routing table for 'should
>> a request come in for ShardX I need to route to nodeY').
>>
>> Lots of really easy ways to fix it like setting your min number of
>> clustered instances in application.conf == your cluster size - but this
>> only works if you have statically sized clusters.
>>
>> Having implemented dynamically sizing clusters in AWS we went through
>> countless strategies for split brain awareness.
>>
>> What we ended up settling on, and which had robust characteristics, was:
>> 'If the cluster splits - and you're not with the oldest node by uptime
>> [which everyone can at least agree on] - then you should commit suicide'.
>>
>> The implementation of such was far far from trivial however and we were
>> fortunate that AWS was able to inform us about what nodes should exist in
>> the cluster even if nodes cannot communicate with each other (Albeit with a
>> ~20second lag on this information - which is why we had to rely on 'oldest
>> node' and not 'newest node').
>>
>> Lightbend provided a number of Split Brain awareness/handling solutions
>> for paid for subscribers which my team use nowadays - I haven't heard any
>> complaints but not being close enough to when that got changed I couldn't
>> tell you which of the solutions from Lightbend they used.
>>
>> On 30 November 2017 at 13:2

Re: [akka-user] Akka persistence in production - recommended journals?

2017-11-30 Thread Daniel Stoner
t Akka in the past and if I remember correctly the
> only way to get "actor per Aggregate per Akka cluster" was to use cluster
> singletons which is not a solution. Now as I'm reading the docs it seems
> that cluster sharding improved and this is supported, right?
>
> https://doc.akka.io/docs/akka/snapshot/cluster-sharding.html
> >It could for example be actors representing Aggregate Roots in
> Domain-Driven Design terminology. Here we call these actors “entities”.
> These actors typically have persistent (durable) state, but this feature is
> not limited to actors with persistent state.
> >
> >Cluster sharding is typically used when you have many stateful actors
> that together consume more resources (e.g. memory) than fit on one machine.
> If you only have a few stateful actors it might be easier to run them on a
> Cluster Singleton node.
>
> By supported I mean that Aggregates can now be spread across the cluster
> (and not sitting on the oldest node like when using cluster singletons) and
> also you cannot get the same Aggregate/actor on more then one cluster node
> at a time even during split brain or any other kind of failures, right?
>
>
>
>
> On Thursday, November 30, 2017 at 12:00:19 PM UTC+2, Daniel Stoner wrote:
>>
>> I would second Konrads suggestion of going with the Cassandra version. I
>> have had great experiences with that, and the AWS DynamoDB community
>> version (Albeit it lacked a lot of features and we ended up porting our own
>> version and bug fixing/extending it. Surprisingly this ended up being a lot
>> easier than I was expecting and we began by rethinking the testing on the
>> library first to verify our understanding of its behaviour).
>>
>> The real question is - Have you tried using Akka persistence in your test
>> environment? Or even locally on your own PC? (It's easy to spin up
>> Cassandra on your PC from my memory of doing it).
>>
>> If you're considering going towards Event Sourcing then be aware that it
>> can be tricky long term in production (having used it for 3 years): event
>> adaptation and migration become mandatory and integral to understand. You
>> really wish you'd known about it on day one when you designed your persisted
>> events.
>> If you run a cluster topology then transient errors in your cluster which
>> for all intents and purposes may even go unnoticed - can mess with your
>> stored events and cause critical failure at the point you try to restart
>> the application - several days after that network blip manifested!
>>
>> Will you choose Protobuf for your database entities and then have
>> problems interrogating your production data if your system fails to start
>> up? Will you write your own serializer/use something like JSON so that you
>> can debug this one day?
>>
>> There are many topics to understand before you should be making the call
>> on which one of these journals to use in production and I suggest you try
>> out running a cluster somewhere with persistence implemented and a
>> continuous integration pipeline that deploys on every commit without
>> downtimes. See if you can support keeping it alive and have the necessary
>> practices in place to remedy issues with it. Load test it heavily with
>> tools such as Gatling or Apache JMeter - nothing causes nodes to drop out
>> of the cluster quite like huge stop the world garbage collections when you
>> run at scale!
>>
>> I don't mean at all to scare you off - writing applications which scale
>> well horizontally and utilize advanced patterns such as Event Sourcing (And
>> receive all the benefits which that comes with!) is hugely rewarding and
>> gives your software a definite edge over other architectures. But do try to
>> make sure that you understand what the cost is of having them and the extra
>> architectural, maintenance and support implications it may have on your
>> product.
>>
>> Perhaps have a read of: https://martinfowler.com/eaaDev/EventSourcing.html
>> if you're thinking of heading in that direction - then try to understand how
>> Akka designed great solutions for each of the problems that this
>> architecture poses.
>>
>> On Wednesday, 29 November 2017 16:22:39 UTC, Konrad Malawski wrote:
>>>
>>> LevelDB is not, in any case, intended for production systems.
>>>
>>> Production ready journals would be:
>>> - the cassandra one, maintained by the akka team, it is most mature and
>>> has most features
>>> - the jdbc one, community maintained but seems to work well
>>> - we’ve heard of people using the mongo one, but I 

Re: [akka-user] Akka persistence in production - recommended journals?

2017-11-30 Thread Daniel Stoner
I would second Konrads suggestion of going with the Cassandra version. I 
have had great experiences with that, and the AWS DynamoDB community 
version (Albeit it lacked a lot of features and we ended up porting our own 
version and bug fixing/extending it. Surprisingly this ended up being a lot 
easier than I was expecting and we began by rethinking the testing on the 
library first to verify our understanding of its behaviour).

The real question is - Have you tried using Akka persistence in your test 
environment? Or even locally on your own PC? (It's easy to spin up 
Cassandra on your PC from my memory of doing it).

If you're considering going towards Event Sourcing then be aware that it can 
be tricky long term in production (having used it for 3 years): event 
adaptation and migration become mandatory and integral to understand. You 
really wish you'd known about it on day one when you designed your persisted 
events.
If you run a cluster topology, then transient errors in your cluster - which 
for all intents and purposes may even go unnoticed - can mess with your 
stored events and cause critical failure at the point you try to restart 
the application, several days after that network blip manifested!

Will you choose Protobuf for your database entities and then have problems 
interrogating your production data if your system fails to start up? Will 
you write your own serializer/use something like JSON so that you can debug 
this one day?
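The JSON-vs-Protobuf choice above is wired into Akka through its serialization bindings; a hedged sketch (the serializer class and event type names are made up for illustration - you would supply your own):

```hocon
akka.actor {
  serializers {
    # Hypothetical Jackson/JSON serializer you write yourself so that
    # journal rows stay human-readable for production debugging.
    order-json = "com.example.serialization.OrderEventJsonSerializer"
  }
  serialization-bindings {
    # Every event extending this marker type goes through the JSON serializer.
    "com.example.events.OrderEvent" = order-json
  }
}
```

The trade-off is the usual one: JSON is interrogable when the system won't start; Protobuf is smaller and faster but opaque in the journal.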

There are many topics to understand before you should be making the call on 
which one of these journals to use in production and I suggest you try out 
running a cluster somewhere with persistence implemented and a continuous 
integration pipeline that deploys on every commit without downtimes. See if 
you can support keeping it alive and have the necessary practices in place 
to remedy issues with it. Load test it heavily with tools such as Gatling 
or Apache JMeter - nothing causes nodes to drop out of the cluster quite 
like huge stop the world garbage collections when you run at scale!

I don't mean at all to scare you off - writing applications which scale 
well horizontally and utilize advanced patterns such as Event Sourcing (And 
receive all the benefits which that comes with!) is hugely rewarding and 
gives your software a definite edge over other architectures. But do try to 
make sure that you understand what the cost is of having them and the extra 
architectural, maintenance and support implications it may have on your 
product.

Perhaps have a read of: https://martinfowler.com/eaaDev/EventSourcing.html 
if you're thinking of heading in that direction - then try to understand how 
Akka designed great solutions for each of the problems that this 
architecture poses.

On Wednesday, 29 November 2017 16:22:39 UTC, Konrad Malawski wrote:
>
> LevelDB is not, in any case, intended for production systems.
>
> Production ready journals would be:
> - the cassandra one, maintained by the akka team, it is most mature and 
> has most features
> - the jdbc one, community maintained but seems to work well
> - we’ve heard of people using the mongo one, but I can’t say if that’s a 
> good idea, likely not?
>
> Happy hakking
>
> -- 
> Cheers,
> Konrad 'ktoso ' Malawski
> Akka  @ Lightbend 
>
> On November 30, 2017 at 1:13:58, Dobrin Ivanov (dobrin...@gmail.com 
> ) wrote:
>
> Hi,
>
> I'm trying to investigate whether to use akka persistence in production 
> using java.
>
> I've been looking at 
> https://doc.akka.io/docs/akka/snapshot/persistence.html
>
> And also tried this example : 
> https://github.com/akka/akka-samples/tree/2.5/akka-sample-persistence-java
>
> The article refers to (native LevelDB) 
> https://github.com/fusesource/leveldbjni while the example refers to 
> (some java LevelDB port)  https://github.com/dain/leveldb ... not sure 
> why?
>
> Simply changing to use the native one in the example does not work: 
> java.lang.NoClassDefFoundError: org/iq80/leveldb/impl/Iq80DBFactory
>
>
>
>
> Q1: So if I do not need a cluster/failover/replication (or I may want to do 
> it myself, for example) then can I use LevelDB in production? And if yes - I 
> guess it should be the native one?
>
>
> Q2: Then the article recommends replicated journals. So does anybody use 
> akka-persistence-jdbc OR okumin/akka-persistence-sql-async in production 
> for example?
> Are there any recommendations?
> I guess they can be used in java too and not only in scala but please 
> correct me if I'm wrong.
>
> Thanks!
> (inexperienced akka user)
>

[akka-user] Re: Has akka-http has abandoned per request actors in favor an anti-pattern DSL?

2017-04-04 Thread Daniel Stoner
Hi Kraythe,

Perhaps it helps to see a real world example that we've been working on - 
with a good number of routes involved.

This is from our AkkaHttpServer class. It's job is to inject all the routes 
(ordersV2, searchv3, searchTerms, persistence) which consist of around 6 
actual endpoints per injected class - into the right point in the hierarchy 
(Below the oAuth2 authenticator and any request/response loggers and 
whatnot that you may need).

We define the index.html and healthcheck routes in this class since they are 
one-liners that live above OAuth2 security; otherwise we would also inject 
them independently.

Route indexRoute = get(() -> route(pathSingleSlash(() -> 
getFromResource("web/index.html"))));
Route healthCheck = get(() -> path(PATH_HEALTH_CHECK, () -> 
extractRequestContext(healthCheckHandler::handle)));

Route apis = route(
indexRoute,
healthCheck,
oauth2Authentication(
accessTokenVerifier,
route(
ordersV2,
searchV3,
searchTerms,
persistence
)
)
);

return logRequestResult(
this::requestMethodAsInfo,
this::rejectionsAsInfo,
() -> handleExceptions(
exceptionHandlerLogAndReturnInternalError(),
() -> handleRejections(
rejectionHandlerLogAndReturnNotFound(),
() -> apis
)
)
);

Note the *handleExceptions *and *handleRejections* methods. Basically, if 
you're in the depths of a route (the bit supposed to handle the request) you 
have 3 options.

1) Handle the request and reply with an HttpResponse.
return HttpResponse.create().withStatus(StatusCodes.CREATED).addHeader(
RawHeader.create(HttpHeaders.LOCATION, location)
);
2) Reject the request with an explicit Rejection
return reject(Rejections.authorizationFailed());
3) Throw an exception directly.
throw new MyCustomException("Something went dodgy here!");

Now how you transpose those rejections or exceptions into an HttpResponse is 
up to your generic rejection or exception handler right at the very top of 
your routes. I've included our 'ErrorHandlingDirectives' class, which we 
simply extend (instead of extending AllDirectives) wherever we need this:

public class ErrorHandlingDirectives extends AllDirectives {

private static final Logger LOG = 
LoggerFactory.getLogger(ErrorHandlingDirectives.class);

public LogEntry requestMethodAsInfo(HttpRequest request, HttpResponse 
response) {
String headers = toCensoredHeaderJson(request.getHeaders());

return LogEntry.create(
"Server has received a request\n"
+ request.method().name() + " " + 
request.getUri().toString() + "\n"
+ headers + "\n"
+ "Server responded with a response\n"
+ response.status() + "\n"
+ "Content-Type: " + 
response.entity().getContentType().toString() + "\n"
+ "Content-Length: " + 
response.entity().getContentLengthOption().orElse(-1),
InfoLevel());
}

public LogEntry rejectionsAsInfo(HttpRequest request, List<Rejection> 
rejections) {
String headers = toCensoredHeaderJson(request.getHeaders());

return LogEntry.create(
"Server has received a request\n"
+ request.method().name() + " " + 
request.getUri().toString() + "\n"
+ headers + "\n"
+ "Server responded with a rejection\n"
+ 
rejections.stream().map(Rejection::toString).collect(Collectors.joining("\n")),
InfoLevel());
}

public ExceptionHandler exceptionHandlerLogAndReturnInternalError() {
return ExceptionHandler
.newBuilder()
.matchAny(throwable -> extractRequest(request -> {
LOG.warn("Error on route: " + request.method().value() 
+ " " + request.getUri().toString() + " " + throwable.getMessage(), 
throwable);
return complete(StatusCodes.INTERNAL_SERVER_ERROR);
})
).build();
}

public String toCensoredHeaderJson(Iterable<HttpHeader> headers) {
return StreamSupport
.stream(headers.spliterator(), false)
.map(header -> {
if (header instanceof Authorization) {
return header.name() + ": CENSORED";
}
return header.name() + ": " + header.value();
})
.collect(Collectors.joining("\n"));
}

}


So - yes, you 'can' write 1 big file with a bazillion routes in it - or you 
can do what 

[akka-user] Re: Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Daniel Stoner
I guess you have to remember that Akka HTTP originates from Spray - and so 
those choices were likely already made. (I'm sure there is a fully 
plausible performance/threading reason that is beyond me too, hehe.)

Well, I know how I'd do nesting in Java at least, if it's any help.

Implementing a custom directive is easy! And the RequestContext which is 
passed into every layer of your route (presumably it's an implicit value in 
Scala?) can be extended and passed around with the additional info you 
might need, such as a slowly building list of string segment paths. You can 
then signal that you're at a leaf node of your tree by calling end or the 
like - and return a single Path with the full linear evaluation of that 
point in the tree recursion.

So how I'd do it is implement my own directive called segment, maybe a 
little like this:

public abstract class SegmentDirectives extends AllDirectives {

public Route segment(String segment, Route inner) {
return this.mapRequestContext((innerCtx) -> {
// You could put some logic here and then do something - you control the
// RequestContext which gets passed to the child, and hence whether the
// lower-level stuff gets invoked or not based on what the full path is.
return new RequestContext(new 
MySuperNewRequestContextWhichObviouslyImplementsRequestContextInterface(
innerCtx.delegate(), segment));
});
}

}



Your new RequestContext impl could then just keep a List - which it makes 
available as a PathMatcher when you choose to 'finish' your tree, like:

public Route endSegment(Route innerRoute) {
return this.mapRequestContext(innerCtx -> {
if (innerCtx instanceof MySuperNewRequest.) {
return path(((MySuperNewRequest) innerCtx).getPathMatcherForBuiltPaths, 
innerRoute);
}
});
}

Then simply do a route a little like:

segment("v1", { segment("orders", {end({whatever})}), segment("customers", 
{end({whatever})}) })


Well, obviously the 'this is how I'd do it' bit is a lie - I wouldn't do it. 
I'd probably ask myself why I had hundreds of APIs and wanted to list them 
all in 1 mega-file using a nested tree that presumably is either going to 
flip-flop all over the classes in the project, or move further and further 
to the right of the screen as it becomes deeper.

I know the reality of software development is that you generally get stuck 
with tough situations like that from historical decisions, so fair enough if 
it's really required. At the end of the day, though - even writing 300 linear 
APIs reusing PathMatcher variables preconfigured for most of the common 
situations can end up about the same amount of code as the nested 
equivalent. I know it doesn't feel like it initially, but keep faith! :)

For context, our largest service has around 10 classes which implement Route 
- each class probably has about 6 APIs in it, and all of these are pulled 
in using Guice multi-binding - meaning I end up with 10 beautifully crafted, 
readable classes called things like V1OrdersRoute, V2OrdersRoute and 
V1CustomersRoute, which all have the same OAuth2 authentication 
protections and error logging applied when the multi-binding is injected 
and connected up to the Route flow on the HTTP server, ensuring no-one goes 
without good security protocols or basic access logging. In my HTTPServer I 
simply put @Inject private MultiBinder allMyRoutes and attach it 
into a tree with the above stated requirements.

On Friday, 17 March 2017 11:25:09 UTC, Alan Burlison wrote:
>
> I'm sure I must be missing something here because I can't believe path 
> matching in Akka HTTP could be broken in the way it seems to be, because 
> it would be unusable for anything other than toy applications if it was: 
>
> I'm composing route handing from a top-level handler and sub-handlers 
> like this: 
>
> pathPrefix("root") { 
>concat( 
>   pathPrefix("service1") { service1.route }, 
>   pathPrefix("service2") { service2.route } 
>) 
> } 
>
> where service1.route etc returns the sub-route for the associated 
> sub-tree. That works fine with a path of say "/root/service1", but it 
> *also* matches "/rootnotroot/service1", because pathPrefix() just 
> matches any arbitrary string prefix and not a full path segment. And if 
> I use path() instead of pathPrefix() it tries to match the entire 
> remaining path. What I'm looking for is something along the lines of 
> segment() where that fully matches just the next path segment and leaves 
> the remaining path to be matched by inner routes, but there doesn't seem 
> to be such a thing. 
>
> What am I missing? 
>
> Thanks, 
>
> -- 
> Alan Burlison 
> -- 
>


[akka-user] Re: Akka Source.queue (Java DSL query)

2017-02-14 Thread Daniel Stoner
Thanks all for the responses. In the end I did utilise something akin to 
Ian's solution of having a visible channel to send responses back on.

To retrieve the SourceQueueWithComplete I used:

SourceQueueWithComplete<Pair<Request, ResponsePromise<Request, Response>>> queue = Source
    .<Pair<Request, ResponsePromise<Request, Response>>>queue(bufferSize, OverflowStrategy.dropNew())
    .via(flow)
    .run(materializer);

You'll note a few points:
A) It is the run method which returns the materialized stream SourceQueue.
B) I don't just take Requests into my streams - but Pairs of requests and 
response promises (basically an async callback function).

In the end, though, I found the semantics of Streams a little cumbersome to 
get quite production-ready. Previously my Journal implementation was 
creating a stream per batched write to the database. Now I have a singleton 
queue and need it to be long-lived. Akka Streams collapse in the event of an 
exception by default, so I had to watch out for this happening (a lot of 
production debugging to find out why, how and whether my stream had 
collapsed turned up the very useful 'watchTermination' method, which allows 
you to watch and log the exception when a stream collapses).

Final code looked a little like this:

protected SourceQueueWithComplete<Pair<Request, ResponsePromise<Request, Response>>> createStream(
        int bufferSize,
        int actionsPerSecond,
        Flow<Pair<Request, ResponsePromise<Request, Response>>,
             Pair<CompletionStage, ResponsePromise<Request, Response>>,
             NotUsed> flow
) {
    return Source
        .<Pair<Request, ResponsePromise<Request, Response>>>queue(bufferSize, OverflowStrategy.dropNew())
        .via(flow)
        .withAttributes(ActorAttributes.withSupervisionStrategy(Supervision.getResumingDecider()))
        .map(this::tryResponseFulfilPromise)
        .throttle(actionsPerSecond, Duration.apply(1, TimeUnit.SECONDS), actionsPerSecond, ThrottleMode.shaping())
        .watchTermination(watchTerminationAndLog(this.getClass()))
        .to(Sink.ignore())
        .run(materializer);
}

private CompletionStage tryResponseFulfilPromise(
        Pair<CompletionStage, ResponsePromise<Request, Response>> in) {
    try {
        return in
            .first()
            .thenApplyAsync(res -> {
                fulfilPromise(in.second(), res);
                return Done.getInstance();
            })
            .exceptionally(e -> {
                fulfilPromise(in.second(), e);
                return Done.getInstance();
            });
    } catch (Exception e) {
        fulfilPromise(in.second(), e);
    }
    return CompletableFuture.completedFuture(Done.getInstance());
}

Finally - I use a per-request created stream to add to the queue, collect 
the response promises and mapAsync the response promises into their 
relevant output. This leaves me with the method signature I required for 
the Journal.

Would I recommend others try this approach? Definitely not.

I originally took on this task as we use Dynamo for database access, and it 
throws exceptions if you go beyond your set capacity. During Cluster 
startup we would frequently hammer the database and immediately go over 
whatever limit we set. (Normal DB usage is 5 reads/5 writes per second, but 
cluster startup easily surpassed 1500 reads per second). What I wanted to 
do was in essence slow down cluster shard startup.
However - shortly after switching all my database reads and writes to use 
the queue like this, I realised that a surprising number of reads are 
performed during the write phase, and that such complex, generalised code 
became increasingly difficult to maintain. Perhaps the equivalent Scala 
syntax is more manageable, but I was very tempted to go back to allowing 
the failures on the DB to occur during startup and to put in something more 
basic in terms of custom throttling.
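The "something more basic" could be as small as a fixed-window rate limiter in plain Java. A hypothetical sketch of that idea (my own class and method names, not the Streams throttle stage used above):

```java
import java.util.concurrent.TimeUnit;

/** Minimal fixed-window rate limiter: allows at most actionsPerSecond calls
 *  per one-second window, blocking the caller once the budget is spent.
 *  A deliberately crude stand-in for Streams' throttle, for illustration. */
class SimpleThrottle {
    private final int actionsPerSecond;
    private long windowStartNanos;
    private int usedInWindow;

    SimpleThrottle(int actionsPerSecond) {
        this.actionsPerSecond = actionsPerSecond;
        this.windowStartNanos = System.nanoTime();
    }

    /** Blocks until an action is permitted in the current window. */
    synchronized void acquire() throws InterruptedException {
        long now = System.nanoTime();
        long elapsed = now - windowStartNanos;
        if (elapsed >= TimeUnit.SECONDS.toNanos(1)) {
            windowStartNanos = now;   // window expired: start a fresh one
            usedInWindow = 0;
        }
        if (usedInWindow >= actionsPerSecond) {
            // budget spent: sleep until the window rolls over, then reset
            long remaining = TimeUnit.SECONDS.toNanos(1) - elapsed;
            TimeUnit.NANOSECONDS.sleep(Math.max(0, remaining));
            windowStartNanos = System.nanoTime();
            usedInWindow = 0;
        }
        usedInWindow++;
    }
}
```

Wrapping each Dynamo call in `acquire()` would cap the read/write rate without any stream machinery, at the cost of blocking caller threads.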

Thanks kindly,
Daniel Stoner

On Thursday, 14 July 2016 15:13:57 UTC+1, Daniel Stoner wrote:
>
> Recently I spotted a great example of how to use the Source.queue feature 
> in Streams to pre-materialise a flow and then pass events into it 
> independently.
> http://stackoverflow.com/a/33415214/5142410
>
> The examples utilising Actors were tempting but would over-complicate my 
> use case - which is to throttle writes to a database in a custom 
> persistence Journal implementation.
>
> Using Source.queue, for the life of me I cannot work out how to get hold 
> of the SourceQueue to then 'offer' into from outside this flow within the Java DSL.
>
> Scala example was:
>
> val queue = Source.queue(bufferSize, overflowStrateg

Re: [akka-user] Akka clusters using cluster client causing quarantined nodes (v2.3.11)

2017-01-05 Thread Daniel Stoner
In my organisation we had quite a number of problems with Cluster 
quarantine behaviour both in AWS and on our own local PCs (We launched a 
cluster of 5 actor systems during test so that tests emulated as closely as 
possible our intended production environment).

What we found was that quarantine happened when CPU usage on the box was 
very high. In essence the JVM was suffering from high CPU and for whatever 
reason the threads which would normally be communicating back and forth in 
the cluster did not get a chance to run at the interval they would need to 
in order to avoid Quarantine.

We also found that this occurred if there was a particularly long stop the 
world garbage collection.

Finally, another cause can be thread starvation: either starving the 
default fork-join pool (for instance by using Java 8 parallel streams), or 
starving the general Akka thread pool by consuming threads in async 
processes without defining dispatchers appropriately.

What helped us identify this issue was:
1) Historic CPU and JVM garbage collection information [Leave JConsole or a 
similar free JMX monitoring tool connected to your prod cluster overnight - 
chances are JMX will disconnect at the same time as the quarantine happens, 
which means the useful info is right at the end of the graph when you come 
in the next day :)]

2) Lots of logging on various processes of the form timestamp|what job you 
are engaged in|what part of the process you are at [Start/Middle/End etc]. 
We then processed this information into graphs to see that some processes 
had lots of work available but were not actually processing it [thread 
starvation]. We solved this by using Akka streams more and by specifying 
the dispatcher explicitly wherever we felt possible.

3) Ensuring that all Futures were handled with defined dispatchers when 
doing things such as mapAsync or the likes. At least then if starvation was 
to occur it would be isolated and easier to identify and wouldn't 
compromise the default features of the cluster.

Finally - you can mess around with the transport-failure-detector to set 
acceptable pauses and heartbeat intervals [I'm no expert so this may not be 
directly related to quarantine but I believe it helped us]
akka.remote.transport-failure-detector {
  acceptable-heartbeat-pause = 1 s
  heartbeat-interval = 200 ms
}
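As I understand it, this setting pair drives a deadline-style detector: a node is suspected once it has been silent for roughly heartbeat-interval plus acceptable-heartbeat-pause (the cluster's own detector is phi-accrual and more subtle). A simplified sketch of just that deadline trade-off, with my own class name:

```java
/** Simplified deadline-style failure detector: a node is considered available
 *  while the silence since its last heartbeat is shorter than
 *  heartbeat-interval + acceptable-heartbeat-pause. This only illustrates how
 *  the two settings trade detection speed against tolerance of GC/CPU pauses;
 *  Akka's cluster failure detector is phi-accrual, not this simple. */
class DeadlineDetectorSketch {
    private final long heartbeatIntervalMs;
    private final long acceptablePauseMs;
    private long lastHeartbeatMs;

    DeadlineDetectorSketch(long heartbeatIntervalMs, long acceptablePauseMs, long nowMs) {
        this.heartbeatIntervalMs = heartbeatIntervalMs;
        this.acceptablePauseMs = acceptablePauseMs;
        this.lastHeartbeatMs = nowMs;
    }

    /** Record a heartbeat arrival, resetting the deadline. */
    void heartbeat(long nowMs) { lastHeartbeatMs = nowMs; }

    /** Available while silence is within the combined deadline. */
    boolean isAvailable(long nowMs) {
        return (nowMs - lastHeartbeatMs) <= heartbeatIntervalMs + acceptablePauseMs;
    }
}
```

With the values above (200 ms interval, 1 s pause), a stop-the-world GC longer than about 1.2 s would trip the detector - which is exactly the failure mode described earlier.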

We did feel, however, that quarantined services were just a symptom of 
another hard-to-debug issue, and that config like this just makes the 
symptom appear less often (great if you want production to work for the 
next day or two, but bad next week when it properly collapses anyway 
because the root cause got worse).

And I would also 100% recommend not using auto-downing. Coming up with a 
good downing strategy can either be quite easy (Can I reach the 'leader 
node' [the node in my cluster that claims the earliest startup time]? 
Yes -> great, I'm part of the cluster! No -> I should really suicide, as 
I'm on the bad side of a split-brain network partition) or incredibly 
difficult if you are in an auto-scaling environment like AWS and want to 
ensure that the majority side of a network partition always survives 
without everyone deciding to suicide at once.
You can subscribe to events such as MemberUp or MemberDown to then initiate 
your detection/suicide strategy quite easily :)
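The core of the majority-side rule can be stated in a few lines of plain Java. A hypothetical sketch (my own names; a real split-brain resolver also has to handle membership churn, stability windows, and so on):

```java
import java.util.Set;

/** Hypothetical split-brain decision rule: the side of a partition that can
 *  still reach a strict majority of the last known membership stays up;
 *  every other side downs itself. This avoids the auto-downing trap where
 *  both sides of a partition survive as independent clusters. */
final class MajorityDowning {
    /** @param knownMembers last known full membership of the cluster
     *  @param reachable    members this node can currently reach (including itself) */
    static boolean shouldStayUp(Set<String> knownMembers, Set<String> reachable) {
        // Strict majority: on an exact 50/50 split neither side has one,
        // so both down themselves rather than leave two live clusters.
        return reachable.size() * 2 > knownMembers.size();
    }
}
```

Wiring this to MemberUp/MemberDown (or reachability) events then reduces to recomputing `reachable` on each event and downing yourself when the check fails.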

[My team had a lot of fun setting up meeting rooms with bookings for 
'Suicide Pact Discussion' but it wasn't nearly as fun as the meetings on 
what we do when orphaned children in actor trees need to be sent Poison 
commands].

On Wednesday, 4 January 2017 17:02:55 UTC, Tyler Brummett wrote:
>
> Thank you for that suggestion. We are stuck on 2.3.x for the time being, 
> but have plans to move to 2.4.x in the not so distant future, but it would 
> still be beneficial to solve this problem now.
>
> We've upgraded our Akka version to 2.3.15 and it seems that the quarantine 
> messages are no longer showing up in our logs. Which is great!
> However, some of the acknowledgement messages that we normally get during 
> one of our nightly processes are not making it back to the requesting 
> actor. This behavior is different since the upgrade happened and seems to 
> be the only thing preventing us from moving forward.
>
> Context:
> A request message is sent from one actor in actorSystemA to another actor 
> in actorSystemB. The actor in actorSystemB delegates work to be done by 
> some other back end process. Said back end process may take a few minutes 
> depending on the size of the data result coming back. Therefore, we have 
> 'hand-rolled' a solution to use a private ActorRef on the responding actor 
> in actorSystemB to periodically send 'update' messages every few seconds to 
> let the requesting actor in actorSystemA know that the back end process is 
> still processing its request.
>
> In every case, before the upgrade, we were always getting these 'update' 
> messages or acknowledgements 

Re: [akka-user] Re: Journal Plugin Java API question regarding doAsyncWriteMessages

2016-10-12 Thread Daniel Stoner
Beware that if you don't handle serialisation failures as rejections, you 
can easily get into a state whereby another actor keeps sending a message 
that simply will not serialise when persisted. Your only awareness of this 
will be the errors that might be logged in your JournalImpl, while your 
persistent actor keeps restarting because of the failure coming from the 
journal.

The symptom can be your read requests to the datastore going through the 
roof, and if, like me, you use a store which throttles based on reads per 
second, then you can end up denying service to the rest of your app.

On the flip side, I've seen journal implementations which rely implicitly 
on gapless sequences when reading from the DB - for instance the DynamoDB 
journal impl (last time I looked at least) reads messages in batches of 50 
until it finds an empty batch - yet despite this implicit requirement it 
does support rejections. If you have a continuous series of 50 rejections 
then you're in really deep trouble.
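To see why that combination is dangerous, here is a hypothetical in-memory model of such a replay loop (journal as a map from sequence number to event; all names mine): a gap of batchSize consecutive missing sequence numbers - exactly what batchSize consecutive rejections would leave behind - silently truncates the replay.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrates the gapless-sequence hazard: a replay loop that reads
 *  fixed-size batches of sequence numbers and stops at the first
 *  entirely-empty batch silently truncates replay when batchSize
 *  consecutive writes are missing (e.g. were rejected). */
final class BatchReplay {
    static List<Long> replay(Map<Long, String> journal, int batchSize) {
        List<Long> replayed = new ArrayList<>();
        long from = 1;
        while (true) {
            boolean sawAny = false;
            for (long seq = from; seq < from + batchSize; seq++) {
                if (journal.containsKey(seq)) {
                    replayed.add(seq);
                    sawAny = true;
                }
            }
            if (!sawAny) return replayed; // empty batch => assume end of journal
            from += batchSize;
        }
    }
}
```

Any event sitting beyond a full-batch-sized gap is simply never replayed, with no error raised.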

My best suggestion would be to set up a simple test which initialises a 
persistent actor with a journal that rejects everything. Take a look at 
callbacks like aroundPreStart, onPersistRejected and onPersistFailure and 
see what behaviour you want your application to have.
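The rejection-versus-failure distinction being tested is the journal write contract quoted later in this thread: one result per AtomicWrite (empty = stored, present = rejected before the store attempt), while a genuine storage failure fails the whole returned future. A sketch of that contract with stand-in types (my names, not akka's actual classes):

```java
import java.io.NotSerializableException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

/** Sketch of the journal write contract: per-message rejections vs
 *  whole-batch failure. 'AtomicWrite' is a stand-in for akka's type. */
final class JournalContract {
    static final class AtomicWrite {
        final Object payload;
        AtomicWrite(Object payload) { this.payload = payload; }
    }

    static CompletableFuture<List<Optional<Exception>>> asyncWriteMessages(
            List<AtomicWrite> batch) {
        List<Optional<Exception>> results = new ArrayList<>();
        for (AtomicWrite w : batch) {
            if (!(w.payload instanceof Serializable)) {
                // Rejection: validated *before* any store attempt; the message
                // must never appear in a later replay.
                results.add(Optional.of(
                        new NotSerializableException(w.payload.getClass().getName())));
            } else {
                // store(w) would happen here; if the store itself failed we
                // would fail the returned future for the entire batch instead.
                results.add(Optional.empty());
            }
        }
        return CompletableFuture.completedFuture(results);
    }
}
```

The one-result-per-write shape is what lets the persistence layer tell "this message is bad, drop it" apart from "the store is down, retry the batch".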

As a final comment - if you're implementing your own Journal, beware that 
EventAdapters are not free, and if you plan on allowing the feature in 
future (and it is incredibly useful whenever you plan to change your beans 
from whatever might be stored to some new version) then you must implement 
it in your Journal. We implemented EventAdapter on the read side only (EG 
you can read an old version of a bean and auto-convert it into a newer 
version) and the code for that was as simple as:

(Config is com.typesafe.config.Config)
EventAdapters eventAdapters = EventAdapters.apply(actorSystem, actorConfig);

private List<PersistentRepr> persistentFromByteBuffer(ByteBuffer b) throws NotSerializableException {
  PersistentRepr originalRepr =
      serialization.deserialize(ByteString.fromByteBuffer(b).toArray(), PersistentRepr.class).get();
  EventSeq adapted =
      config.eventAdapters.get(originalRepr.payload().getClass())
          .fromJournal(originalRepr.payload(), originalRepr.manifest());

  return JavaConversions.asJavaCollection(adapted.events()).stream()
      .map(originalRepr::withPayload)
      .collect(Collectors.toList());
}

On Tuesday, 11 October 2016 15:42:14 UTC+1, Patrik Nordwall wrote:
>
>
>
> On Tue, Oct 11, 2016 at 3:55 PM, Ian Grima  > wrote:
>
>> Thanks for your reply :). If I were not to support rejections, that would 
>> imply that if a serialization error occurs i would simply fail the entire 
>> Future correct? 
>>
>
> yes, right
>  
>
>> Gapless sequence numbers does indeed sound like a worthy consideration.
>>
>> Thanks Again,
>> Ian Grima
>>
>> On Tuesday, 11 October 2016 12:21:38 UTC+2, Patrik Nordwall wrote:
>>>
>>>
>>>
>>> On Tue, Oct 11, 2016 at 11:24 AM, Ian Grima  wrote:
>>>
 I am still looking for an opinion on this matter if anyone could help :)



 On Saturday, 8 October 2016 15:31:32 UTC+2, Ian Grima wrote:
>
> Hi, 
>
>   I was reading up on the API of implementing a custom journal plugin 
> in Java. Could you please tell me if I have understood this section 
> correctly.
>
> With regards to the following API:
>
>  Future<Iterable<Optional<Exception>>> doAsyncWriteMessages(Iterable<AtomicWrite> messages);
>
> whose Java docs can be found here:
>  
> http://doc.akka.io/docs/akka/snapshot/java/persistence.html#Journal_plugin_API
>  
> 
>
>
> Specifically this line : "If there are failures when storing any of 
> the messages in the batch the returned `Future` must be completed 
> with failure." 
>
> and this section: "The journal can also signal that it rejects 
> individual messages (`AtomicWrite`) by the returned 
> `Iterable<Optional<Exception>>`. The returned `Iterable` must
> have as many elements as the input `messages` `Iterable`. Each 
> `Optional` element signals if the corresponding `AtomicWrite` is 
> rejected or not, with an exception describing
> the problem. Rejecting a message means it was not stored, i.e. it 
> must not be included in a later replay. Rejecting a message is 
> typically done before attempting to store it,
> e.g. because of serialization error."
>
>
> I had difficulty understanding how it would be possible to reject an 
> individual atomic write when the first line states that the future must 
> be 
> completed with failure if storing a single message fails.
>

>>> Rejections are not failure to store them, but it can be seen as a 
>>> validation error, i.e. something that is performed before the database is 
>>> 

[akka-user] Event Adapters and custom journals

2016-08-10 Thread Daniel Stoner
Hi all,

Having read the docs on EventAdapters (
http://doc.akka.io/docs/akka/2.4.9-RC2/java/persistence.html#event-adapters-java)
and seeing the example of how to ignore particular classes I thought this
would be a perfect solution for me.

Having implemented the example, I have tried to follow the
application.conf example, where event-adapter is defined inside the
'journal.inmem' config location (EG the location specified by the
journal.plugin value). As such I have:

persistence {
  journal {
    plugin = "com.osp.scs.libnado.persistence.dynamodb"
  }
  ..more things
}

com.osp.scs.libnado.persistence.dynamodb {
  event-adapters {
    class-not-found = "com.osp.scs.libnado.akka.codec.ClassNotFoundEventAdapter"
  }
  event-adapter-bindings {
    "com.osp.scs.libnado.akka.codec.JacksonSerializable" = class-not-found
  }
}

Now it has suddenly occurred to me (since this isn't working) that actually
event adaptation is something that has to be implemented in each Journal
implementation that exists and not something that Akka is providing before
it gets to the journal itself.

Is this the case or am I just putting my config in the wrong place?
If the former the documentation on how to write your own custom Journal may
need updating to reflect the intended requirement to support this feature.

Further - are there any shortcuts for implementing the requirements, such
as there are with Serialization and the serialisation-bindings and other
assorted config? EG you just
utilise SerializationExtension.get(system).serialize(entity).

Thanks kindly,
Daniel Stoner
-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com

-- 


Notice:  This email is confidential and may contain copyright material of 
members of the Ocado Group. Opinions and views expressed in this message 
may not necessarily reflect the opinions and views of the members of the 
Ocado Group. 

 

If you are not the intended recipient, please notify us immediately and 
delete all copies of this message. Please note that it is your 
responsibility to scan this message for viruses. 

 

Fetch and Sizzle are trading names of Speciality Stores Limited and Fabled 
is a trading name of Marie Claire Beauty Limited, both members of the Ocado 
Group.

 

References to the “Ocado Group” are to Ocado Group plc (registered in 
England and Wales with number 7098618) and its subsidiary undertakings (as 
that expression is defined in the Companies Act 2006) from time to time. 
 The registered office of Ocado Group plc is Titan Court, 3 Bishops Square, 
Hatfield Business Park, Hatfield, Herts. AL10 9NE.

-- 
>>>>>>>>>>  Read the docs: http://akka.io/docs/
>>>>>>>>>>  Check the FAQ: 
>>>>>>>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>>>>>>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] Re: At-least once delivery and the ActorPath (Java examples)

2016-06-23 Thread Daniel Stoner
For reference - having conducted a lot of testing of various situations:

Both patterns would work - either delivering to a delegate actor that you 
have just created, or delivering to a fixed path - such as a cluster 
sharding location.

There is a benefit to the latter, in that during recovery you may perform 
a 'create a delegate, deliver to delegate' and subsequently (still in 
recovery) 'confirm delivery' - leaving your delegate actor essentially 
orphaned and requiring cleanup.
By sending the messages to yourself, should you both deliver and confirm 
during the recovery phase, you simply never deliver the message to 
yourself and hence perform no action.
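The bookkeeping behind this can be modelled in a few lines of plain Java (my own names, not akka's AtLeastOnceDelivery API): deliveryIds are handed out fresh each incarnation, replayed confirmations cancel replayed deliveries, and only what remains unconfirmed is actually sent once recovery completes.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.LongFunction;

/** Minimal model of at-least-once delivery bookkeeping. deliveryIds restart
 *  from 0 every incarnation (they are not persisted); only deliveries that
 *  were never confirmed are actually (re)sent after recovery completes. */
final class AtLeastOnceModel {
    private long nextDeliveryId = 0;
    private final Map<Long, Object> unconfirmed = new LinkedHashMap<>();

    /** A replayed or live 'deliver': registers the message under a fresh id. */
    long deliver(LongFunction<Object> messageWithId) {
        long id = nextDeliveryId++;
        unconfirmed.put(id, messageWithId.apply(id));
        return id;
    }

    /** A replayed or live confirmation: drops the pending delivery. */
    boolean confirmDelivery(long id) { return unconfirmed.remove(id) != null; }

    /** What actually goes on the wire after RecoveryCompleted. */
    List<Object> pendingDeliveries() { return new ArrayList<>(unconfirmed.values()); }
}
```

This mirrors the 5-delivered/3-confirmed recovery example from the original question: after replay, only the 2 unconfirmed messages are sent.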

I'm keen to point out that, if you follow this second pattern, you should 
not persist the message you deliver to yourself.

Thanks kindly,
Daniel Stoner

On Wednesday, 22 June 2016 08:33:36 UTC+1, Daniel Stoner wrote:
>
> Hi all,
>
> The semantics of At-least once delivery still confuse me to some degree. I 
> understand that you utilise:
> deliver(someActorPath, (Long id)-> someMessage.withId(id))
>
> And then later
> receive(
> SomeMessageDone.class,
> msg->confirmDelivery(msg.id)
> )
>
> My query relates to the ActorPath. It is my understanding that during a 
> restart of the server - at-least once delivery only attempts re-delivery 
> after recovery on the actor has finished.
> EG A sharded persistent actor wakes up and goes through all of its 
> recovery messages. In these messages it would have chosen to deliver 5 
> things - and confirm 3 of them. As such once RecoveryCompleted is received 
> - it then carries out the 2 deliveries of things which were not confirmed.
> In this sense - the Long id for the deliveryId would always start at 0 
> when an actor first wakes up (Which makes sense since it doesn't really 
> persist that id anywhere)
>
> Does this mean - that the following - creating a brand new actor to 
> delegate work to as part of delivery would make sense or not?
>
> ActorRef brandNewActorDedicatedJustToThisMessage = 
> system.actorOf(props...)
> delivery(brandNewActor.path(), someMessage)
>
> In this sense - the first call to deliver might be to path 
> /user/someRandomX - and after a restart the new attempt to delivery will be 
> to a completely different /user/someRandomY. The functionality would work 
> fine however.
>
> Or am I incorrect - and at-least once delivery will simply re-deliver to 
> the same path over and over again EG /user/someRandomX which no longer 
> exists after a restart?
> In which case the only sensible path to send to would seem to be the 
> sharded actor itself - EG I should follow a pattern of:
> deliver(self().path(), .)
>
> Thanks kindly,
> Daniel Stoner
> -- 
> Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
> daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com
>
>


[akka-user] At-least once delivery and the ActorPath (Java examples)

2016-06-22 Thread Daniel Stoner
Hi all,

The semantics of At-least once delivery still confuse me to some degree. I
understand that you utilise:
deliver(someActorPath, (Long id)-> someMessage.withId(id))

And then later
receive(
SomeMessageDone.class,
msg->confirmDelivery(msg.id)
)

My query relates to the ActorPath. It is my understanding that during a
restart of the server - at-least once delivery only attempts re-delivery
after recovery on the actor has finished.
EG A sharded persistent actor wakes up and goes through all of its recovery
messages. In these messages it would have chosen to deliver 5 things - and
confirm 3 of them. As such once RecoveryCompleted is received - it then
carries out the 2 deliveries of things which were not confirmed.
In this sense - the Long id for the deliveryId would always start at 0 when
an actor first wakes up (Which makes sense since it doesn't really persist
that id anywhere)

Does this mean - that the following - creating a brand new actor to
delegate work to as part of delivery would make sense or not?

ActorRef brandNewActorDedicatedJustToThisMessage =
system.actorOf(props...)
delivery(brandNewActor.path(), someMessage)

In this sense - the first call to deliver might be to path
/user/someRandomX - and after a restart the new attempt to delivery will be
to a completely different /user/someRandomY. The functionality would work
fine however.

Or am I incorrect - and at-least once delivery will simply re-deliver to
the same path over and over again EG /user/someRandomX which no longer
exists after a restart?
In which case the only sensible path to send to would seem to be the
sharded actor itself - EG I should follow a pattern of:
deliver(self().path(), .)

Thanks kindly,
Daniel Stoner
-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com



[akka-user] Re: Akka-HTTP JavaDSL and PathMatcher3 and upwards

2016-05-24 Thread Daniel Stoner
For anyone wondering - the answer was to use the following:
>
> pathPrefix(
>  PathMatchers
>  .segment(PATH_V1)
>  .slash(PATH_ORDERS)
>  .slash(PATH_PARAM_UUID),
>  orderId -> pathPrefix(
>  PathMatchers
>  .segment(PATH_MAJOR_VERSION)
>  .slash(PATH_PARAM_MAJOR_VERSION)
>  .slash(PATH_MINOR_VERSION)
>  .slash(PATH_PARAM_MINOR_VERSION)


This only made sense to me after someone happened to randomly show me their
Spray API from a while back :) Having never used Spray in Scala I've found
it quite difficult to guess what you can do where.

Thanks kindly,
Daniel Stoner

On 24 May 2016 at 14:50, Daniel Stoner <daniel.sto...@ocado.com> wrote:

> PathDirectives has path(PathMatcher1... and path(PathMatcher2... but no
> path(PathMatcher3 or onwards.
>
> I get that there has to be a limit at some point for people who want 52
> variables in their URLs - but Is there some alternative way I am supposed
> to write the equivalent of this code which matches a URL
> of the form v1/orders//major//minor/
>
> path(
>>  PathMatchers
>>  .segment(PATH_V1)
>>  .slash(PATH_ORDERS)
>>  .slash(PATH_PARAM_UUID)
>>  .slash(PATH_MAJOR_VERSION)
>>  .slash(PATH_PARAM_MAJOR_VERSION)
>>  .slash(PATH_MINOR_VERSION)
>>  .slash(PATH_PARAM_MINOR_VERSION),
>>  (orderId, majorVersion, minorVersion)-> doSomeBusinessThing
>
>
>
> Thanks in advance!
> Daniel Stoner
>
> --
> Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
> daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com
>
>


-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com



[akka-user] Akka-HTTP JavaDSL and PathMatcher3 and upwards

2016-05-24 Thread Daniel Stoner
PathDirectives has path(PathMatcher1... and path(PathMatcher2... but no
path(PathMatcher3 or onwards.

I get that there has to be a limit at some point for people who want 52
variables in their URLs - but is there some alternative way I am supposed
to write the equivalent of this code, which matches a URL
of the form v1/orders//major//minor/

path(
>  PathMatchers
>  .segment(PATH_V1)
>  .slash(PATH_ORDERS)
>  .slash(PATH_PARAM_UUID)
>  .slash(PATH_MAJOR_VERSION)
>  .slash(PATH_PARAM_MAJOR_VERSION)
>  .slash(PATH_MINOR_VERSION)
>  .slash(PATH_PARAM_MINOR_VERSION),
>  (orderId, majorVersion, minorVersion)-> doSomeBusinessThing



Thanks in advance!
Daniel Stoner

-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com



[akka-user] Akka-Http with Jackson confusion over proper usage of completeWithFutureResponse

2016-05-23 Thread Daniel Stoner
Hi All,

Hopefully a quick one asking for some advice/pointers on how to marshal and
complete a HTTP route using CompletionStage.

I've taken a look through the latest v2.4.6 documentation, including the
PetStore example. What I am trying to do is very similar - marshalling a
response from a high-level server-side API - but unlike in PetStore, where
it can do:
"return complete(StatusCodes.OK, thePet, Jackson.marshaller());"

Where we have thePet variable of type 'Pet', I have a CompletionStage<Pet>
since I have PatternsCS.ask'ed an actor to respond to the request.

What confuses me is that the signature for 'complete' directive is:
RouteAdapter complete(StatusCode status, T value, Marshaller<T,
*RequestEntity*> marshaller)

In other words it is the marshaller purely just for the entity that I
return in my response - Great as I can simply use Jackson.marshaller().
But the equivalent method that takes in a CompletionStage is:
RouteAdapter completeWithFuture(CompletionStage<T> value, Marshaller<T,
*HttpResponse*> marshaller)

In other words my CompletionStage<Pet> must have a marshaller which can
convert Pet to the whole HttpResponse - headers, status code and entity
included - meaning I now have to write some kind of mixed
marshaller/business POJO to HttpResponse mapper.

PetStore example link:
https://github.com/akka/akka/blob/v2.4.6/akka-http-tests/src/main/java/akka/http/javadsl/server/examples/petstore/PetStoreExample.java

Thanks hugely!
Dan

-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com



Re: [akka-user] Akka HTTP Java Directives examples

2016-04-20 Thread Daniel Stoner
Thank you Konrad.

Having just finished the 2.4.4 upgrade, I think you've all done a great job
of moving users closer to native Java 8 styling, which can only be a good
thing long term for adopters of the framework.

On 20 April 2016 at 13:23, Konrad Malawski <konrad.malaw...@lightbend.com>
wrote:

> As explained in the release notes on Akka.io/news we had to expedite the
> release of 2.4.4 as 2.4.3 had some blocker issues for users of https.
> Since then we continue to focus explicitly on the javadsl, spent entire
> weeks specifically on polishing it recently :)
>
> We think we'll be ready to ship it in 2.4.5 which is just a few days away
> actually.
>
> --
> Konrad Malawski
>
> From: Daniel Stoner <daniel.sto...@ocado.com> <daniel.sto...@ocado.com>
> Reply: Daniel Stoner <daniel.sto...@ocado.com> <daniel.sto...@ocado.com>
> Date: 20 April 2016 at 14:02:21
> To: Konrad Malawski <konrad.malaw...@typesafe.com>
> <konrad.malaw...@typesafe.com>
> CC: akka-user@googlegroups.com <akka-user@googlegroups.com>
> <akka-user@googlegroups.com>
> Subject:  Re: [akka-user] Akka HTTP Java Directives examples
>
> Hi Konrad/Others,
>>
>> I notice that Github issue was taken over into:
>> https://github.com/akka/akka/pull/20088 and has not made it yet into
>> v2.4.4.
>>
>> Is it on the roadmap for 2.4.5 or is there any update about plans for
>> this?
>>
>> Thanks kindly,
>> Daniel Stoner
>>
>> On 15 February 2016 at 16:00, Konrad Malawski <
>> konrad.malaw...@typesafe.com> wrote:
>>
>>> It's not yet merged nor complete.
>>> We'll focus on it after shipping 2.4.2 (tomorrow) and are hoping to
>>> include it in 2.4.3,
>>> timeline wise I can't promise anything though.
>>>
>>> Hope this helps.
>>>
>>> --
>>> Cheers,
>>> Konrad 'ktoso’ Malawski
>>> Akka <http://akka.io> @ Typesafe <http://typesafe.com>
>>>
>>> On 15 February 2016 at 16:58:46, Daniel Stoner (daniel.sto...@ocado.com)
>>> wrote:
>>>
>>> Thanks Konrad for the incredibly quick reply, I look forward to seeing
>>> the new mechanisms.
>>>
>>> By soon - what kind of timeline are you thinking?
>>>
>>> At the moment we are prototyping a product in akka-http so we aren't
>>> worried (at this stage at least) about production readiness too much and if
>>> we have to rewrite a layer of the app then so be it.
>>> Is there a SNAPSHOT/nightly build version or the likes where I can try
>>> out the new DSL?
>>>
>>> Thanks again,
>>> Dan
>>>
>>> On 15 February 2016 at 15:49, Konrad Malawski <
>>> konrad.malaw...@typesafe.com> wrote:
>>>
>>>> Hi Daniel,
>>>> Before I dive into your question I'd like to highlight that the Java
>>>> directives are undergoing an intense
>>>> rewrite just now (they're still experimental, so that's why we must do
>>>> it now),
>>>> and will be way more elastic soon:
>>>> https://github.com/akka/akka/pull/19632/files?diff=unified
>>>>
>>>> So it's not a very good time to invest heavily into the existing
>>>> JavaDSL.
>>>>
>>>> I'll give your exact need a closer look a bit later to see how we cover
>>>> it in the new DSL.
>>>>
>>>> --
>>>> Cheers,
>>>> Konrad 'ktoso’ Malawski
>>>> Akka <http://akka.io> @ Typesafe <http://typesafe.com>
>>>>
>>>> On 15 February 2016 at 16:45:14, Daniel Stoner (daniel.sto...@ocado.com)
>>>> wrote:
>>>>
>>>> The current documentation for AkkaHttp directives:
>>>>
>>>> http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/java/http/routing-dsl/directives/index.html#directives-java
>>>>
>>>> Says you can filter requests and modify requests/responses.
>>>>
>>>> I'd love to be able to implement such a thing - but I've seen no
>>>> examples of custom directives in Java.
>>>>
>>>> Does anyone have an example they could post that shows modifying
>>>> request/response. My use case is that I need to implement several things
>>>> (As separate directives):
>>>> 1) The equivalent of server access logs (Saying someone tried to go to
>>>> /GET?queryParam=x and this took 120ms to respond)
>>>> 2) Authentication via an external OAUTH system where requests may be
>>>>

Re: [akka-user] Akka HTTP Java Directives examples

2016-04-20 Thread Daniel Stoner
Hi Konrad/Others,

I notice that Github issue was taken over into:
https://github.com/akka/akka/pull/20088 and has not made it yet into v2.4.4.

Is it on the roadmap for 2.4.5 or is there any update about plans for this?

Thanks kindly,
Daniel Stoner

On 15 February 2016 at 16:00, Konrad Malawski <konrad.malaw...@typesafe.com>
wrote:

> It's not yet merged nor complete.
> We'll focus on it after shipping 2.4.2 (tomorrow) and are hoping to
> include it in 2.4.3,
> timeline wise I can't promise anything though.
>
> Hope this helps.
>
> --
> Cheers,
> Konrad 'ktoso’ Malawski
> Akka <http://akka.io> @ Typesafe <http://typesafe.com>
>
> On 15 February 2016 at 16:58:46, Daniel Stoner (daniel.sto...@ocado.com)
> wrote:
>
> Thanks Konrad for the incredibly quick reply, I look forward to seeing the
> new mechanisms.
>
> By soon - what kind of timeline are you thinking?
>
> At the moment we are prototyping a product in akka-http so we aren't
> worried (at this stage at least) about production readiness too much and if
> we have to rewrite a layer of the app then so be it.
> Is there a SNAPSHOT/nightly build version or the likes where I can try out
> the new DSL?
>
> Thanks again,
> Dan
>
> On 15 February 2016 at 15:49, Konrad Malawski <
> konrad.malaw...@typesafe.com> wrote:
>
>> Hi Daniel,
>> Before I dive into your question I'd like to highlight that the Java
>> directives are undergoing an intense
>> rewrite just now (they're still experimental, so that's why we must do it
>> now),
>> and will be way more elastic soon:
>> https://github.com/akka/akka/pull/19632/files?diff=unified
>>
>> So it's not a very good time to invest heavily into the existing JavaDSL.
>>
>> I'll give your exact need a closer look a bit later to see how we cover
>> it in the new DSL.
>>
>> --
>> Cheers,
>> Konrad 'ktoso’ Malawski
>> Akka <http://akka.io> @ Typesafe <http://typesafe.com>
>>
>> On 15 February 2016 at 16:45:14, Daniel Stoner (daniel.sto...@ocado.com)
>> wrote:
>>
>> The current documentation for AkkaHttp directives:
>>
>> http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/java/http/routing-dsl/directives/index.html#directives-java
>>
>> Says you can filter requests and modify requests/responses.
>>
>> I'd love to be able to implement such a thing - but I've seen no examples
>> of custom directives in Java.
>>
>> Does anyone have an example they could post that shows modifying
>> request/response. My use case is that I need to implement several things
>> (As separate directives):
>> 1) The equivalent of server access logs (Saying someone tried to go to
>> /GET?queryParam=x and this took 120ms to respond)
>> 2) Authentication via an external OAUTH system where requests may be
>> rejected with the equivalent of access denied responses. [Perhaps this is
>> better done as a Handler implementation? What are your thoughts?]
>>
>> I get how I could potentially nest handlers in a way such that I can do:
>> pathSingleSlash().route(
>>  get(path("home").route(handleWidth(new
>> AuthenticationHandler(homeHandler)))
>>  post(path("home").route(handleWidth(new
>> AuthenticationHandler(homePostHandler)))
>> )
>>
>> But I think Directives sound like the better place to put this logic at
>> the top of the route tree rather than repeat it constantly in all the
>> bottom leaves.
>>
>> Thanks kindly,
>> Daniel Stoner
>> --
>> Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
>> daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com
>>
>>

[akka-user] Re: Actor System UIDs and Quarantine behaviour

2016-02-25 Thread Daniel Stoner
A little more investigation suggests that we were wrong about the UIDs being
the issue.

We get the following log:
12:45:57.499 [clusterTestActorSystem-akka.actor.default-dispatcher-5] INFO
 a.c.Cluster(akka://clusterTestActorSystem) - Cluster Node [akka.ssl.tcp://
clusterTestActorSystem@127.0.0.1:2551] - New incarnation of existing member
[Member(address = akka.ssl.tcp://clusterTestActorSystem@127.0.0.1:2553,
status = Up)] is trying to join. Existing will be removed from the cluster
and then new member will be allowed to join.

12:45:57.500 [clusterTestActorSystem-akka.actor.default-dispatcher-5] INFO
 a.c.Cluster(akka://clusterTestActorSystem) - Cluster Node [akka.ssl.tcp://
clusterTestActorSystem@127.0.0.1:2551] - Marking unreachable node
[akka.ssl.tcp://clusterTestActorSystem@127.0.0.1:2553] as [Down]

12:45:58.178 [clusterTestActorSystem-akka.actor.default-dispatcher-17] INFO
 a.c.Cluster(akka://clusterTestActorSystem) - Cluster Node [akka.ssl.tcp://
clusterTestActorSystem@127.0.0.1:2551] - Leader is removing unreachable
node [akka.ssl.tcp://clusterTestActorSystem@127.0.0.1:2553]

12:45:58.187 [clusterTestActorSystem-akka.actor.default-dispatcher-19] WARN
 akka.remote.Remoting - Association to [akka.ssl.tcp://
clusterTestActorSystem@127.0.0.1:2553] having UID [-1829330708] is
irrecoverably failed. UID is now quarantined and all messages to this UID
will be delivered to dead letters. Remote actorsystem must be restarted to
recover from this situation.

Which I interpret as:

   - Because we turned off auto-downing of nodes, despite nodes being
   unreachable/terminated, they remain in the cluster list.
   - When we re-create a new incarnation of the node, the cluster realises
   it's a new incarnation (due to the unique UID).
   - The cluster removes the old incarnation.
   - And then immediately quarantines the new incarnation. (This is an
   assumption - I can't tell what the UID is of the old or the new one - and
   have assumed the quarantined instance to be the new one, since it now
   fails to ever get moved to Up.)


There are some obvious solutions that we will carry out - for instance, we
should manually down the nodes. But it does seem peculiar that, when
removing an old incarnation of a node, the cluster 'seemingly' quarantines
the new incarnation.

Thanks kindly,
Daniel Stoner

On 25 February 2016 at 10:57, Daniel Stoner <daniel.sto...@ocado.com> wrote:

> Hi,
>
> Recently we've been setting up some testing of our application when
> running as a Cluster. We start 1 actor system on port 2551 as part of our
> test suite.
>
> As part of this individual test we then start further servers on port 2552
> and 2553.
>
> This works great - we have a counting actor that shows the cluster has
> received MemberUp for all 3 nodes and our test succeeds.
>
> We thought we'd take it to the next level - and use the IntelliJ ide's
> feature to run our test suite 100 times to check this wasn't a fluke and
> when we did so we spotted some peculiar behaviour.
>
> For context - Actor System on port 2551 never gets stopped but the actor
> systems on port 2552/2553 get started during the individual test and
> stopped at its end. These are always brand new instances of
> ActorSystem.create(), we are not simply stopping/starting these servers.
>
> After about 30 runs of this test, during shutdown of 2552/2553 its very
> likely they both become quarantined by 2551. (Not a surprise).
> What is a surprise - is that when we re-create a brand new ActorSystem on
> 2552/2553 it is seen as being the same original server (hostname,port,uid)
> - and quarantine behaviour kicks in (IE No-one will talk to it and the test
> fails all further runs).
>
> From this piece of documentation:
> http://doc.akka.io/docs/akka/2.4.2/common/cluster.html
> "The identifier for each node is a hostname:port:uid tuple"
>
> So obviously the hostname and port remain the same when we shutdown and
> restart - but how is the 'uid' generated?
> Is this something based on the JVM/OS Thread things are being created in -
> or is this user configurable - since it seems our problem is that when
> creating new actor systems we are getting uid's which basically aren't very
> unique.
>
> Firstly - is there a way we can see the ActorSystems uid to confirm this
> is the case, and finally is there some way we can specify the uid used (to
> enforce uniqueness)?
>
> Thanks kindly,
> Daniel Stoner
>
> --
> Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
> daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com
>
>


-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com


[akka-user] Actor System UIDs and Quarantine behaviour

2016-02-25 Thread Daniel Stoner
Hi,

Recently we've been setting up some testing of our application when running
as a Cluster. We start 1 actor system on port 2551 as part of our test
suite.

As part of this individual test we then start further servers on port 2552
and 2553.

This works great - we have a counting actor that shows the cluster has
received MemberUp for all 3 nodes and our test succeeds.

We thought we'd take it to the next level and use the IntelliJ IDE's feature
to run our test suite 100 times to check this wasn't a fluke, and when we
did so we spotted some peculiar behaviour.

For context - Actor System on port 2551 never gets stopped but the actor
systems on port 2552/2553 get started during the individual test and
stopped at its end. These are always brand new instances of
ActorSystem.create(), we are not simply stopping/starting these servers.

After about 30 runs of this test, during shutdown of 2552/2553 it's very
likely they both become quarantined by 2551 (not a surprise).
What is a surprise is that when we re-create a brand new ActorSystem on
2552/2553 it is seen as being the same original server (hostname, port, uid),
and quarantine behaviour kicks in (i.e. no one will talk to it and the test
fails all further runs).

From this piece of documentation:
http://doc.akka.io/docs/akka/2.4.2/common/cluster.html
"The identifier for each node is a hostname:port:uid tuple"

So obviously the hostname and port remain the same when we shut down and
restart - but how is the 'uid' generated?
Is it based on the JVM/OS thread things are being created in, or is it
user-configurable? It seems our problem is that when creating new actor
systems we are getting UIDs which basically aren't very unique.

Firstly, is there a way we can see the ActorSystem's UID to confirm this is
the case? And finally, is there some way we can specify the UID used (to
enforce uniqueness)?
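For intuition, the hostname:port:uid identity rule from the docs quoted above can be modelled as a plain value type. This is a hypothetical, stdlib-only sketch (the NodeIdentity class is invented for illustration, not Akka's actual implementation): because the uid is part of the identity, a restarted system on the same host and port is a *new incarnation*, not the same member.

```java
import java.util.Objects;

class NodeIdentity {
    final String host;
    final int port;
    final long uid;

    NodeIdentity(String host, int port, long uid) {
        this.host = host;
        this.port = port;
        this.uid = uid;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof NodeIdentity)) return false;
        NodeIdentity n = (NodeIdentity) o;
        // uid is part of the identity: same host:port with a new uid
        // is a different incarnation, so the old one must be removed.
        return host.equals(n.host) && port == n.port && uid == n.uid;
    }

    @Override
    public int hashCode() {
        return Objects.hash(host, port, uid);
    }

    public static void main(String[] args) {
        NodeIdentity old = new NodeIdentity("127.0.0.1", 2553, -1829330708L);
        NodeIdentity fresh = new NodeIdentity("127.0.0.1", 2553, 42L);
        System.out.println(old.equals(fresh));  // prints "false"
    }
}
```

This is why quarantine is per-(host, port, uid): quarantining one uid should not, in principle, affect a fresh incarnation with a new uid.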

Thanks kindly,
Daniel Stoner

-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com



Re: [akka-user] Akka HTTP Java Directives examples

2016-02-15 Thread Daniel Stoner
Thanks Konrad for the incredibly quick reply, I look forward to seeing the
new mechanisms.

By soon - what kind of timeline are you thinking?

At the moment we are prototyping a product in akka-http so we aren't
worried (at this stage at least) about production readiness too much and if
we have to rewrite a layer of the app then so be it.
Is there a SNAPSHOT/nightly build version or the likes where I can try out
the new DSL?

Thanks again,
Dan

On 15 February 2016 at 15:49, Konrad Malawski <konrad.malaw...@typesafe.com>
wrote:

> Hi Daniel,
> Before I dive into your question I'd like to highlight that the Java
> directives are undergoing an intense
> rewrite just now (they're still experimental, so that's why we must do it
> now),
> and will be way more elastic soon:
> https://github.com/akka/akka/pull/19632/files?diff=unified
>
> So it's not a very good time to invest heavily into the existing JavaDSL.
>
> I'll give your exact need a closer look a bit later to see how we cover it
> in the new DSL.
>
> --
> Cheers,
> Konrad 'ktoso’ Malawski
> Akka <http://akka.io> @ Typesafe <http://typesafe.com>
>
> On 15 February 2016 at 16:45:14, Daniel Stoner (daniel.sto...@ocado.com)
> wrote:
>
> The current documentation for AkkaHttp directives:
>
> http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/java/http/routing-dsl/directives/index.html#directives-java
>
> Says you can filter requests and modify requests/responses.
>
> I'd love to be able to implement such a thing - but I've seen no examples
> of custom directives in Java.
>
> Does anyone have an example they could post that shows modifying
> request/response. My use case is that I need to implement several things
> (As separate directives):
> 1) The equivalent of server access logs (Saying someone tried to go to
> /GET?queryParam=x and this took 120ms to respond)
> 2) Authentication via an external OAUTH system where requests may be
> rejected with the equivalent of access denied responses. [Perhaps this is
> better done as a Handler implementation? What are your thoughts?]
>
> I get how I could potentially nest handlers in a way such that I can do:
> pathSingleSlash().route(
>  get(path("home").route(handleWidth(new
> AuthenticationHandler(homeHandler)))
>  post(path("home").route(handleWidth(new
> AuthenticationHandler(homePostHandler)))
> )
>
> But I think Directives sound like the better place to put this logic at
> the top of the route tree rather than repeat it constantly in all the
> bottom leaves.
>
> Thanks kindly,
> Daniel Stoner
> --
> Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
> daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com
>
>
>


-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com


[akka-user] Akka HTTP Java Directives examples

2016-02-15 Thread Daniel Stoner
The current documentation for AkkaHttp directives:
http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/java/http/routing-dsl/directives/index.html#directives-java

Says you can filter requests and modify requests/responses.

I'd love to be able to implement such a thing - but I've seen no examples
of custom directives in Java.

Does anyone have an example they could post that shows modifying a
request/response? My use case is that I need to implement several things
(as separate directives):
1) The equivalent of server access logs (Saying someone tried to go to
/GET?queryParam=x and this took 120ms to respond)
2) Authentication via an external OAUTH system where requests may be
rejected with the equivalent of access denied responses. [Perhaps this is
better done as a Handler implementation? What are your thoughts?]

I get how I could potentially nest handlers in a way such that I can do:
pathSingleSlash().route(
  get(path("home").route(handleWith(new AuthenticationHandler(homeHandler)))),
  post(path("home").route(handleWith(new AuthenticationHandler(homePostHandler))))
)

But I think directives sound like the better place to put this logic, at the
top of the route tree, rather than repeating it constantly in all the bottom
leaves.

Thanks kindly,
Daniel Stoner
-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com



[akka-user] Re: Keeping business logic and Actor system separate

2015-11-06 Thread Daniel Stoner
When we first began using Akka we had a similar approach and decided to put 
all the business logic in a service layer. This is a pretty standard Spring 
practice and it made testing a doddle - and it also meant we could start 
writing the logic before we'd confirmed our favourite Akka architecture.

EG:
Actor:
@Inject
MyBusinessService service;

onStartup() {
    StaticInjector.get().inject(this);
}

onReceive(Object msg /* really just a JSON String */) {
    persistAsync(msg, m -> service.process(demarshall(m)));
}

Think of that service.process method much like a Struts Action, in that 
the msg likely contains a method="updatePrices" bean="someNewPrice", much 
like a Form would in Struts.

This is the side on which we receive a message. In cases where we want to 
send a message, we used a utility which encapsulates all outbound 
communications (i.e. one place which deals with asking and telling things, 
so if we switched to JMS communication or HTTP web services we could have 
done it in those two places).
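The delegation pattern sketched above can be shown with plain-JDK stand-ins. Everything here is hypothetical and invented for illustration (ServiceDelegatingActor, PricingService, the hand-rolled mailbox - none of it is real Akka or Spring API): the point is that the "actor" only queues and routes messages, while the business logic lives in an injected service that can be unit-tested on its own.

```java
import java.util.ArrayDeque;
import java.util.Queue;

interface BusinessService {
    String process(String command);
}

class PricingService implements BusinessService {
    @Override
    public String process(String command) {
        // Struts-Action-style dispatch on a "method" field inside the message.
        if (command.contains("updatePrices")) {
            return "prices-updated";
        }
        return "ignored";
    }
}

class ServiceDelegatingActor {
    private final BusinessService service;  // injected, e.g. via Spring
    private final Queue<String> mailbox = new ArrayDeque<>();

    ServiceDelegatingActor(BusinessService service) {
        this.service = service;
    }

    void tell(String msg) {
        mailbox.add(msg);
    }

    // Stands in for onReceive(): process one queued message serially,
    // delegating all business logic to the service.
    String receiveOne() {
        String msg = mailbox.poll();
        return msg == null ? null : service.process(msg);
    }

    public static void main(String[] args) {
        ServiceDelegatingActor actor =
                new ServiceDelegatingActor(new PricingService());
        actor.tell("{\"method\":\"updatePrices\",\"bean\":\"someNewPrice\"}");
        System.out.println(actor.receiveOne());  // prints "prices-updated"
    }
}
```

Swapping PricingService for a mock makes the routing layer trivially testable, which is the testing "doddle" described above.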

Our actor layer was relatively clean and just handled persistence, 
sharding, failure and communication logic.
In the end, however, we found it far more powerful to be able to use Akka 
features directly throughout the business logic and abandoned this approach. 
This was made easier because by then we'd learned the ins and outs of 
Akka much more than when we first embarked on the project, so our fear 
of it as a 'new thing to us' went down drastically.

Thanks,
Dan

On Friday, 6 November 2015 03:39:33 UTC, Chinmay Raval wrote:
>
> Thanks Ryan.
>
> Idea was to keep actor infra separate from domain. Also dont want to 
> depend on actors, in future we should be able to migrate the infra from 
> actor to something else without touching the domain.
>
> On Thursday, 5 November 2015 22:58:01 UTC+5:30, Ryan Tanner wrote:
>>
>> That sounds like it may have the potential to violate actor 
>> encapsulation—actors should only be created by the ActorSystem itself.  It 
>> would depend on the exact implementation but that sounds worrisome to me.
>>
>> Why not just use composition?  Let your actors contain and manage 
>> business service objects.
>>
>> On Thursday, November 5, 2015 at 3:04:27 AM UTC-7, Chinmay Raval wrote:
>>>
>>> Hi,
>>>
>>> I am using Java 8 with Akka 2.3.12 for an enterprise application.
>>>
>>> To keep business logic and infrastructure separate, we are planning to 
>>> use java dynamic proxy that convert business services into akka actors in 
>>> the run time (with help of spring).
>>>
>>> I just want to confirm is this the right way to achieve separation or 
>>> there is some better way available?
>>>
>>> Thx,
>>> Chinmay
>>>
>>


[akka-user] Amazon Elastic Beanstalk and setting up seednodes

2014-12-15 Thread Daniel Stoner
I am trying to run an Akka cluster on Amazon Elastic Beanstalk. It's a work
in progress at the moment, but I have noticed that I am suffering cluster
partitioning in various situations.

My current set up is:
Scaling group set to 4 static instances, with the potential to go up to 10
based on demand.

On application startup, query the scaling group of Elastic Beanstalk and
add all ACTIVE instances to the list of this server's seed nodes (this list
will include the querying server itself).

The problem with this:
The documentation states that the first server in the seed-node list to
respond will be the one used for the current cluster state. This leads to
cluster partitioning when the first server to respond is actually itself.

However, simply removing yourself (except of course on the very first
server starting up) can, we believe, lead to situations where Elastic
Beanstalk starts 3 servers at the same time and they end up believing
only each other are their seed nodes, never contacting any of the older
active cluster nodes.

We have considered trying to choose the oldest instance, or some arbitrary
metric that is consistent across servers (e.g. the lowest IP address), but
would like to know what other people do to populate their seed-node list
in Amazon.
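One deterministic option along the "lowest IP address" line mentioned above: have every instance sort the ACTIVE instance list the same way, so all nodes compute an identical ordered seed list no matter who queries the scaling group. A stdlib-only sketch (SeedNodeOrder is a hypothetical helper, not an AWS or Akka API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class SeedNodeOrder {
    // Compare dotted-quad IPv4 addresses numerically, octet by octet,
    // so "10.0.0.9" sorts before "10.0.0.10" (a plain string sort would not).
    static long ipValue(String ip) {
        long v = 0;
        for (String octet : ip.split("\\.")) {
            v = (v << 8) | Integer.parseInt(octet);
        }
        return v;
    }

    // Every node derives the same ordering from the same ACTIVE instance
    // list, so they all agree on which seed node comes first.
    static List<String> orderedSeeds(List<String> activeInstanceIps) {
        List<String> seeds = new ArrayList<>(activeInstanceIps);
        seeds.sort(Comparator.comparingLong(SeedNodeOrder::ipValue));
        return seeds;
    }

    public static void main(String[] args) {
        System.out.println(orderedSeeds(
                List.of("10.0.0.10", "10.0.0.2", "10.0.0.9")));
        // prints "[10.0.0.2, 10.0.0.9, 10.0.0.10]"
    }
}
```

With a consistent ordering, even three instances started simultaneously agree on the same first seed node, avoiding the split-brain scenario described above.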

Thanks kindly,
Daniel Stoner

-- 
Daniel Stoner | Senior Software Engineer UtopiaIT | Ocado Technology
daniel.sto...@ocado.com | Ext 7969 | www.ocadotechnology.com



[akka-user] Re: Why my akka application is slow

2014-10-16 Thread Daniel Stoner
It may well not be the cause in your situation, but if you want to rule out 
serialisation troubles, turn on the following options in application.conf:

akka {
  actor {
    serialize-creators = on
    serialize-messages = on
  }
}

This will force your application to attempt to serialise everything that 
passes between actors, including the Props definitions used to create 
actors (which is where I spotted I'd put a non-serialisable Future inside 
my Props!). It's best to enable this only in your tests, or at least only 
temporarily: run your application on a single node and scan your logs for 
ERROR messages.
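As a rough illustration of the kind of failure serialize-messages surfaces, the 
plain-JDK sketch below (no Akka dependency; BadMessage and its Thread field are 
made-up stand-ins for a message carrying a non-serialisable reference such as a 
Future) round-trips an object through Java serialisation, which is what Akka's 
default Java serializer does under the hood:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCheck {

    // Hypothetical message class: the Thread field stands in for any
    // non-serialisable reference (e.g. a Future) smuggled into a
    // message or a Props definition.
    static class BadMessage implements Serializable {
        final String payload = "state";
        final Thread broken = new Thread(); // Thread is not Serializable
    }

    // Attempt to serialise msg; return null on success, or the error text.
    static String tryRoundTrip(Object msg) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(msg);
            return null;
        } catch (NotSerializableException e) {
            return "NotSerializableException: " + e.getMessage();
        } catch (IOException e) {
            return e.toString();
        }
    }

    public static void main(String[] args) {
        // A plain String serialises fine (prints null).
        System.out.println(tryRoundTrip("a plain string"));
        // BadMessage fails, naming the offending class.
        System.out.println(tryRoundTrip(new BadMessage()));
    }
}
```

With serialize-messages = on, an equivalent failure shows up as an ERROR in the 
Akka logs rather than a silently slow or dropped message.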

I had the same situation: everything ran lightning fast on one node, and 
when we tested our clustering the systems still ran successfully but took 
extraordinary quantities of time. It all came down to a lot of messages 
failing serialisation and falling back to the SQS queues we'd set up, only 
to eventually be consumed on the correct node.

Thanks,
Dan

On Wednesday, 15 October 2014 14:50:21 UTC+1, Shajahan Palayil wrote:

 Hi,

 I'm developing an Akka-based application using event sourcing and CQRS. 
 I'm using the cluster sharding feature of the contrib module, the remote 
 and cluster modules, and akka-persistence with JDBC persistence on PostgreSQL.

 The application has an actor hierarchy like in the diagram below, where the 
 top-level actor is cluster sharded and there are two levels of actors below 
 it (created using the usual context.actorOf() mechanism).

 https://drive.google.com/open?id=0ByesuJQ6vK9idWJxalR1NXN6QU0&authuser=0

 I ran some load tests on the application. Below are some findings:

 1. When running with only a single node in the cluster, the application is 
 really fast.
 2. When the application is started with more than one node, request 
 processing gets gradually slower with each request processed.

 Initially my assumption was that it had to do with Java serialization and 
 network latency. But when I actually looked at the logs, I found that the 
 time taken for a message sent from actor A to A/1 exceeds a second in some 
 cases, and in other cases it doesn't.
 Please keep in mind that there's no network overhead involved, as A-to-A/1 
 messaging happens on the same node and is just parent-child messaging.

 The actors don't do much computation other than applying some simple 
 business rules and persisting the events generated from the command.

 Questions:

 Why is the application slow when running on multiple nodes and fast on a 
 single node?

 *Pictures from the profiler:*

 *JVM memory attributes:*

 https://drive.google.com/open?id=0ByesuJQ6vK9iOHVRSEE0Um51TDQ&authuser=0

 *Thread status:*

 https://drive.google.com/open?id=0ByesuJQ6vK9idFM2U3dReGhKODg&authuser=0

 https://drive.google.com/open?id=0ByesuJQ6vK9iSHJLU2F4NUt2WG8&authuser=0

 *OS Attributes:*

 https://drive.google.com/open?id=0ByesuJQ6vK9iVzZGSnNiVTJUaHM&authuser=0


 I'd appreciate any pointers in the right direction.

 Thanks,
 Shajahan.


-- 


Notice:  This email is confidential and may contain copyright material of 
members of the Ocado Group. Opinions and views expressed in this message 
may not necessarily reflect the opinions and views of the members of the 
Ocado Group.

If you are not the intended recipient, please notify us immediately and 
delete all copies of this message. Please note that it is your 
responsibility to scan this message for viruses.  

References to the “Ocado Group” are to Ocado Group plc (registered in 
England and Wales with number 7098618) and its subsidiary undertakings (as 
that expression is defined in the Companies Act 2006) from time to time.  
The registered office of Ocado Group plc is Titan Court, 3 Bishops Square, 
Hatfield Business Park, Hatfield, Herts. AL10 9NE.

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.