[akka-user] Re: Distributed version of Akka streams

2015-11-17 Thread Soumya Simanta


On Tuesday, November 17, 2015 at 5:02:18 PM UTC+5:30, 
mathe...@sagaranatech.com wrote:
>
> Today you can use akka streams + kafka to create a distributed system. 
> This is not a perfect solution but works well.
>
>
Is the basic idea to create akka-streams Flows and deploy them on different 
physical nodes connected using Kafka? Is there a reference implementation 
somewhere I can check? 

Thanks
-Soumya


 

> Em terça-feira, 17 de novembro de 2015 05:52:47 UTC-3, Soumya Simanta 
> escreveu:
>>
>> Is a distributed version of Akka streams planned anytime in the near 
>> future? 
>>
>> Thanks
>> -Soumya
>>
>>

-- 
>>>>>>>>>>  Read the docs: http://akka.io/docs/
>>>>>>>>>>  Check the FAQ: 
>>>>>>>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>>>>>>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] Distributed version of Akka streams

2015-11-17 Thread Soumya Simanta
Is a distributed version of Akka streams planned anytime in the near 
future? 

Thanks
-Soumya



[akka-user] Re: How to handle long GC in AKKA Actor model system

2015-05-18 Thread Soumya Simanta



 @Soumya, I didn't understand your question. Based on my understanding I am 
 answering: if you have a terribly huge heap (not just very huge, it is 1 TB), 
 then the JVM struggles to clean it up.  

 My question is: how do you know that the GC pause is that long? Have 
you tried using VisualVM/YourKit to profile your JVM? 
Can you try with a smaller heap size, say 64G? How much actual RAM do you 
have on your box? 
 



[akka-user] Re: How to handle long GC in AKKA Actor model system

2015-05-18 Thread Soumya Simanta



 So when I say this, keep in mind I am a Java guy who is pretty new to Akka 
 and Scala, so this isn't expert advice. :)   But tuning Akka doesn't seem 
 like the appropriate place for this.  Either there is something in your 
 application layer that requires this massive heap that could be 
 re-evaluated, or there is tuning you can do at the JVM layer that will help 
 it handle GCs better.

 I agree. If this is a GC issue one needs to look carefully at the 
application. But still, a GC pause of 30 minutes looks like an eternity to 
me. Not impossible, but very unlikely IMO. 
 

 I'm going to guess the majority of that heap is living in swap and that's 
 why your GCs are so long?

 Interesting. Let's see how much physical RAM is on the machine. 

 

 



Re: [akka-user] End-to-End Reactive Streaming RESTful service (a.k.a. Back-Pressure over HTTP)

2015-05-18 Thread Soumya Simanta




 The back pressure is propagated to the client thanks to TCP's built-in 
 mechanisms for this - on the server side we simply do not read from the 
 socket
 until demand is available, which causes the back pressure to be propagated 
 properly.

 Konrad, 

So if we are *not* using a congestion-control-aware protocol such as TCP, 
the back pressure won't propagate through network boundaries. Correct? 
Is there a way to build this easily using akka-streams/reactive-streams? 

Thanks
-Soumya
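For transports without TCP's flow control, back pressure has to be carried by the application protocol itself; Reactive Streams does this with demand signalling (`request(n)`). A minimal, single-threaded sketch of that credit idea (class and method names are illustrative, not any Akka API):

```scala
import scala.collection.mutable

// A toy credit-based channel: the consumer's remaining credits bound how
// much the producer may send; offer() returning false is the back-pressure
// signal the transport must carry back to the producer.
class CreditChannel[T](initialCredits: Int) {
  private var credits = initialCredits
  private val buffer  = mutable.Queue.empty[T]

  def offer(item: T): Boolean =
    if (credits > 0) { credits -= 1; buffer.enqueue(item); true }
    else false // no demand left: producer must back off

  def poll(): Option[T] =
    if (buffer.nonEmpty) { credits += 1; Some(buffer.dequeue()) }
    else None
}

val ch = new CreditChannel[Int](2)
println(ch.offer(1)) // true
println(ch.offer(2)) // true
println(ch.offer(3)) // false: consumer has not signalled demand
println(ch.poll())   // Some(1): consuming restores one credit
println(ch.offer(3)) // true
```

A real implementation would send the credit updates over the wire (the consumer periodically tells the producer how many more elements it may emit), which is exactly what the Reactive Streams `Subscription.request(n)` contract standardizes.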

 



Re: [akka-user] Re: The best ways to resolve future inside an actor?

2015-05-16 Thread Soumya Simanta
Having an Await.result (blocking) inside your receive method will kill your 
performance. 

On Thursday, May 14, 2015 at 2:19:30 PM UTC-4, Andrew Gaydenko wrote:

 On Thursday, May 14, 2015 at 8:46:27 PM UTC+3, Patrik Nordwall wrote:

 If there are no relationship (no ordering) between the future result and 
 other incoming messages you can just use pipe, without stash.


 Let's assume at the moment we have:

 def receive = {
   case Msg(data) =>
     def job = callReturningFuture(data)(context.dispatcher)
     Await.result(job, 1000.millis)
 }

 What is the suggestion?
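The suggestion from the thread is the `pipe` pattern: run the future and have its result delivered back to the actor as an ordinary message instead of blocking with `Await.result`. A sketch against the snippet above (`Msg`, `callReturningFuture`, and the `Result` type are the hypothetical names from this thread):

```scala
import akka.actor.Actor
import akka.pattern.pipe

class Worker extends Actor {
  import context.dispatcher

  def receive = {
    case Msg(data) =>
      // no Await: the future runs on the dispatcher and its completed
      // value is piped back to this actor as a message
      callReturningFuture(data)(context.dispatcher).pipeTo(self)
    case result: Result =>
      // handle the future's value like any other message
  }
}
```

If ordering relative to other incoming messages matters, combine this with `stash()` until the piped result arrives; otherwise `pipeTo` alone is enough, as Patrik notes above.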





[akka-user] Re: Closing over java.util.concurrent.ConcurrentHashMap inside a Future ?

2015-04-25 Thread Soumya Simanta
@Jeroen - Thanks. I've not looked at agents yet. 

On Saturday, April 25, 2015 at 9:45:04 AM UTC-4, Jeroen Gordijn wrote:

 Hi Soumya, 

 Do agents cover your use case? 
 http://doc.akka.io/docs/akka/snapshot/scala/agents.html 

 Cheers, 
 Jeroen



[akka-user] Closing over java.util.concurrent.ConcurrentHashMap inside a Future ?

2015-04-25 Thread Soumya Simanta
I've an actor where I want to store my mutable state inside a map.

Clients can send Get(key:String) and Put(key:String,value:String) messages 
to this actor. 

I'm considering the following options. 

1. Don't use futures inside the Actor's receive method. This may have a 
negative impact on both latency and throughput in case I have a large 
number of gets/puts, because both will be performed in sequence. 
2. Use java.util.concurrent.ConcurrentHashMap and then invoke the gets and 
puts inside a Future. 


Given that java.util.concurrent.ConcurrentHashMap is thread-safe, I was 
wondering if it is still a problem to close over the ConcurrentHashMap 
inside a Future created for each put and get. 

PS: I'm aware of the fact that it's a really bad idea to close over mutable 
state inside a Future inside an Actor, but I'm still interested to know 
whether in this particular case it is correct or not. 
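For what it's worth, a plain-stdlib sketch of option 2 (no Akka needed to demonstrate the point): individual `put`/`get` calls on a `ConcurrentHashMap` are safe to run from closures in many concurrent Futures, because the map does its own internal synchronization. The caveat is that compound check-then-act sequences are not atomic; those need `putIfAbsent`/`compute`.

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val map = new ConcurrentHashMap[String, String]()

// Each put runs on the execution context's thread pool; the map's internal
// locking makes each individual put/get call thread-safe.
val puts = Future.traverse((1 to 100).toList) { i =>
  Future { map.put(s"key$i", s"value$i") }
}
Await.ready(puts, 10.seconds) // Await here only to keep the demo deterministic

println(map.size) // 100
```

The remaining hazard in the actor setting is not data corruption but semantics: replies may observe the map in any interleaved order, so any invariant spanning multiple keys still belongs inside the actor's single-threaded receive.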






[akka-user] Congrats to the Akka team and community for the JAX Award

2015-04-23 Thread Soumya Simanta
https://www.typesafe.com/blog/akka-wins-2015-jax-award-for-most-innovative-open-technology



[akka-user] Re: Akka as the potential API-push microservices choice

2015-01-19 Thread Soumya Simanta


On Monday, January 19, 2015 at 12:22:26 PM UTC-5, Ashesh Ambasta wrote:

 I will, however add a remark about my experience with Akka support until 
 now – I posted my question on this group only to have it up after about 10 
 hours, if I'm not wrong. I understand that you guys want to keep the spam 
 out of the group, but that is a bit of a deal breaker. That response time 
 is too long, just for support. 

 Strange. I've never seen this happen to me on this group. However, 
sometimes it happens on other Google groups (and some of them are not even 
moderated). The Akka team is usually really good at providing responses 
whenever they can. Just imagine: if you were working on your own project 
and the people who use your software asked you to respond fast, and for 
free, how would you respond? :-) 

I recommend you cross-post here as well as on stackoverflow.com (with the 
tag akka) to get better coverage. 

 



[akka-user] Re: Distributed cache with Akka

2015-01-19 Thread Soumya Simanta
I would recommend using Redis based on personal experience. The current 
stable version is not distributed, but the new 3.0 version that will be out 
in a few weeks supports clustering. 
https://groups.google.com/forum/#!topic/redis-db/_DqcFW8EAOA

There are many Scala APIs for connecting to Redis. 

Here are some other options. I personally don't have much experience with 
these. 
MapDB http://www.mapdb.org/
Hazelcast http://hazelcast.com/
Chronicle Map http://openhft.net 



On Monday, January 19, 2015 at 8:42:09 PM UTC-5, as...@indexia.co wrote:

 Hey,

 I'm trying to build a small akka app that supports authentication via 
 tokens,

 What would be the simplest approach to store user tokens in a distributed 
 map cache for all akka nodes 
 as I can't guarantee that users will be served by the same akka node,


 Thanks.




[akka-user] Re: Akka as the potential API-push microservices choice

2015-01-19 Thread Soumya Simanta


On Monday, January 19, 2015 at 11:00:03 AM UTC-5, Ashesh Ambasta wrote:

 Thank you for the time you've put in for the detailed replies, Soumya. 
 Well, I think your reply has me convinced, and the more research I put in 
 on Akka, the more ideal it seemed for our use case. In short, our 
 architecture is like this:

Glad to help. 
 


 *CORE API *(Play + Scala) – *PUSH* (Akka + Scala) ... and some other 
 microservices. What is a little concerning is the fact that we would like 
 to pass around model class objects to the Push service, and these should be 
 the same as in the Core. We don't want to replicate any kind of code, and 
 only the API data pushing logic should reside in the Push service.

If you need to pass data from core to your push API, I don't see how you can 
do that without passing your model classes. 
I think you can avoid code replication by sharing repositories (your model 
classes) if nothing changes between layers. 

 


 Moreover, I'm yet to be clear on how Akka remoting works. The Push service 
 shall reside on a separate machine altogether and the Core service should 
 be able to talk to it over the network. It seems a bit magical that you can 
 reference actors from across the network without actually *import*ing 
 anything. If I can get more clarity on that, it'll be helpful.

I think you will find all your answers in the Akka 
documentation. http://doc.akka.io/docs/akka/snapshot/scala/remoting.html

Good luck. 

-Soumya



[akka-user] Re: Akka as the potential API-push microservices choice

2015-01-17 Thread Soumya Simanta
Ashesh, 

Akka is *a toolkit and runtime for building highly concurrent, distributed, 
and resilient message-driven applications on the JVM*. So the short answer 
to all your questions is YES, you can make Akka do all these things. 

Please see inline for my other responses. I'm sure the Akka team can provide 
better guidance and correct me when I'm wrong :-)


On Friday, January 16, 2015 at 5:48:10 PM UTC-5, Ashesh Ambasta wrote:

 I had a quick question: I'm a lead developer at a small startup and we're 
 working on a solution that depends on the data in our core service to push 
 out periodic updates to other API's (like Facebook, Twitter, etc.) and we 
 want to build a microservices arch. We intend to keep the core service and 
 the data-push service across multiple instances. Is Akka going to be a good 
 choice for a problem like this?

 We've built our core API using Scala and the Play Framework and we're 
 happy with the outcomes. And while we were working on the core, we began to 
 look into Akka and it looked quite interesting, given the buzz it has 
 around the web and the success stories.

 We will, however, need some convincing about a service like the one I 
 mentioned. What we will definitely need from a system like this is;


- The system should process push requests concurrently: other requests 
shouldn't be blocked if the remote API is taking too long to respond.

 Akka supports both push and pull. The exact mechanism you are going to use 
to talk to your remote API will determine the Akka abstraction/extension 
you are going to use. For example: 
- if it is HTTP, then you can use spray.io (or the new akka-http module); 
- if the remote service is Akka as well, you can use akka-clustering; 
- if you want low latency, you can use akka-zeromq or some other mechanism; 
...


- Errors should be logged and reported, and failed requests should be 
queued back.

  Akka really shines at handling faults. Fault handling is a first-class 
citizen in the Akka world. In fact, it is one of the main selling points of 
Akka when compared to other distributed computing middleware platforms. 
Again, the exact logic of handling faults is business-specific and needs to 
be engineered for your domain/system. 


- We're talking about potentially hundreds of requests per minute in 
the beginning, we've heard good things about Akka's performance, but we're 
yet to come across a similar use case to be sufficiently convinced.

 If you *configure Akka properly* you can scale it to *orders of magnitude 
more* than 100s of requests per minute. Throughput and latency are also a 
function of how good your pipeline is. If you avoid/localize blocking and 
keep it asynchronous, Akka will give you great performance. It requires some 
effort and engineering to get there, though. Since Akka can scale 
horizontally using akka-clustering, I would strongly recommend that you 
design your application keeping this in mind. 
You can find examples on Typesafe's website or just Google for it. 


- It should be possible for the core service to communicate with a 
remotely running Akka service. For example; the Core API runs on some 
instance, and the Akka services run on some other instances. A push 
 request 
is initiated in an action of the Core service – where a push request is 
sent to Akka. The appropriate service for that API should pick that up and 
send it to the target API and report back with a confirmation.

 As I mentioned before, there are many ways of doing this because Akka is 
so flexible. If both sides are Akka you can just use Akka clustering. If 
you want loose coupling and don't care about extra latency you can use 
Spray/Akka-http and if you want really low latency you use a custom 
transport. 
 

 I'm quite keen on diving into Akka for a project like this, but as I said 
 before, I'll need some convincing.

I hope the above has helped. If you have any more specific concerns, please 
let us know. I've been working with different parts of Akka for almost 2 
years now. It's amazing how you can write concurrent abstractions that just 
work once you figure out how to play in this new world. 

On the other hand, Akka is not a silver bullet, and you can find lots of 
examples where other approaches are either simpler or better than Akka. But 
for the use case you described above I think Akka is a great fit. 

Good luck. 

Cheers,
-Soumya


 


[akka-user] Re: PersistentActor maintenance and growth.

2015-01-11 Thread Soumya Simanta
I'm not an expert and still learning how to design systems using this 
approach. 

Here are some suggestions: 

1. If you design your domain using DDD properly you will end up with a 
reasonable set of aggregates, entities and value objects. You should design 
your events to mirror your business processes. I personally feel DDD 
requires more investment and skills but if you get it right you can avoid a 
lot of complexity afterwards. 

2. Even if your core events don't change, you may need to add information 
later, which effectively changes your event, resulting in a new version. 
Something like this is likely to happen at the initial stages of your 
project, when the business concepts are still in flux. One way to address 
this is to replay the old events from the start and create a new set of 
events with the new version. 
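The replay-and-rewrite idea in point 2 is essentially event upcasting: during replay, map each old event to its newest version, giving the new fields defaults. A minimal sketch with hypothetical event types (not an Akka Persistence API):

```scala
// Two versions of the same business event; V2 added an optional field.
sealed trait Event
case class UserRegisteredV1(name: String) extends Event
case class UserRegisteredV2(name: String, email: Option[String]) extends Event

// Upcast any stored event to the newest version during replay.
def upcast(e: Event): UserRegisteredV2 = e match {
  case UserRegisteredV1(n)  => UserRegisteredV2(n, email = None) // new field gets a default
  case v2: UserRegisteredV2 => v2
}

println(upcast(UserRegisteredV1("ada"))) // UserRegisteredV2(ada,None)
```

Keeping the old case classes on the classpath (Robert's option 2 below) and applying an upcast like this at deserialization time is a common way to sustain backward compatibility without rewriting the journal.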

I would also be very interested to hear what others have to say on this.

-Soumya


On Sunday, January 11, 2015 at 1:56:01 PM UTC-5, Robert Budźko wrote:

 Hi,

 Recently, I've been playing with persistent actors a little bit. I've 
 decided to design my PoC in such a way that some actors are distributed 
 entities (state holders). Usually, a non-persistent actor supervises a group 
 of entity-actors (of the same type) and asks them to handle commands 
 (usually changing the state). I was delighted with this approach in the 
 first place, but now I doubt my design, because I've encountered the problem 
 of migration when a persistent actor is being developed/refactored, so both 
 commands and events are changed. Additionally, I'm a little bit unhappy 
 about serialization in the form of a blob.

 I've got a couple of ideas for how to solve my problem, but none of them 
 convinces me 100%:
 1) Prepare a custom serializer which is able to serialize my events into, 
 for example, a relational model, so I can migrate the database when the 
 implementation of an event is changed (benefit). In this case, I don't like 
 the fact that I have to add a serialization implementation for each new 
 agent.
 2) I was thinking about recognizing the event version in the serializer and 
 having all versions of events on the classpath, so it will sustain backward 
 compatibility. I'm not even sure if it is possible (I have not confirmed it 
 yet). I'm also afraid that the code might become nasty after a couple of 
 versions.
 3) I wonder if it is possible to somehow snapshot state into the latest 
 version so that, after the next start of a node, old versions of events are 
 not required any more.

 Problems I'm trying to solve are:
 1) The possibility of migrating persisted state (i.e. executing SQL in a 
 relational database).
 2) The possibility of accessing state without deserialization into an event 
 class (i.e. peeking into the relational database).

 Do you know any patterns or approaches which could lead to a solution of 
 those problems? Maybe my design is not valid in the end :-] .

 Thank you,
 Robert

 PS: A relational database was used as an example of a storage different 
 from the default one. It could be any other storage.




Re: [akka-user] Re: How to Properly Manage Akka's Memory Usage with 40 Million Lines Task?

2015-01-10 Thread Soumya Simanta
Allen,

Here is one definition of back pressure:
http://www.reactivemanifesto.org/glossary#Back-Pressure
For example, in your initial question about memory errors, the back
pressure mechanism of akka-streams allows processing your data even with a
limited memory budget.

I have personally found this presentation (
https://www.youtube.com/watch?v=khmVMvlP_QA
) by Roland Kuhn very helpful in understanding the motivations and core
concepts behind akka-streams, which is an implementation of reactive streams
(http://www.reactive-streams.org/).


-Soumya


On Sat, Jan 10, 2015 at 7:42 PM, Allen Nie aiming...@gmail.com wrote:

 Hi Endre,

 That's a very valid suggestion. I'm quite new to Akka (I've finished about
 35% of its docs). I'm still trying to understand how to properly
 parallelize tasks. You and Viktor mentioned back-pressure. Can you go a bit
 deeper into that? For example, what is back-pressure, and how do I build it
 into my actor solutions? (Info links would be all I need.) I asked a similar
 question on StackOverflow but no one could point me in the right
 direction.

 Thank you for linking Akka-stream's docs.

 Allen


 On Saturday, January 10, 2015 at 5:38:42 AM UTC-5, drewhk wrote:

 Hi,

 On Fri, Jan 9, 2015 at 10:53 PM, Allen Nie aimi...@gmail.com wrote:

 Hey Viktor,

 I'm trying to use Akka to parallelize this process. There shouldn't
 be any bottleneck, and I don't understand why I got a memory overflow with
 my first version (the actor version). The main task is to read in a line,
 break it up, turn each segment (a string) into an integer, and then print it
 out to a CSV file (a vectorization process).

 def processLine(line: String): Unit = {

   val vector: ListBuffer[String] = ListBuffer()
   val segs = line.split(",")

   println(segs(0))

   (1 to segs.length - 1).map { i =>
     val factorArray = dictionaries(i - 1)
     vector += factorArray._2.indexOf(segs(i)).toString // get the factor level of the string
   }

   timer ! OneDone

   printer ! Print(vector.toList)
 }


 When I'm doing this in pure Akka (with actors), since I created 40
 million objects: Row(line: String), I get memory overflow issue.


 No surprise there, you just slurp up all rows faster than the actors can
 keep up processing them, so most of them are in a mailbox. In fact if your
 actors do something trivially simple, the whole overhead of asynchronously
 passing elements to the actors might be larger than what you gain. In these
 cases it is recommended to pass batches of Rows instead of one-by-one.
 Remember, parallelisation only gains when the overhead of it is smaller
 than the task it parallelizes.



 If I use Akka-stream, there is no memory overflow issue, but the
 performance is too similar to the non-parallelized version (even slower).


 No surprise there either, you did nothing to parallelize or pipeline any
 computation in the stream, so you get the overhead of asynchronous
 processing and none of the benefits of it (but at least you get
 backpressure).

 You have a few approaches to get the benefits of multi-core processing
 with streams:
  - if you have multiple processing steps for a row you can pipeline them,
 see the intro part of this doc page: 
 http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-rate.html
  - you can use mapAsync to have similar effects but with one computation
 step, see here: 
 http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-integrations.html#Illustrating_ordering_and_parallelism
  - you can explicitly add fan-out elements to parallelise among multiple
 explicit workers, see here: 
 http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-cookbook.html#Balancing_jobs_to_a_fixed_pool_of_workers

 Overall, for this kind of tasks I recommend using Streams, but you need
 to read the documentation first to understand how it works.

 -Endre
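The mapAsync option above can be sketched as follows. Note this is written against a later, stable Akka Streams API rather than the 1.0-M2 experimental one linked in the thread, and the system/element names are illustrative:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.Source
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("demo")
import system.dispatcher

// up to 4 elements are processed concurrently on the dispatcher,
// while downstream still receives them in the original order
Source(1 to 10)
  .mapAsync(parallelism = 4)(n => Future(n * 2))
  .runForeach(println)
  .onComplete(_ => system.terminate())
```

Because mapAsync preserves order and is demand-driven, it gives multi-core throughput without losing the back pressure that kept the memory bounded in the first place; `mapAsyncUnordered` trades ordering for slightly better utilization.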



 It's my first time using Akka-stream. So I'm unfamiliar with the
 optimization you were talking about.

 Sincerely,
 Allen

 On Friday, January 9, 2015 at 4:03:13 PM UTC-5, √ wrote:

 Hi Allen,

 What's the bottleneck?
 Have you tried enabling the experimental optimizations?

 On Fri, Jan 9, 2015 at 9:52 PM, Allen Nie aimi...@gmail.com wrote:

 Thank you Soumya,

I think Akka-streams is the way to go. However, I would also
 appreciate some performance boost as well - still have 40 million lines to
 go through! But thanks anyway!


 On Friday, January 9, 2015 at 12:43:49 PM UTC-5, Soumya Simanta wrote:

 I would recommend using the Akka-streams API for this.
 Here is sample. I was able to process a 1G file with around 1.5
 million records in *20MB* of memory. The file read and the writing
 on the console rates are different but the streams API handles that.  
 This
 is not the fastest but you at least won't run out of memory.




[akka-user] Re: How to Properly Manage Akka's Memory Usage with 40 Million Lines Task?

2015-01-09 Thread Soumya Simanta
I would recommend using the Akka-streams API for this. 
Here is a sample. I was able to process a 1GB file with around 1.5 million 
records in *20MB* of memory. The file-read and console-write rates are 
different, but the streams API handles that. This is not the fastest, but at 
least you won't run out of memory. 


https://lh6.googleusercontent.com/-zdX0n1pvueE/VLATDja3K4I/v18/BH7V1RAuxT8/s1600/1gb_file_processing.png

import java.io.FileInputStream
import java.util.Scanner

import akka.actor.ActorSystem
import akka.stream.{FlowMaterializer, MaterializerSettings}
import akka.stream.scaladsl.Source

import scala.util.Try


object StreamingFileReader extends App {

  val inputStream = new FileInputStream("/path/to/file")
  val sc = new Scanner(inputStream, "UTF-8")

  implicit val system = ActorSystem("Sys")
  val settings = MaterializerSettings(system)
  implicit val materializer =
    FlowMaterializer(settings.copy(maxInputBufferSize = 256, initialInputBufferSize = 256))

  val fileSource = Source(() => Iterator.continually(sc.nextLine()))

  import system.dispatcher

  fileSource.map { line =>
    line // do nothing; the foreach below prints each line
  }.foreach(println).onComplete { _ =>
    Try {
      sc.close()
      inputStream.close()
    }
    system.shutdown()
  }
}




On Friday, January 9, 2015 at 10:53:33 AM UTC-5, Allen Nie wrote:

 Hi,

 I am trying to process a CSV file with 40 million lines of data in 
 there. It's a 5GB file. I'm trying to use Akka to parallelize the 
 task. However, it seems like I can't stop the rapid memory growth. It 
 expanded from 1GB to almost 15GB (the limit I set) in under 5 minutes. This 
 is the code in my main() method:

 val inputStream = new FileInputStream("E:\\Allen\\DataScience\\train\\train.csv")
 val sc = new Scanner(inputStream, "UTF-8")
 var counter = 0
 while (sc.hasNextLine) {

   rowActors(counter % 20) ! Row(sc.nextLine())

   counter += 1
 }

 sc.close()
 inputStream.close()

 Someone pointed out that I was essentially creating 40 million Row 
 objects, which naturally will take up a lot of space. My row actor is not 
 doing much: just transforming each line into an array of integers 
 (if you are familiar with the concept of vectorizing, that's what I'm 
 doing). Then the transformed array gets printed out. Done. I originally 
 thought there was a memory leak, but maybe I'm not managing memory right. 
 Can I get any wise suggestions from the Akka experts here?

 

 http://i.stack.imgur.com/yQ4xx.png



-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Re: How to Properly Manage Akka's Memory Usage with 40 Million Lines Task?

2015-01-09 Thread Soumya Simanta
Allen,

What are your constraints? Does the output CSV have to maintain the order 
of the input file? Do you have an upper bound?

I don't think you are CPU bound, so you need to look at ways of 
reading/writing faster. Maybe async IO using NIO can help. 
You can split the input and process it in parallel if you don't mind 
multiple output files.  

I'm not aware of any way of doing file IO faster using Akka. Maybe the Akka 
folks can provide better guidance there. 
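The split-and-process-in-parallel idea above can be sketched in plain Scala (no Akka involved; `vectorize` is a hypothetical stand-in for the real per-line transform, and four slices is an arbitrary choice):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// Hypothetical stand-in for the real per-line vectorization step
def vectorize(line: String): Array[Int] = line.split(",").map(_.length)

val lines = (1 to 1000).map(i => s"a,bb,ccc$i").toList

// Split the input into four slices and vectorize each slice in its own Future
val slices  = lines.grouped(lines.size / 4).toList
val vectors = Await.result(
  Future.sequence(slices.map(s => Future(s.map(vectorize)))), 30.seconds).flatten
```

Each slice could just as well write to its own output file, which is where the "multiple output files" trade-off comes in.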

BTW how much memory are you giving your akka-streams program? 

-Soumya


On Friday, January 9, 2015 at 4:53:29 PM UTC-5, Allen Nie wrote:

 Hey Viktor,

 I'm trying to use Akka to parallelize this process. There shouldn't be 
 any bottleneck, and I don't understand why I got a memory overflow with my 
 first version (the actor version). The main task is to read in a line, break 
 it up, and turn each segment (a string) into an integer, then print it out 
 to a CSV file (a vectorization process).

def processLine(line: String): Unit = {

  val vector: ListBuffer[String] = ListBuffer()
  val segs = line.split(",")

  println(segs(0))

  (1 to segs.length - 1).map { i =>
    val factorArray = dictionaries(i - 1)
    vector += factorArray._2.indexOf(segs(i)).toString // get the factor level of the string
  }

  timer ! OneDone

  printer ! Print(vector.toList)
}


 When I'm doing this in pure Akka (with actors), since I created 40 
 million objects: Row(line: String), I get a memory overflow. If I use 
 Akka Streams, there is no memory overflow, but the performance is too 
 similar to the non-parallelized version (it's even slower).

 It's my first time using Akka-stream. So I'm unfamiliar with the 
 optimization you were talking about.

 Sincerely,
 Allen

 On Friday, January 9, 2015 at 4:03:13 PM UTC-5, √ wrote:

 Hi Allen,

 What's the bottleneck?
 Have you tried enabling the experimental optimizations?

 On Fri, Jan 9, 2015 at 9:52 PM, Allen Nie aimi...@gmail.com wrote:

 Thank you Soumya, 

I think Akka-streams is the way to go. However, I would also 
 appreciate some performance boost as well - still have 40 million lines to 
 go through! But thanks anyway!


 On Friday, January 9, 2015 at 12:43:49 PM UTC-5, Soumya Simanta wrote:

 I would recommend using the Akka-streams API for this. 
 Here is a sample. I was able to process a 1 GB file with around 1.5 million 
 records in *20 MB* of memory. The file read and the console write 
 rates are different, but the streams API handles that. This is not 
 the fastest, but at least you won't run out of memory. 



 https://lh6.googleusercontent.com/-zdX0n1pvueE/VLATDja3K4I/v18/BH7V1RAuxT8/s1600/1gb_file_processing.png

 import java.io.FileInputStream
 import java.util.Scanner

 import akka.actor.ActorSystem
 import akka.stream.{FlowMaterializer, MaterializerSettings}
 import akka.stream.scaladsl.Source

 import scala.util.Try


 object StreamingFileReader extends App {


   val inputStream = new FileInputStream("/path/to/file")
   val sc = new Scanner(inputStream, "UTF-8")

   implicit val system = ActorSystem("Sys")
   val settings = MaterializerSettings(system)
   implicit val materializer = FlowMaterializer(
     settings.copy(maxInputBufferSize = 256, initialInputBufferSize = 256))

   val fileSource = Source(() => Iterator.continually(sc.nextLine()))

   import system.dispatcher

   fileSource.map { line =>
     line // no transformation; the foreach below prints each line
   }.foreach(println).onComplete { _ =>
     Try {
       sc.close()
       inputStream.close()
     }
     system.shutdown()
   }
 }




 On Friday, January 9, 2015 at 10:53:33 AM UTC-5, Allen Nie wrote:

 Hi,

 I am trying to process a CSV file with 40 million lines of data. It's a 
 5 GB file. I'm trying to use Akka to parallelize the task. However, it 
 seems like I can't stop the rapid memory growth: it expanded from 1 GB to 
 almost 15 GB (the limit I set) in under 5 minutes. This is the code in my 
 main() method:

 val inputStream = new FileInputStream("E:\\Allen\\DataScience\\train\\train.csv")
 val sc = new Scanner(inputStream, "UTF-8")
 var counter = 0
 while (sc.hasNextLine) {
   rowActors(counter % 20) ! Row(sc.nextLine())
   counter += 1
 }
 sc.close()
 inputStream.close()

 Someone pointed out that I was essentially creating 40 million Row 
 objects, which naturally will take up a lot of space. My row actor is not 
 doing much. Just simply transforming each line into an array of integers 
 (if you are familiar with the concept of vectorizing, that's what I'm 
 doing). Then the transformed array gets printed out. Done. I originally 
 thought there was a memory leak but maybe I'm not managing memory right. 
 Can I get any wise suggestions from the Akka experts here??

 

 http://i.stack.imgur.com/yQ4xx.png


Re: [akka-user] [akka-persistance] - Question about the example in documentation

2015-01-02 Thread Soumya Simanta
Thank you Konrad. I think a simpler example would help, especially since it's 
the first example in the documentation. 

I'll submit a pull request for this soon. 

-Soumya


On Friday, January 2, 2015 3:24:45 AM UTC-5, Konrad Malawski wrote:

 Hi there,
 Your reasoning is correct - you usually do not keep all events in the 
 actor. This is just a simple example, not a full blown app. Maybe a better 
 example would be a summing actor here, as it does not have to keep all the 
 state.

 Feel free to open a ticket on akka/akka on github or submit a pull request 
 making this more clear :-)

 -- 
 Konrad 'ktoso' Malawski
 (sent from my mobile)
 On 2 Jan 2015 07:27, Soumya Simanta soumya@gmail.com wrote:

 I'm very new to Akka persistence, so please pardon me if this is a really 
 simple question.

 In this example (part of the documentation 
 http://doc.akka.io/docs/akka/snapshot/scala/persistence.html). Example 
 state is defined as following. 

 case class ExampleState(events: List[String] = Nil) {
   def updated(evt: Evt): ExampleState = copy(evt.data :: events)

   def size: Int = events.length

   override def toString: String = events.reverse.toString
 }


 events is a List[String] of all the events till *now*. Now imagine if 
 the state of this actor was updated *a million times*: events will then 
 be a List of a million Strings (which may create memory pressure in the 
 case of large string values). 

 My understanding is that we *don't* need to keep all the events inside 
 the current state of the PersistentActor. They are stored in the journal (and 
 snapshot) and can be replayed by the PersistentActor or ViewActor to 
 recreate the current state in case of a crash or to create a view. 

 Again, I understand in this example, ALL events are stored in the current 
 state. 
 But I don't see using this pattern (of storing all the events data in the 
 current state) to represent state for real use cases.  


 Thanks
 -Soumya







Re: [akka-user] Recording latency of operations that return a Future inside an akka stream

2015-01-01 Thread Soumya Simanta
Endre, 

Thank you for responding. 
I was trying to do something like this. However, I'm not sure if what you 
have mentioned above will work because the Redis set API returns a 
Future[Boolean]. So in the map after my mapAsyncUnordered I've no reference 
to the operation that finished before that unless the Redis returns a 
reference in the response to each call. Please let me know if this is not 
correct. 

However, if I use mapAsync the order should be maintained, correct? Do you 
have any idea how big the performance difference is between mapAsync and 
mapAsyncUnordered? I can maintain two lists (before and after). I'm yet to 
try the other approach that Konrad pointed out. 

What I'm learning is - doing things asynchronously is fast and fun BUT 
requires a different mindset :-) 

Thanks. 
-Soumya





On Thursday, January 1, 2015 3:48:54 AM UTC-5, Akka Team wrote:

 Hi,

 One pitfall in your simple map approach is that you might measure the 
 wrong values. Remember that mapAsyncUnordered is, well, unordered and 
 batching, so you cannot expect that the elements come out in the same order 
 as they entered. One approach would be to record the start time of 
 elements in a concurrent Map in a map stage in front of the redis write, 
 then record the end time of elements in a map stage after, reading from 
 the Map of start times. 

 The drawback of the above approach is that on top of what you want to 
 measure, you get overhead from:

  - using a concurrent map
  - adding two map stages

 The measurement pseudocode can look like this:

 .map { id => putInMap(id, currentTime); id }
 .mapAsync(...)
 .map { id => currentTime - getFromMap(id) }
 .fold(0.0)(_ + _)
 .map(_ / NumberOfElements)
 // Now you have a Future of the average time.

 Of course you can also just collect the measurements in a Seq in the fold 
 instead of averaging, and then you can do whatever analysis you want after.
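Endre's start-time-map idea can be sketched in plain Scala, outside of any Akka Streams stages (`asyncOp` is an illustrative stand-in for the Redis write; all names here are made up for the sketch):

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// Start times keyed by element id, recorded before the async call begins
val startTimes = new ConcurrentHashMap[Int, Long]()

// Illustrative stand-in for the real async Redis write
def asyncOp(id: Int): Future[Int] = Future { Thread.sleep(5); id }

def timed(id: Int): Future[Long] = {
  startTimes.put(id, System.nanoTime())
  asyncOp(id).map { doneId =>
    // Look up the start time by the id that came back, not by arrival order,
    // so out-of-order completion still pairs each end time with its start
    System.nanoTime() - startTimes.remove(doneId)
  }
}

val latencies = Await.result(Future.sequence((1 to 10).map(timed)), 30.seconds)
val avgNanos  = latencies.sum / latencies.size
```

Keying by the element's own id is what makes this safe even when completions are unordered, which is the point Endre raises about mapAsyncUnordered.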

 -Endre

 On Sun, Dec 28, 2014 at 2:50 AM, Soumya Simanta soumya@gmail.com wrote:

 Konrad, 

 Thank you for pointing me the correct direction. I'll give it a shot.

 -Soumya


 On Saturday, December 27, 2014 7:48:30 AM UTC-5, Konrad Malawski wrote:

 Hi Soumya,

 I don’t think what you’ll end up measuring this way will be very useful. 
 I mean, between the completion of the future and the triggering of the map 
 there are multiple asynchronous boundaries… So you won’t be measuring how 
 fast the set operation was, but how much time was between these 
 asynchronous boundaries - which could have been backpressured by the way.


 I suggest directly wrapping the set call with your measurement logic 
 instead - since that is what you want to measure it seems.


 By the way, we do have a “timed” element, in our extras section: 
 https://github.com/akka/akka/blob/release-2.3-dev/akka-stream/src/main/scala/akka/stream/extra/Timed.scala 
 You can `import Timed._` and then use it as shown here: 
 https://github.com/akka/akka/blob/release-2.3-dev/akka-stream-tests/src/test/scala/akka/stream/extra/FlowTimedSpec.scala

 It’s a rather old element and I’m not sure if we’ll be keeping it, but 
 you can use it as a source of inspiration in case you end up needing that 
 kind of measurement.




 On 26 December 2014 at 05:46:55, Soumya Simanta (soumya@gmail.com) 
 wrote:
   
  This is related to this 
 https://groups.google.com/forum/#!topic/akka-user/NrSkEwMrS3s thread 
 but sufficiently different that I decided to create new thread. Hope that's 
 okay. 

  I would like to create a histogram of latency of a large number of set 
 operations 
 ( set returns a Future[Boolean]) using LatencyUtils 
 https://github.com/LatencyUtils/LatencyUtils 

  Basically I need to start recording the time before the set operation 
 (inside mapAsyncUnordered(k => redis.set(k + rnd, message))) and then 
 somehow record the end time in a map operation (.map(/* record the end 
 time here */)) after this. I'm having a hard time trying to figure this 
 out. 

 My understanding is that even though the mapAsyncUnordered doesn't 
 maintain the order of operations, the map following the mapAsyncUnordered 
 will maintain the order from the previous stage because of TCP maintaining 
 the order. Is this correct? 


  val redis = RedisClient("localhost")

  val random1 = UUID.randomUUID().toString

  def insertValues(rnd: String): Flow[Int, Boolean] = {
    Flow[Int].mapAsyncUnordered(k => redis.set(k + rnd, message))
      .map(result => /* record the end time here */ result)
  }

  val blackhole = BlackholeSink

  val maxSeq = 500
  val seqSource = Source(() => (1 to maxSeq).iterator)
  val streamWithSeqSource = seqSource.via(insertValues(random1)).runWith(blackhole)

  

[akka-user] [akka-persistance] - Question about the example in documentation

2015-01-01 Thread Soumya Simanta
I'm very new to Akka persistence, so please pardon me if this is a really 
simple question.

In this example (part of the documentation 
http://doc.akka.io/docs/akka/snapshot/scala/persistence.html). Example 
state is defined as following. 

case class ExampleState(events: List[String] = Nil) {
  def updated(evt: Evt): ExampleState = copy(evt.data :: events)

  def size: Int = events.length

  override def toString: String = events.reverse.toString
}


events is a List[String] of all the events till *now*. Now imagine if the 
state of this actor was updated *a million times*: events will then be a 
List of a million Strings (which may create memory pressure in the case of 
large string values). 

My understanding is that we *don't* need to keep all the events inside the 
current state of the PersistentActor. They are stored in the journal (and 
snapshot) and can be replayed by the PersistentActor or ViewActor to 
recreate the current state in case of a crash or to create a view. 
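The replay idea can be sketched in plain Scala, with no akka-persistence involved (`CountState` is a made-up summing state, along the lines Konrad suggests below): the journal holds the events, and the in-memory state is just a fold over them, so the actor need not retain the full event list:

```scala
// A minimal event-sourced state: the journal (here a plain List) holds the
// events; the actor's in-memory state is just the reduced value.
case class Evt(data: String)

case class CountState(count: Int = 0) {
  def updated(evt: Evt): CountState = copy(count = count + 1)
}

// Recovery: fold the journaled events into the state, oldest first
val journal   = List(Evt("a"), Evt("b"), Evt("c"))
val recovered = journal.foldLeft(CountState())((s, e) => s.updated(e))
```

The memory held by the actor is then proportional to the reduced state, not to the number of events ever applied.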

Again, I understand in this example, ALL events are stored in the current 
state. 
But I don't see using this pattern (of storing all the events data in the 
current state) to represent state for real use cases.  


Thanks
-Soumya




[akka-user] Re: Design of Persistent Actor for CQRS

2014-12-31 Thread Soumya Simanta


On Wednesday, December 31, 2014 12:49:06 PM UTC-5, Greg Young wrote:

 To be fair if you are selling 100k distinct items per second of the same 
 product you will most likely have much larger operational issues than 
 making sure it's consistent. I'm imagining your delivery and warehousing 
 infrastructure (physical not software). Of course I'd imagine at 100k sales 
 per second of the same product you will also have a substantial budget to 
 deal with the relatively minor software issues that may arise. 


Completely agree. Maybe I didn't make it clear: what I'm trying to do is 
build a toy prototype to get a better understanding of how akka-persistence, 
CQRS/ES, and DDD can replace traditional solutions, and what the limits are. 
While 100K is unlikely to happen in this domain, it may appear in other 
software-only domains.   

 

 Greg



[akka-user] Re: Design of Persistent Actor for CQRS

2014-12-30 Thread Soumya Simanta
Andrew, 

Many thanks for your time and detailed response. This is very helpful, as it 
gives me the direction in which I want to proceed. 

I'm aware that with CQRS I get eventual consistency. The scenario and 
requirements are artificial, to provide a working use case. My main goal is 
to see how far I can take this idea in terms of scale and performance. I'll 
report back with findings and numbers once I have a working version. 

Thanks again ! 

-Soumya
 

On Tuesday, December 30, 2014 6:37:41 AM UTC-5, Andrew Easter wrote:

 Firstly, your proposal is to use event sourcing and CQRS. Using 
 akka-persistence to implement this approach necessitates you being 
 comfortable with eventual consistency - i.e. there's no guarantee that your 
 views (the Q in CQRS) will be up to date with the domain model (stored as 
 events with current state represented within your persistent actor).

 In a majority of cases, eventual consistency is completely tolerable, 
 despite one's initial thought to be that the write model and the view model 
 must be completely in sync. There are definitely some ways you can tolerate 
 eventual consistency given your requirements.

 For the aggregate representing the flash sale, you can view an incoming 
 command as an attempt to reserve an item for a specified user/cart. As your 
 actor will only process one command at a time, you can be sure that you 
 will not commit to sell beyond the quantity available. So, I think 
 modelling each flash sale as a single persistent actor can work for you, 
 and, from my understanding, there's reason to be confident that this 
 wouldn't be a critical bottleneck given the number of potential users you 
 talk about.

 You have a couple of options with regard to what you do about notifying a 
 user as to whether they've successfully reserved an item. You can either 
 return a message back from your persistent actor to acknowledge 
 success/failure, and/or you can implement a true CQRS approach where some 
 view model is updated (in response to a generated event - e.g. ItemReserved 
 or ItemNotReserved) that the query side can look up to determine 
 success/failure. You'll probably need to give some thought to the idea that 
 an item reservation is time limited, i.e. has an expiry, such that it can 
 be released back if the user doesn't actually complete checkout within the 
 allotted time.

 With this in mind, I'd be inclined to think about introducing a specific 
 domain concept to represent an item reservation, storing those details 
 within the actor state, rather than just having a single quantity that's 
 decremented. Then, on the view side, your reservations view model can be 
 queried to see whether an item was successfully reserved, who it was 
 reserved for, and include details of when the reservation will expire. You 
 could correlate a reservation back with an originating command by using 
 some kind of unique id that was passed in with the command. I'd also advise 
 keeping a separate view model as a high level overview of the overall flash 
 sale, including quantity of the item remaining - this view would listen to 
 all events generated via the flash sale actor, e.g. 
 FlashSaleCreated(quantity: Int), ItemReserved, ItemsSoldOut etc. You could 
 run a periodic job within the persistent actor to clear up expired 
 reservations = i.e. if a reservation has expired, send a 
 ExpireItemReservation command to self, and allow it to be processed, 
 leading to the quantity being incremented and an ItemReservationExpired 
 event. Something along those lines, anyway!
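A rough plain-Scala model of the reservation-with-expiry idea above (the names, the TTL handling, and the whole shape are illustrative only, not akka-persistence code):

```scala
// Illustrative reservation model: reserving decrements stock, and expired
// reservations return their stock when a cleanup pass runs.
case class Reservation(userId: String, expiresAt: Long)

case class SaleState(available: Int, reservations: Map[String, Reservation] = Map.empty) {
  def reserve(id: String, userId: String, now: Long, ttlMs: Long): SaleState =
    if (available > 0 && !reservations.contains(id))
      copy(available = available - 1,
           reservations = reservations + (id -> Reservation(userId, now + ttlMs)))
    else this // sold out or duplicate reservation id: no change

  def expire(now: Long): SaleState = {
    val (dead, live) = reservations.partition(_._2.expiresAt <= now)
    copy(available = available + dead.size, reservations = live)
  }
}

val s0 = SaleState(available = 2)
val s1 = s0.reserve("r1", "alice", now = 0L,  ttlMs = 100L)
val s2 = s1.reserve("r2", "bob",   now = 10L, ttlMs = 100L)
val s3 = s2.expire(now = 150L) // both reservations lapsed, stock returned
```

In the actor version, the `expire` pass would correspond to the periodic ExpireItemReservation command Andrew describes, applied as an ordinary event.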

 As far as the UI showing a Sold Out button, this is where eventual 
 consistency is perfectly tolerable for you (in my opinion). You obviously 
 wouldn't tie the true availability of an item to the state of the UI 
 button. It doesn't really matter if, for some users, the button appears 
 enabled but their reservation still ultimately fails once they click it. 
 You just have to ensure you message appropriately to explain that, 
 unfortunately, the item was sold out after all.

 If you haven't already looked into the awesome Akka ClusterSharding 
 extension, then I recommend you do so. You'll see how you could scale out 
 many concurrent flash sales across a whole cluster.

 One problem you're maybe likely to run into is the current state of the 
 PersistentView in akka-persistence. It doesn't currently support projecting 
 a view across multiple persistent actors of the same 'type', thus making it 
 more difficult to achieve a true CQRS implementation. The issue here is 
 that you'd typically want, in your case, a view to project data from 
 multiple flash sales. There are plans to introduce this feature in 2015. 
 You just need to be aware that, currently, you'd have to find alternative 
 ways to achieve this functionality until it's available within the core of 
 akka-persistence.

 Hope some of this made some sense to you?!



 On Tuesday, 30 December 2014 02:20:05 UTC, Soumya Simanta wrote

[akka-user] Re: Design of Persistent Actor for CQRS

2014-12-30 Thread Soumya Simanta
Greg,

Thank you for responding. I wanted some validation of my design and the 
selection of my components for the implementation. 
Based on Andrew's and your responses, I assume that my basic domain model is 
correct and that I'm moving in the right direction.  

On Tuesday, December 30, 2014 1:03:35 PM UTC-5, Greg Young wrote:

 What exactly is the issue with a single actor controlling an inventory 
 count? 

Given that I'm new to akka-persistence and CQRS/ES, I was not sure whether 
this was going to be an issue or not. So I just wanted to get some idea before 
I started implementing it. 
 

 Have you benchmarked any of this, or are you just getting prematurely worried? 
 A quick benchmark here was over 500k/second. With persistence, naively, it was 
 still 100k.

Not yet. I'm working on it right now and will report back once I have some 
real numbers. Having a quick comparative benchmark really helps. Can you 
tell me which datastore you used for the journal and the snapshots? 
 

 You must have a huge number of users to get 100k/s items added to carts! 

Correct. I agree, 100k/s is a big number and going beyond that is unlikely. 
If it goes beyond 100k/s then we can add additional constraints, for example: 
- having sub-inventories that get more items from a master inventory on 
demand (in this case, each sub-inventory actor would be a persistent actor); 
- dividing users by geography. 

thanks again for pitching in. 

-Soumya

 



[akka-user] Design of Persistent Actor for CQRS

2014-12-29 Thread Soumya Simanta
I'm trying to build a prototype using DDD/CQRS and do some benchmarking.

Here are the details of the scenario (flash sale capability) I want to 
model. 

1. The primary constraint is that there are limited quantity of each item 
(only 1000)  
2. Invitation for the flash sale is sent to  100K users 
3. 20K invited users log in at the same time to buy the item at a given 
time say 10 AM. 
4. Only 1000 of these 20K users should be able to buy the item. 
5. To keep it simple I want to start with only one item. 
6. All this can happen in the* order of few seconds.* 


I'm creating an AggregateRoot (a PersistentActor) called 
ItemInventoryAggregate. This actor *enforces* the check that *an item can 
only be added to a cart as long as there are available quantities left*. My 
concern is that this check lives in only a *single* actor, thereby making 
it a bottleneck (?). I want to verify that I'm on the right track. Will 
this approach scale, or do I need to rethink my design? 

On the view side, I want to make sure that as soon as an item is unavailable 
I show a "sold out" button on the UI. As you can see, all 1000 items 
can be sold out in 1-2 seconds. Will the view side be consistent (i.e., will 
it see that all items are sold out) in this window of time?   



object ItemInventoryAggregate {


  //state
  case class ItemQuantity(quantity: Int = 0) extends State

  //commands
  case class AddToInventory(quantity: Int) extends Command

  case object AddItemToCart extends Command

  case object RemoveItemFromCart extends Command

  //events
  case class ItemsAddedToInventory(quantity: Int) extends Event

  case object ItemAddedToCart extends Event

  case object ItemRemovedFromCart extends Event

  def props(id: String, name: String): Props = 
Props[ItemInventoryAggregate](new ItemInventoryAggregate(id, name))

}


...
  override def updateState(evt: AggregateRoot.Event): Unit = evt match {
    case ItemsAddedToInventory(newItemsQty: Int) =>
      if (newItemsQty > 0) {
        state match {
          case s: ItemQuantity => state = s.copy(quantity = s.quantity + newItemsQty)
          case _               => state = ItemQuantity(newItemsQty)
        }
        context.become(created)
      }
    case ItemAddedToCart =>
      state match {
        case s: ItemQuantity => if (s.quantity > 0) state = s.copy(quantity = s.quantity - 1)
        case _               => // nothing
      }
    case ItemRemovedFromCart =>
      state match {
        case s: ItemQuantity => state = s.copy(quantity = s.quantity + 1)
        case _               => // nothing
      }
  }
...
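The guarded transitions above can be sketched as a plain, runnable Scala value (actor and persistence machinery omitted; this only illustrates the over-sell check, with illustrative names):

```scala
sealed trait Event
case class ItemsAddedToInventory(quantity: Int) extends Event
case object ItemAddedToCart extends Event
case object ItemRemovedFromCart extends Event

case class ItemQuantity(quantity: Int = 0) {
  def update(evt: Event): ItemQuantity = evt match {
    case ItemsAddedToInventory(n) if n > 0 => copy(quantity = quantity + n)
    case ItemAddedToCart if quantity > 0   => copy(quantity = quantity - 1)
    case ItemRemovedFromCart               => copy(quantity = quantity + 1)
    case _                                 => this // over-sell attempts are ignored
  }
}

// Five add-to-cart attempts against a stock of three: only three succeed
val start = ItemQuantity().update(ItemsAddedToInventory(3))
val after = (1 to 5).foldLeft(start)((s, _) => s.update(ItemAddedToCart))
```

Because a persistent actor applies events one at a time, this single fold models exactly the serialization that prevents committing beyond the available quantity.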


I'm planning to use akka, spray/akka-http, akka-persistence, play2, 
cassandra-journal for implementing this. 
My understanding is that each of these components is quite performant and 
can be used to achieve the requirements I've outlined above. 





[akka-user] Re: Is there any need to remove message received from message queue?

2014-12-29 Thread Soumya Simanta
Krishna,

It looks like your consumers cannot keep up with your faster producer(s). 

Some things to look at: 
1. Make sure none of your actors are blocking. You can use VisualVM or 
YourKit to figure this out quickly.
2. If your actors are not blocking, you can try adding more consumer 
actors (using an Akka router) and see if this helps.
3. Finally, you can use the new Reactive Streams API to add back pressure 
to your pipeline. 


HTH
-Soumya


On Monday, December 29, 2014 11:22:27 PM UTC-5, Krishna Kadam wrote:

 Hi all,
 I am using Akka actors for my master's project. I have a data sender 
 which sends messages to Akka actors; while using Akka actors, my machine's 
 RAM usage gradually increases to the maximum limit. Is there any need to 
 manually delete the message received in an Akka actor's message queue after 
 processing it in the onReceive() method?

 Thanks &amp; Regards
 Krishna Kadam




[akka-user] Re: Is there any need to remove message received from message queue?

2014-12-29 Thread Soumya Simanta


On Tuesday, December 30, 2014 1:39:01 AM UTC-5, Krishna Kadam wrote:

 Hi Soumya,
  Please suggest, where can I find material related to 3rd 
 point 


http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-introduction.html
 

 and if in my example, blocking is a problem then, is there any way to 
 avoid this problem?

 Here are some ways that I've used to prevent blocking in my codebase and 
improve the performance of my actors in general. 

1. Use Futures whenever possible (Caveat : don't *close* over Actor's 
mutable state in the Actor's receive method) 
2. Use non-blocking IO whenever available. For example, if a datastore 
provides a non-blocking driver use that. E.g., ReactiveMongo, Rediscala ... 
3. Use non-blocking Logging. For example, Logback has a non-blocking async 
appender. 
4. Break down your actor receive logic into multiple parts and try to make 
each of these parts async and non-blocking as possible. This will allow 
each actor to be scheduled independently of the other, thereby increasing 
your parallelism. 
5. If some of your actors take longer to process, then schedule them on a 
different dispatcher. 
6. Tweak your dispatcher settings and take measurements to see which ones are 
better. 
7. Add back pressure to bound your memory usage and maximize your resource 
usage. I think Reactive Streams is your best bet here.
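For point 5 above, a minimal plain-Scala sketch of moving blocking work onto a dedicated thread pool (illustrative only; in Akka you would configure a separate dispatcher in your config rather than build a pool by hand):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Dedicated pool for blocking calls, so the main dispatcher's threads stay free
val pool = Executors.newFixedThreadPool(4)
implicit val blockingEc: ExecutionContext = ExecutionContext.fromExecutorService(pool)

def blockingCall(i: Int): Int = { Thread.sleep(10); i * 2 } // stands in for blocking IO

// Eight blocking calls run on the dedicated pool, four at a time
val results = Await.result(
  Future.sequence((1 to 8).map(i => Future(blockingCall(i)))), 30.seconds)

pool.shutdown()
```

The Await here is only for the demo; inside an actor you would pipe the Future's result back as a message instead of blocking on it.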

My experience with Akka is limited. I'm sure there are experts here who 
can give you much better guidance. 

-Soumya




 



[akka-user] Running a stream inside an Actor ?

2014-12-25 Thread Soumya Simanta
My understanding is that a running stream is a bunch of actors underneath 
executing the flow. Assuming this is true, are there any restrictions or 
concerns about running a stream inside a normal Akka actor? 

Thanks
-Soumya




[akka-user] Recording latency of operations that return a Future inside an akka stream

2014-12-25 Thread Soumya Simanta

This is related to the 
https://groups.google.com/forum/#!topic/akka-user/NrSkEwMrS3s thread but 
sufficiently different that I decided to create a new thread. Hope that's 
okay. 

I would like to create a histogram of the latency of a large number of set 
operations (set returns a Future[Boolean]) using LatencyUtils 
(https://github.com/LatencyUtils/LatencyUtils).

Basically I need to start recording the time before the set operation 
(inside mapAsyncUnordered(k => redis.set(k + rnd, message))) and then 
somehow record the end time in a map operation (.map( //record the end time 
here )) after it. I'm having a hard time trying to figure this out. 

My understanding is that even though mapAsyncUnordered doesn't 
maintain the order of operations, the map following the mapAsyncUnordered 
will maintain the order from the previous stage because TCP maintains 
ordering. Is this correct? 


val redis = RedisClient("localhost")

val random1 = UUID.randomUUID().toString

def insertValues(rnd: String): Flow[Int, Boolean] = {
  Flow[Int].mapAsyncUnordered(k => redis.set(k + rnd, message))
    .map(identity) // record the end time here
}

val blackhole = BlackholeSink

val maxSeq = 500
val seqSource = Source(() => (1 to maxSeq).iterator)
val streamWithSeqSource = seqSource.via(insertValues(random1)).runWith(blackhole)
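
One way to get the end time without relying on downstream stage ordering is to measure inside the Future itself, so each element carries its own latency. A stdlib-only sketch; `fakeSet` is a hypothetical stand-in for `redis.set`, and the recorded nanos could be fed to a LatencyUtils recorder in the `map`:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-in for redis.set(key, value): Future[Boolean]
def fakeSet(key: String, value: String): Future[Boolean] = Future { true }

// Capture the start time just before the async call and compute the
// elapsed time when its Future completes, keeping result and latency together.
def timed[T](op: => Future[T]): Future[(T, Long)] = {
  val start = System.nanoTime()
  op.map(res => (res, System.nanoTime() - start))
}

val (ok, latencyNanos) = Await.result(timed(fakeSet("k1", "message")), 2.seconds)
println(ok) // true: the operation's own result is still available
```

Used as `mapAsyncUnordered(k => timed(redis.set(...)))`, each downstream element would be a `(result, latency)` pair, so no ordering assumption is needed at all.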




Re: [akka-user] Trying to understand a sudden drop in throughput with Akka IO

2014-12-23 Thread Soumya Simanta
Endre, thank you again. 
I think you are correct. It looks like the primary limitation is around not 
being able to batch more operations in one network call (TCP). 
I increased the message size (10 times) and I'm able to send more bytes per 
second. At some point I'll hit the network limit. 

The following is for 1 million messages of around 10K each.

https://lh3.googleusercontent.com/-dQTzB-qMZwI/VJndn-gyogI/vaI/H02pACUqdlE/s1600/rediscala_network_IO_1Million_10kmsgsize.png


Can you explain a little more why you wouldn't recommend going any higher 
than 128 for the buffer size of the FlowMaterializer? 

Also, is there a way I can measure the actual latency distribution while 
using akka-streams? 
Something like an HdrHistogram of all the requests. 

Thanks
-Soumya


On Tuesday, December 23, 2014 4:05:58 AM UTC-5, Akka Team wrote:

 Hi,

 On Tue, Dec 23, 2014 at 12:40 AM, Soumya Simanta soumya@gmail.com 
 wrote:

 Endre, 

 Thank you for taking the time to explain everything. It was really 
 helpful not only in understanding the streams basics but also in creating a 
 better/faster version of what I'm trying to do. 
 Before I go any further I want to say that I love Akka streams and it is 
 going to be a useful API for a lot of my future work. Thanks to the Akka 
 team. 

 I tweaked both the dispatcher settings and the type of executor used by 
 the default dispatcher. The program still ends up taking a good deal of 
 my CPU (NOTE: the screenshot below is for FJP and not a ThreadPoolExecutor, 
 but I see similar usage with TPE). 


 I wouldn't worry too much about CPU usage right now; this can be an 
 artifact of various scheduling effects (there is a pinned dispatcher, and FJP 
 can also distort measurements). You can try to use several parallel streams 
 instead of one and see how things scale out horizontally.
  

 The memory footprint is always under control as expected. I gave 12G of 
 heap space to the JVM. 
 The frequency of young-generation GC depends on the MaterializerSettings 
 buffer sizes. I've not tweaked the GC yet. Do you think that can make a 
 difference? 


 Since more random elements (boxed integers) are kept in memory longer with 
 higher buffer sizes, this is expected. In reality you would store real 
 domain objects which are already allocated, so that is less of an issue. 
  


 BTW, does a size of 64 mean that there will be 64 items in each 
 buffer in the pipeline? I bumped it to 512 and saw an increase in 
 throughput. 


 I wouldn't go above 128.
  


 Here is the configuration and screenshots of one of the better runs I 
 had. I'm not sure if I'm limited by TCP or how Rediscala is using Akka IO 
 at this point. 
 Any further insights will be very useful and appreciated. In the meantime 
 I'll continue to play around with different values. 


 I believe you maxed out the streams part, so any other bottleneck will 
 very likely be in the Rediscala client or below. Your screenshot shows that 
 around 70 MByte/s is achieved, which is around 0.5 Gbit/s. Assuming that 
 TCP is used, this is not bad at all.

 -Endre
  


 Thanks again !  

 My machine config is 

 *Darwin 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT 
 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64*

   Processor Name: Intel Core i7 

   Processor Speed: 2.6 GHz

   Number of Processors: 1

   Total Number of Cores: 4

   L2 Cache (per Core): 256 KB

   L3 Cache: 6 MB

   Memory: 16 GB

 *application.conf *

 rediscala {
   rediscala-client-worker-dispatcher {
     mailbox-type = "akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
     throughput = 1000
   }
 }

 actor {
   default-dispatcher {
     type = Dispatcher
     executor = "fork-join-executor"
     default-executor {
       fallback = "fork-join-executor"
     }

     # This will be used if you have set executor = "fork-join-executor"
     fork-join-executor {
       # Min number of threads to cap factor-based parallelism number to
       parallelism-min = 5

       # Max number of threads to cap factor-based parallelism number to
       parallelism-max = 5
     }
     throughput = 1000
   }
 }

 I'm using the following for the FlowMaterializer

  val settings = MaterializerSettings(system)
  implicit val materializer = 
    FlowMaterializer(settings.copy(maxInputBufferSize = 512, 
      initialInputBufferSize = 512))


 https://lh6.googleusercontent.com/-gLBJ7tgfRN4/VJipabIoLgI/vTw/9DdnDszQ55o/s1600/rediscala_network_IO_5Million_backpressure.png


 https://lh4.googleusercontent.com/-USKroaRYgco/VJipgfoSIOI/vT4/nUU-y9BCRXs/s1600/rediscala_network_IO_5Million_backpressure_threads.png


 https://lh6.googleusercontent.com/-w7_OA9C5f7k/VJipl0YW8-I/vUA/wpoDmD0F9xM/s1600/rediscala_network_IO_5Million_backpressure_cpu_memory.png


 On Monday, December 22, 2014 3:56:30 AM UTC-5, Akka Team wrote:

 Hi Soumya

 First of all, the performance of Akka IO (the original actor based one) 
 might be slow or fast

Re: [akka-user] Trying to understand a sudden drop in throughput with Akka IO

2014-12-23 Thread Soumya Simanta



 def insertValues(rnd: String): Flow[Int, Boolean] = {
   Flow[Int].mapAsyncUnordered(k => redis.set(k + rnd, message))
 }

 val maxSeq = 500
 val seqSource = Source(() => (1 to maxSeq).iterator)

 val streamWithSeqSource = seqSource.via(insertValues(random1)).runWith(blackhole)

 My understanding is that the next request is sent to Redis from the 
 client only after a single Future is completed. Is this correct? 


 No, the number of allowed uncompleted Futures is defined by the buffer 
 size. If there were no parallelization between Futures, there would be no 
 need for an ordered and an unordered version of the same operation.
  


Understood. So a map version will always be slower. 


 

 Is there a way I can batch a bunch of set requests and wait for them to 
 be over before I can send a new batch ? 


 If there were a version of set that accepted a Seq of writes, let's 
 say batchSet, then you could use:

 seqSource.grouped(100).mapAsyncUnordered(ks => redis.batchSet(...))

 Where grouped makes groups of at most 100 elements from the stream, 
 resulting in a stream of sequences. You need API support for that from the 
 Redis client though.


Yeah I tried that. Here is the code and the network IO. Throughput is 
better, of course at the cost of latency. I've not figured out a way to 
measure latency. Once I have a reliable way of doing so I can figure out 
what the difference in latency is. 

  seqSource.grouped(100).mapAsyncUnordered { grp =>
    val tran = redis.transaction()
    for (i <- grp) yield {
      tran.set(i + random2, message)
    }
    tran.exec()
  }.runWith(blackhole) 

https://lh6.googleusercontent.com/-uheE7cqhSgQ/VJn6arTCdxI/vaY/hNZgyozl1JE/s1600/5million_1k_messages_grouped100.png



Re: [akka-user] Trying to understand a sudden drop in throughput with Akka IO

2014-12-22 Thread Soumya Simanta
Endre, 

Thank you for taking the time to explain everything. It was really helpful 
not only in understanding the streams basics but also in creating a 
better/faster version of what I'm trying to do. 
Before I go any further I want to say that I love Akka streams and it is 
going to be a useful API for a lot of my future work. Thanks to the Akka 
team. 

I tweaked both the dispatcher settings and the type of executor used by the 
default dispatcher. The program still ends up taking a good deal of 
my CPU (NOTE: the screenshot below is for FJP and not a ThreadPoolExecutor, 
but I see similar usage with TPE). The memory footprint is always under 
control as expected. I gave 12G of heap space to the JVM. 
The frequency of young generation GC depends on the MaterializerSettings 
buffer sizes. I've not tweaked the GC yet. Do you think that can make a 
difference ? 

BTW, does a size of 64 mean that there will be 64 items in each buffer 
in the pipeline? I bumped it to 512 and saw an increase in throughput. 

Here is the configuration and screenshots of one of the better runs I had. 
I'm not sure if I'm limited by TCP or how Rediscala is using Akka IO at 
this point. 
Any further insights will be very useful and appreciated. In the meantime 
I'll continue to play around with different values. 

Thanks again !  

My machine config is 

*Darwin 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT 2014; 
root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64*

  Processor Name: Intel Core i7 

  Processor Speed: 2.6 GHz

  Number of Processors: 1

  Total Number of Cores: 4

  L2 Cache (per Core): 256 KB

  L3 Cache: 6 MB

  Memory: 16 GB

*application.conf *

rediscala {
  rediscala-client-worker-dispatcher {
    mailbox-type = "akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
    throughput = 1000
  }
}

actor {
  default-dispatcher {
    type = Dispatcher
    executor = "fork-join-executor"
    default-executor {
      fallback = "fork-join-executor"
    }

    # This will be used if you have set executor = "fork-join-executor"
    fork-join-executor {
      # Min number of threads to cap factor-based parallelism number to
      parallelism-min = 5

      # Max number of threads to cap factor-based parallelism number to
      parallelism-max = 5
    }
    throughput = 1000
  }
}

I'm using the following for the FlowMaterializer

 val settings = MaterializerSettings(system)
 implicit val materializer = 
   FlowMaterializer(settings.copy(maxInputBufferSize = 512, 
     initialInputBufferSize = 512))

https://lh6.googleusercontent.com/-gLBJ7tgfRN4/VJipabIoLgI/vTw/9DdnDszQ55o/s1600/rediscala_network_IO_5Million_backpressure.png

https://lh4.googleusercontent.com/-USKroaRYgco/VJipgfoSIOI/vT4/nUU-y9BCRXs/s1600/rediscala_network_IO_5Million_backpressure_threads.png

https://lh6.googleusercontent.com/-w7_OA9C5f7k/VJipl0YW8-I/vUA/wpoDmD0F9xM/s1600/rediscala_network_IO_5Million_backpressure_cpu_memory.png


On Monday, December 22, 2014 3:56:30 AM UTC-5, Akka Team wrote:

 Hi Soumya

 First of all, the performance of Akka IO (the original actor based one) 
 might be slow or fast, but it does not degrade if writes are properly 
 backpressured. Also it does not use Futures at all, so I guess this is an 
 artifact of how you drive it.

 Now your first reactive-streams approach didn't work because the map 
 stage that created an actor already let in the next element as soon as the 
 actor was created. What you want is to let in the next element after the 
 element has been written. In other words you just created an ever-growing 
 number of actors without waiting for the write to complete.

 Your second solution is correct though because mapAsyncUnordered only lets 
 in the next elements when the passed future completes -- which in your case 
 corresponds to a finished write. 

 As for the CPU usage, without the actual profile it doesn't say too much. 
 For example if your ForkJoinPool (the default dispatcher) has not much work 
 to do, it will spend a lot of time in its work stealing cycle (scan()) 
 because there is no work to steal. This is purely an artifact of that pool 
 and has nothing to do with the actual CPU usage of the application. You can 
 try an Executor-based pool if you want to test this.
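
Trying an Executor-based pool instead of the FJP is purely a configuration change; a sketch with illustrative, untuned pool sizes (the setting names follow Akka's dispatcher configuration):

```hocon
akka.actor.default-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    # a fixed-size pool removes the FJP work-stealing (scan) noise
    core-pool-size-min = 4
    core-pool-size-max = 4
  }
  throughput = 100
}
```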

 If you really want to play around to see how much throughput is possible, 
 you should try the following approaches step by step:

  - Increase the throughput setting of the dispatcher that executes the 
 stream and redis client. You can try values from 100 to even 1000
  - Increase the default buffer size of the stream materializer (you can 
 pass a MaterializerSettings object). You should try buffer sizes of 32, 64, 
 128.

 Btw, streams are currently not optimized at all, so don't get overly high 
 expectations yet :)

 -Endre

 On Mon, Dec 22, 2014 at 4:53 AM, Soumya Simanta soumya@gmail.com 
 wrote:

 Looks like my akka-streams code was not doing back pressure. Not sure how 
 I

Re: [akka-user] Trying to understand a sudden drop in throughput with Akka IO

2014-12-21 Thread Soumya Simanta
 by comparison. It's smaller, simpler, but still looks 
familiar. Personally, I like to remember why we made ZeroMQ at all, because 
that's most likely where

}

https://lh6.googleusercontent.com/-3ymr3gm5E9U/VJb7RnuFuGI/vTM/qKDLpLMvlF4/s1600/rediscala_network_io_100actors.png


On Saturday, December 20, 2014 9:07:53 AM UTC-5, Soumya Simanta wrote:

 Endre, thank you for responding. Following is what the author of Rediscala 
 has to say. 

 *Yes, I noticed it during my tests; at some point the scaling is 
 exponential (bad).*


 *I suspected the thread scheduler to be the limitation. Or the way 
 Future.sequence works.*

 *If you can isolate a test that scales linearly up to 1M futures, I 
 would be interested to see it. By replacing akka-io with another java.nio 
 library (xnio) I was able to pass 1M req (at a speed of around 500k 
 req/s).*

 https://github.com/etaty/rediscala/issues/67

 If replacing akka-io with java.nio resolves this then either akka-io is 
 not used correctly in Rediscala OR it is a fundamental limitation of 
 akka-io. 

 My other responses inline. 


 On Saturday, December 20, 2014 6:35:22 AM UTC-5, Akka Team wrote:

 Hi,

 My personal guess is that since you don't obey any backpressure when you 
 start flooding the redis client with requests you end up with a lot of 
 queued messages and probably high GC pressure. You can easily test this by 
 looking at the memory profile of your test.


 Yes, the memory pressure is indeed high. The young generation (Eden) space 
 fills up very quickly and then a minor GC is kicked off. 
 Can I use akka-streams to resolve and add backpressure here? Any pointers 
 here will be greatly appreciated. 
  



 On Sat, Dec 20, 2014 at 6:55 AM, Soumya Simanta soumya@gmail.com 
 wrote:

 val res: Future[List[Boolean]] = Future.sequence(result.toList)
 val end = System.currentTimeMillis()
 val diff = end - start
 println(s"for msgSize $msgSize and numOfRuns [$numberRuns] time is $diff ms")
 akkaSystem.shutdown()

 }

 What does the above code intend to measure? Didn't you want to actually 
 wait on the res future?

 Yes, you are correct again. I should be waiting on res in order to get an 
 estimate of overall latency. 
  




Re: [akka-user] Trying to understand a sudden drop in throughput with Akka IO

2014-12-21 Thread Soumya Simanta
Looks like my akka-streams code was not doing back pressure. Not sure how I 
can change it to handle back pressure. 

Then I changed my code to the following. I borrowed the code from one of 
the Akka stream activator examples (WritePrimes). I added a buffer in 
between that also helped significantly. 


  val maxRandomNumberSize = 100
  val randomSource = Source(() =>
    Iterator.continually(ThreadLocalRandom.current().nextInt(maxRandomNumberSize)))

  def insertValues: Flow[Int, Boolean] = {
    Flow[Int].mapAsyncUnordered(k => redis.set(k + random, message))
  }

  val blackhole = BlackholeSink

  //val stream = source.via(insertValues).runWith(blackhole) // no buffer
  val streamWithRandomSource = randomSource.buffer(2,
    OverflowStrategy.backpressure).via(insertValues).runWith(blackhole)

Now the network IO looks much more uniform. It's a pleasure to see back 
pressure work (visually) :-)

https://lh5.googleusercontent.com/-wpEVIqgvT3k/VJeVkEsl50I/vTc/ZGlyonJbof8/s1600/rediscala_network_io_1actor_backpressure.png


I did see my CPU usage bump up in this version. Any reason why ? 


On Sunday, December 21, 2014 11:55:01 AM UTC-5, Soumya Simanta wrote:

 Here is my attempt to create a version with back pressure using Reactive 
 Streams. I'm not sure it is completely correct. Can someone please verify 
 the code below? 
 Even with this version I don't see any change in throughput, and the 
 network IO graph looks very similar to what I had without using reactive 
 streams. 

 On the other hand if I use 100 Rediscala client actors the inserts are much 
 faster. I understand that now there are 100 queues (mailboxes) and 
 therefore its faster. But I still don't understand why the performance is 
 so bad for a single client after a certain threshold, even after using back 
 pressure (assuming I'm using Akka streams correctly).


 *Code with Akka streams and one Rediscala client. *

 import java.util.UUID

 import akka.actor.ActorSystem
 import akka.stream.FlowMaterializer
 import akka.stream.scaladsl.Source
 import akka.util.ByteString
 import redis.RedisClient

 object RedisStreamClient extends App {

   val message = "How to explain ZeroMQ? Some of us start by saying all the wonderful things it does. It's sockets on steroids. It's like mailboxes with routing. It's fast! Others try to share their moment of enlightenment, that zap-pow-kaboom satori paradigm-shift moment when it all became obvious. Things just become simpler. Complexity goes away. It opens the mind. Others try to explain by comparison. It's smaller, simpler, but still looks familiar. Personally, I like to remember why we made ZeroMQ at all, because that's most likely where you, the reader, still are today.How to explain ZeroMQ? Some of us start by saying all the wonderful things it does. It's sockets on steroids. It's like mailboxes with routing. It's fast! Others try to share their moment of enlightenment, that zap-pow-kaboom satori paradigm-shift moment when it all became obvious. Things just become simpler. Complexity goes away. It opens the mind. Others try to explain by comparison. It's smaller, simpler, but still looks familiar. Personally, I like to remember why we made ZeroMQ at all, because that's most likely where"

   implicit val system = ActorSystem("Sys")

   implicit val materializer = FlowMaterializer()

   val msgSize = message.getBytes.size

   val redis = RedisClient()
   implicit val ec = redis.executionContext
   val random = UUID.randomUUID().toString

   val source = Source(() => (1 to 100).iterator)
   source.map { x => x + 1 }
     .foreach(x => redis.set(random + x.toString, ByteString(message)))
     .onComplete(_ => system.shutdown())
 }


 https://lh4.googleusercontent.com/-5aG6CMqBLcA/VJb7J3qqdVI/vTE/kkdPVBT6AbQ/s1600/rediscala_network_IO.png



 *Code with 100 Rediscala clients.*

 import akka.actor.{Actor, ActorLogging, Props}
 import akka.util.ByteString
 import redisbenchmark.RedisBenchmarkActor.InsertValues

 import scala.concurrent.duration.Duration
 import scala.concurrent.{Await, Future}
 import redis.RedisClient

 import java.util.UUID

 object RedisLocalPerfMultipleActors {

   def main(args: Array[String]): Unit = {
     implicit val akkaSystem = akka.actor.ActorSystem()
     // create 100 RedisClient actors
     val actors = 1 to 100
     actors.map { x =>
       akkaSystem.actorOf(Props(new RedisBenchmarkActor(1)), "actor" + x)
     }.foreach { actor => actor ! InsertValues }

     // TODO shutdown the actor system
     // Not sure how to wait for all Futures to complete before shutting
     // down the actor system
   }
 }
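
For the TODO above, one hedged approach is to have each insert job expose a Future (e.g. via the ask pattern on each actor) and block once on their combination before shutting down. A stdlib-only sketch with plain Futures standing in for the 100 actors' replies (the jobs themselves are simulated):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Stand-ins for the replies of the 100 RedisBenchmarkActor inserts;
// in the real code these would come from something like `actor ? InsertValues`.
val jobs: Seq[Future[Boolean]] = (1 to 100).map(_ => Future { true })

// Collapse all replies into one Future and block on it exactly once.
val all: Future[Seq[Boolean]] = Future.sequence(jobs)
val results = Await.result(all, 10.seconds)
println(results.size) // 100: every insert has reported back
// akkaSystem.shutdown() would be safe to call at this point
```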


 class RedisBenchmarkActor(runs: Int) extends Actor with ActorLogging {

   val redis = RedisClient()
   //implicit val ec = redis.executionContext
   log.info(s"Actor created with $runs")

   def receive = {

     case InsertValues => {

       log.info("Inserting values")

       val random = UUID.randomUUID().toString
       val start

Re: [akka-user] Trying to understand a sudden drop in throughput with Akka IO

2014-12-20 Thread Soumya Simanta
Endre, thank you for responding. Following is what the author of Rediscala 
has to say. 

*Yes, I noticed it during my tests; at some point the scaling is exponential 
(bad).*


*I suspected the thread scheduler to be the limitation. Or the way 
Future.sequence works.*

*If you can isolate a test that scales linearly up to 1M futures, I would 
be interested to see it. By replacing akka-io with another java.nio library 
(xnio) I was able to pass 1M req (at a speed of around 500k req/s).*

https://github.com/etaty/rediscala/issues/67

If replacing akka-io with java.nio resolves this then either akka-io is not 
used correctly in Rediscala OR it is a fundamental limitation of akka-io. 

My other responses inline. 


On Saturday, December 20, 2014 6:35:22 AM UTC-5, Akka Team wrote:

 Hi,

 My personal guess is that since you don't obey any backpressure when you 
 start flooding the redis client with requests you end up with a lot of 
 queued messages and probably high GC pressure. You can easily test this by 
 looking at the memory profile of your test.


Yes, the memory pressure is indeed high. The young generation (Eden) space 
fills up very quickly and then a minor GC is kicked off. 
Can I use akka-streams to resolve and add backpressure here? Any pointers 
here will be greatly appreciated. 
 



 On Sat, Dec 20, 2014 at 6:55 AM, Soumya Simanta soumya@gmail.com 
 wrote:

 val res: Future[List[Boolean]] = Future.sequence(result.toList)
 val end = System.currentTimeMillis()
 val diff = end - start
 println(s"for msgSize $msgSize and numOfRuns [$numberRuns] time is $diff ms")
 akkaSystem.shutdown()

 }

 What does the above code intend to measure? Didn't you want to actually 
 wait on the res future?

 Yes, you are correct again. I should be waiting on res in order to get an 
estimate of overall latency. 
 



[akka-user] Orleans - Open source Actor model implementation by Microsoft

2014-12-16 Thread Soumya Simanta
http://orleans.codeplex.com/



Re: [akka-user] Fundamental Akka Dispatcher questions

2014-12-10 Thread Soumya Simanta
Thank you Björn. This is very helpful. 

-Soumya


On Wednesday, December 10, 2014 3:28:43 PM UTC-5, Björn Antonsson wrote:

 Hi Soumya,

 On 8 December 2014 at 06:47:04, Soumya Simanta (soumya@gmail.com) wrote: 

 This question might seem simple to a few but I'm still trying to 
 understand how things actually work in Akka in terms of the dispatcher. 
 I'm hoping Akka experts here can help me with this. Thanks in advance. 

 I've read and heard in a number of places that it's always a good idea to 
 provide different dispatchers for different parts of your application 
 (ActorSystem).  
 My understanding is that a dispatcher corresponds to a thread pool. Is 
 this correct?  

 1. Will each dispatcher have equal priority, i.e., will none of the actors 
 on any dispatcher starve? 


 The dispatchers are basically thread pools, and if you don't configure 
 them in any special way, the threads will all have the same priority.



 2. In case of multiple dispatchers, if a few dispatchers have idle threads 
 and other dispatchers have more work than threads, won't this lead to 
 under-utilization of the CPU? 


 This of course depends on the number of threads and CPU cores you have. 
 All runnable threads (regardless of dispatcher) will try to run and be 
 scheduled by the operating system.



 3. Is there an overhead of switching between dispatchers? For example, if 
 the ActorSystem has only one dispatcher with 100 threads vs 10 
 dispatchers with 10 threads each, are both these configurations equivalent 
 in terms of performance and context-switching overhead?


 Switching between threads always has some overhead. That the threads are 
 artificially grouped into dispatchers doesn't affect this.




 4. Is there any difference in performance in the following two 
 configurations: 
 4.a. Imagine two dispatchers Dispatcher-A and Dispatcher-B. Now if we run 
 only one type of task (i.e., message type in the mailbox of Actor-A is of 
 type Msg-A and of Actor-B is of type Msg-B) on each dispatcher. 
 4.b. Only one dispatcher (Dispatcher) that handles Actor-A and Actor-B 
 messages, Msg-A and Msg-B respectively. 


 That is too general a performance question. It depends on the time it 
 takes to process messages of type A and B and how many messages of each 
 type there are. The rule of thumb, though, is that you don't need to start 
 tweaking dispatchers until you have identified an unwanted behavior or 
 performance bottleneck, or you know up front that the processing of a 
 certain type of message takes a long time and would effectively starve the 
 processing of other messages.
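
For that long-running-message case, the usual remedy is a dedicated dispatcher for the slow actors; a configuration sketch (the dispatcher name and pool sizes are illustrative):

```hocon
# application.conf: a separate pool for actors doing slow work
slow-work-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min = 4
    core-pool-size-max = 4
  }
  throughput = 1  # hand the thread back after each message
}
```

The slow actor is then pinned to it at creation time, e.g. `system.actorOf(Props[SlowActor].withDispatcher("slow-work-dispatcher"))`, keeping the default dispatcher free for latency-sensitive actors.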

 B/

 Thanks
 -Soumya









 -- 
 Björn Antonsson
 Typesafe http://typesafe.com/ – Reactive Apps on the JVM
 twitter: @bantonsson http://twitter.com/#!/bantonsson





[akka-user] Re: Application throughput decreasing with time

2014-12-09 Thread Soumya Simanta
What kind of work is done by your server actors? I mean, what's in the 
receive method of the server actors? 
Most likely your actors' mailboxes are getting bigger with time. 
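
If the mailbox is indeed growing without bound, one way to surface the problem early is a bounded mailbox, which makes sends fail fast once a capacity is reached instead of accumulating silently; a configuration sketch with illustrative values:

```hocon
# application.conf: a capacity-limited mailbox to expose backlog
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1000
  mailbox-push-timeout-time = 100ms
}
```

It would then be assigned to the server actors, e.g. via `Props(...).withMailbox("bounded-mailbox")` or the deployment configuration.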

On Tuesday, December 9, 2014 7:31:40 PM UTC-5, Yogesh wrote:

 Hi,

 I am writing a client-server application using akka and scala which works 
 as follows:

 On the client machine I create 10 user actors, each with its own scheduler.
 All schedulers send messages to the server continuously after a specified 
 duration.

 On the server machine I have a round-robin router of server actors.
 Server actors process the messages received from the user actors.

 Problem Description: The application works fine for a while, but its 
 throughput starts decreasing with time. By throughput I mean the number 
 of messages processed by the server actors as well as the number of 
 messages sent by the user actors.

 Could someone please explain me why this is happening and how can it be 
 resolved?

 Thanks in advance.




[akka-user] Re: Application throughput decreasing with time

2014-12-09 Thread Soumya Simanta
Yogesh, 

Are the client and server running on the same physical node ? 
How long does it take for things to slow down? 

I would suggest you look into the actors' mailboxes on both the client and 
the server. 
Also, look at the state of the JVM using something like VisualVM/YourKit. 

-Soumya



On Tuesday, December 9, 2014 10:47:31 PM UTC-5, Yogesh wrote:

 Hi Soumya,

 Thanks for your reply.
 User actors are sending character arrays of size 140, and server actors are 
 storing those in a shared datastore (which is basically a concurrent 
 TrieMap)

 While load testing I observed that both the server and the client get 
 slow with time. Over time the client (user actors) sends fewer messages, 
 and the server processes even fewer than the client is sending.

 Regards,
 Yogesh

 On Tuesday, December 9, 2014 10:40:08 PM UTC-5, Soumya Simanta wrote:

 What kind of work is done by your server actors ? I mean what's in the 
 receive method of the server actors ? 
 Most likely your actor's mailbox is getting bigger with time. 

 On Tuesday, December 9, 2014 7:31:40 PM UTC-5, Yogesh wrote:

 Hi,

 I am writing a client-server application using akka and scala which 
 works as follows:

 On client machine I create 10 user actors, each with its own 
 scheduler
 All schedulers sends messages to server after a specified duration 
 continuously

 On Server machine I have a round robin router of server actors
 Server actors process messages received from user actors

 Problem Description: The application works fine for a while but its 
 throughput starts decreasing with time and by throughput I mean the number 
 of messages processed by the server actors as well as the messages sent by 
 the user actors

 Could someone please explain me why this is happening and how can it be 
 resolved?

 Thanks in advance.





[akka-user] Re: Application throughput decreasing with time

2014-12-09 Thread Soumya Simanta
You can use http://kamon.io/ to monitor your application. They have a 
Docker image that contains everything. You need to configure your 
application.conf to use Kamon. 

What does your heap memory look like in YourKit? And how frequently is the 
garbage collector doing minor and full GCs? 

Here are other things you can try: 

1. Just try discarding the message in the server actor (instead of putting 
it in the concurrent TrieMap) and see if the behavior changes. 
One possible cause may be that the TrieMap is not able to handle the rate of 
your messages, causing things to back up. 

2. Try reducing the number of client actors to 10K and see if something 
changes. 

3. Try increasing the number of actors for the round-robin router. 
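Suggestion 1 can be rehearsed outside Akka first. A rough plain-Scala sketch (the `StoreVsDiscard` name, message size, and types are illustrative stand-ins, not the original server code) comparing the discard path with the shared-TrieMap write path:

```scala
import scala.collection.concurrent.TrieMap

object StoreVsDiscard {
  // Handle `n` 140-char "messages", either discarding them or writing
  // them into a shared concurrent TrieMap (mirrors suggestion 1 above).
  def handle(n: Int, store: Boolean): TrieMap[Int, Array[Char]] = {
    val map = TrieMap.empty[Int, Array[Char]]
    var i = 0
    while (i < n) {
      val msg = Array.fill(140)('a')
      if (store) map.put(i, msg)
      i += 1
    }
    map
  }

  def main(args: Array[String]): Unit = {
    def timeMs(f: => Unit): Long = {
      val t0 = System.nanoTime(); f; (System.nanoTime() - t0) / 1000000
    }
    val discard = timeMs(handle(200000, store = false))
    val stored  = timeMs(handle(200000, store = true))
    // If the store path is dramatically slower, or keeps growing the
    // heap, the datastore write is a likely suspect for the slowdown.
    println(s"discard: ${discard}ms, store: ${stored}ms")
  }
}
```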

 




On Tuesday, December 9, 2014 11:33:35 PM UTC-5, Yogesh wrote:

 Soumya,

 On Tuesday, December 9, 2014 11:18:42 PM UTC-5, Soumya Simanta wrote:

 Yogesh, 

 Are the client and server running on the same physical node ? 

 [YA] Even if I run client and server on different machines I am getting 
 similar result
  

 How long does it take for things to slow down? 

  [YA] Within 100-200 sec things start slowing down 


 I would suggest you look into the actor's mailbox both on the client as 
 well as the server. 

   [YA] I thought of doing that but I couldn't find a way to inquire 
 actor's mailbox size. Do you know how that can be done?
  

 Also, look at the state of the JVM using something like VisualVM/YourKit. 

 [YA] I already did it using YourKit profiler. I observed that 2 
 back-off-remote-dispatchers were using 20% of the CPU time and one of the 
 IO-workers was using 9% of CPU, and I observed that most of the 
 default-dispatcher and thread-pool instances were in waiting state. I am 
 not sure what were they waiting for.


 -Soumya



 On Tuesday, December 9, 2014 10:47:31 PM UTC-5, Yogesh wrote:

 Hi Soumya,

 Thanks for your reply.
 User actors are sending character arrays of size 140 and server actors 
 are storing those in a shared datastore (which is basically a concurrent 
 TrieMap)

 While loading testing I observed that both the server and the client 
 gets slow with time. With time client (user actors) sends less number of 
 messages and server processes even lesser of what client is sending.

 Regards,
 Yogesh

 On Tuesday, December 9, 2014 10:40:08 PM UTC-5, Soumya Simanta wrote:

 What kind of work is done by your server actors ? I mean what's in the 
 receive method of the server actors ? 
 Most likely your actor's mailbox is getting bigger with time. 

 On Tuesday, December 9, 2014 7:31:40 PM UTC-5, Yogesh wrote:

 Hi,

 I am writing a client-server application using akka and scala which 
 works as follows:

 On client machine I create 10 user actors, each with its own 
 scheduler
 All schedulers sends messages to server after a specified duration 
 continuously

 On Server machine I have a round robin router of server actors
 Server actors process messages received from user actors

 Problem Description: The application works fine for a while but its 
 throughput starts decreasing with time and by throughput I mean the 
 number 
 of messages processed by the server actors as well as the messages sent 
 by 
 the user actors

 Could someone please explain me why this is happening and how can it 
 be resolved?

 Thanks in advance.





[akka-user] Fundamental Akka Dispatcher questions

2014-12-07 Thread Soumya Simanta
This question might seem simple to a few but I'm still trying to understand 
how things actually work in Akka in terms of the dispatcher. 
I'm hoping Akka experts here can help me with this. Thanks in advance. 

I've read and heard in a number of places that it's always a good idea to 
provide different dispatchers for different parts of your application 
(ActorSystem).  
My understanding is that a dispatcher corresponds to a thread pool. Is 
this correct?  

1. Will each dispatcher have equal priority, i.e., will none of the actors 
on any dispatcher starve? 
2. In the case of multiple dispatchers, if a few dispatchers have idle 
threads and other dispatchers have more work than threads, won't this lead 
to underutilization of the CPU? 
3. Is there an overhead to switching between dispatchers? For example, 
consider an actor system with one dispatcher of 100 threads vs. 10 
dispatchers with 10 threads each. Are both these configurations equivalent 
in terms of performance and context-switching overhead?

4. Is there any difference in performance between the following two 
configurations? 
4.a. Two dispatchers, Dispatcher-A and Dispatcher-B, where each dispatcher 
runs only one type of task (i.e., messages in the mailbox of Actor-A are of 
type Msg-A and those of Actor-B are of type Msg-B). 
4.b. A single dispatcher that handles both Actor-A's and Actor-B's 
messages (Msg-A and Msg-B respectively). 

Thanks
-Soumya






Re: [akka-user] Re: [ANNOUNCE] Akka Streams HTTP 1.0 MILESTONE 1

2014-12-05 Thread Soumya Simanta
This is great work by the Akka team. Congrats and thank you ! 

I have a stream-processing system built with Akka where I have to handle 
back pressure explicitly (which is kind of painful). 
Really glad that it will now be supported out of the box. 

Just curious: how much better is akka-http (in terms of performance and 
other attributes) than Netty? 
I'm sure there must be good reasons for moving from Netty to akka-http in 
future versions of Play. I would appreciate it if someone could summarize 
them or point to the rationale for the change. 

Thanks again for the awesome work. 

-Soumya





On Friday, December 5, 2014 10:59:33 AM UTC-5, Jonas Bonér wrote:



 On Fri, Dec 5, 2014 at 2:42 PM, pagoda_5b ivano@gmail.com wrote:

 Very nice job indeed.
 Just a question, but this could be the wrong place so please redirect me 
 if needed.

 What about overlapping features between akka-http and playframework?
 Does Typesafe have any plans to integrate the two, choose akka-http for 
 both, or let them proceed on separate paths?


 The plan is to put Play on top of: 
 - akka-streams: as a simpler alternative to Iteratees (which might get 
 deprecated)
 - akka-http: instead of running on top of Netty (see this PR 
 https://github.com/playframework/playframework/pull/3570) 

 So Play will be layered right on top of akka-http which is sitting right 
 on top of akka-streams. 

 I'm mainly concerned about the definition of routes and request handlers, 
 which differs in design between the projects.
  

 Thanks and cheers
 Ivano

 On Thursday, December 4, 2014 6:08:30 PM UTC+1, rkuhn wrote:

 Dear hakkers,

 we are very pleased to announce the availability of the first milestone 
 release of the upcoming Akka Streams and Akka HTTP modules. The 
 significance of this milestone is that we have roughly reached feature 
 parity with Spray (sans spray-servlet that we will not port) and the 
 underlying streams are mature enough to take a closer look at their API.

 *It is important to note that we have focused entirely on the API and 
 its semantics, performance has not yet been optimized. Any benchmarks done 
 at this point are likely to be invalidated within weeks as we continue to 
 work on these projects.* (Nevertheless we find it performant enough to 
 play around with, so don’t hold back!)

 So, what is it that we are so excited about? Place a dependency on 
 "com.typesafe.akka" % "akka-stream-experimental_2.11" % "1.0-M1" (also 
 available for Scala 2.10) into your build definition and you can write an 
 echo server implementation like so:

 val toUppercase = Flow[ByteString].map(bs =>
   bs.map(_.toChar.toUpper.asInstanceOf[Byte]))

 StreamTcp().bind(serverAddress).connections.foreach { conn =>
   println("Client connected from: " + conn.remoteAddress)
   conn handleWith toUppercase
 }

 and then you can connect to this server and use it as an uppercase 
 service:

 val result: Future[ByteString] =
   Source(testInput)
   .via(StreamTcp().outgoingConnection(serverAddress).flow)
   .fold(ByteString()) { (acc, in) ⇒ acc ++ in }

 This will of course return a Future in order to keep everything nicely 
 asynchronous and non-blocking. For more details you can take a look at the 
 API docs (for Java 
 http://doc.akka.io/japi/akka-stream-and-http-experimental/1.0-M1/ and 
 Scala 
 http://doc.akka.io/api/akka-stream-and-http-experimental/1.0-M1/#package).
  
 On the HTTP front we kept the routing DSL very close to the one you know 
 from spray-routing, and we supplemented it with a nice Java DSL as well 
 (for 
 the full program see here 
 https://github.com/akka/akka/blob/akka-stream-and-http-experimental-1.0-M1/akka-http-java-tests/src/main/java/akka/http/server/japi/examples/simple/SimpleServerApp.java,
  
 you’ll need to add akka-http-java-experimental_2.11 as dependency):

 route(
   // matches the empty path
   pathSingleSlash().route(
     getFromResource("web/calculator.html")
   ),
   // matches paths like this: /add?x=42&y=23
   path("add").route(
     handleWith(addHandler, x, y)
   ),
   // matches paths like this: /multiply/{x}/{y}
   path("multiply", xSegment, ySegment).route(
     // bind handler by reflection
     handleWith(SimpleServerApp.class, "multiply", xSegment, ySegment)
   )
 )

 You can find examples also in Activator templates (for Java 
 https://typesafe.com/activator/template/akka-stream-java8 and Scala 
 https://typesafe.com/activator/template/akka-stream-scala).

 Over the next weeks we will continue to work on smoothing out kinks and 
 edges, in particular those that you—dear reader—report, either on this 
 mailing list or via github issues 
 https://github.com/akka/akka/issues/new. Prominent features that are 
 yet to be done are SSL support and WebSockets, for an exhaustive list you 
 can check our ticket list 
 

Re: [akka-user] How to corroborate akka messages and their requests

2014-11-30 Thread Soumya Simanta
As far as I know, there are only the following two options: 

1. Use a requestId (or a context object, as in Spray) that you pass along 
with your messages to the actors in the chain. The advantage is that you 
are not setting a timeout here, but you have to deal with correlating 
messages yourself. 
2. Use the ask pattern, where you need to set a timeout but Akka will take 
care of matching the request with the response. 
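Option 1 (a client-chosen correlation ID) can be sketched in plain Scala like this (the `Request`/`Response`/`Correlator` types are illustrative only, not part of any Akka API):

```scala
import scala.collection.mutable

// Illustrative message types carrying a client-chosen correlation ID.
final case class Request(correlationId: Long, payload: String)
final case class Response(correlationId: Long, result: String)

final class Correlator {
  private var nextId = 0L
  private val pending = mutable.Map.empty[Long, String] // id -> caller context

  // Attach a fresh ID to the outgoing request and remember the context under it.
  def send(payload: String, context: String): Request = {
    nextId += 1
    pending += nextId -> context
    Request(nextId, payload)
  }

  // Pair a (possibly out-of-order) response back with its saved context.
  def complete(resp: Response): String =
    pending.remove(resp.correlationId).getOrElse(sys.error("unknown correlation id"))
}

object CorrelationDemo {
  // The responder must echo the client-chosen ID back unchanged.
  def respond(req: Request): Response =
    Response(req.correlationId, req.payload.toUpperCase)

  def main(args: Array[String]): Unit = {
    val c = new Correlator
    val r1 = c.send("foo", "ctx-A")
    val r2 = c.send("bar", "ctx-B")
    // Responses arrive out of order but still find the right context.
    println(c.complete(respond(r2))) // prints ctx-B
    println(c.complete(respond(r1))) // prints ctx-A
  }
}
```

The key property is that the responder echoes the ID back untouched, so no timeout is needed to pair requests with responses.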

I was wondering if there is any other way of doing this ? 

Thanks
-Soumya


On Sunday, November 30, 2014 5:39:47 AM UTC-5, Balázs Kossovics wrote:

 Hi Karthik,

 Did you check out the ask pattern (
 http://doc.akka.io/docs/akka/snapshot/scala/actors.html#Ask__Send-And-Receive-Future)?
  
 It may be the thing you need.




Re: [akka-user] How to corroborate akka messages and their requests

2014-11-30 Thread Soumya Simanta
Roland, 

Thanks for explaining this very nicely. This is consistent with my 
understanding. 

I try to avoid using the ask pattern (if I can) because of the timeout 
issue. 

So basically, if you have complete control over all your messages, it is 
better to explicitly pass a unique message identifier and use that to track 
the request-response flow. 
Do you agree with this recommendation? 

Thanks again ! 
-Soumya



On Sunday, November 30, 2014 10:02:17 AM UTC-5, rkuhn wrote:


 On 30 Nov 2014, at 14:53, Soumya Simanta soumya@gmail.com wrote:

 As far as I know the following only two options: 

 1. Use a requestId (or context object as in Spray) that you pass along 
 with your messages to your actors in the chain. The advantage is that you 
 are not setting a timeout here. But you have to deal with co-relating 
 messages yourself. 
 2. Use the ask pattern where you need to set a timeout but Akka will take 
 care of getting the matching the request with the response. 

 I was wondering if there is any other way of doing this ? 


 If you consider that the real message is Envelope(payload, sender), these 
 two options collapse into one: the only way to make sense of the response 
 is to include identifying information in the request, which can either be 
 placed in the payload or the sender fields. Using “ask” does the latter by 
 creating a unique one-time recipient.

 You can think of the difference also as

1. the meta-information travels with the message (and needs to be 
understood by the recipient)
2. the meta-information stays with the sender (in the form of the 
“ask”-ActorRef and possible Future transformation closures)


 The second case is the only possibility if the recipient’s protocol does 
 not allow disambiguation:

 case Whatever(..., x) =>
   otherActor ? Request(...) collect {
     case r: Response => ResponseWithContext(r, x)
   } pipeTo self
 case ResponseWithContext(r, x) =>
   // continue the process

 The value `x` above is the identifying piece that allows the actor to keep 
 different requests separate, and if the `otherActor` cannot pass back this 
 information in its `Response` then we can remember it in the `collect` 
 closure and piece things together afterwards.

 This should motivate why you should always include client-chosen 
 identifiers in the Actor protocols you design, because that makes this kind 
 of dance unnecessary (i.e. `Request` should allow passing along `x`—usually 
 called a correlation ID—and Response should just include that value as 
 well).

 Regards,

 Roland


 Thanks
 -Soumya


 On Sunday, November 30, 2014 5:39:47 AM UTC-5, Balázs Kossovics wrote:

 Hi Karthik,

 Did you check out the ask pattern (
 http://doc.akka.io/docs/akka/snapshot/scala/actors.html#Ask__Send-And-Receive-Future)?
  
 It may be the thing you need.






 *Dr. Roland Kuhn*
 *Akka Tech Lead*
 Typesafe http://typesafe.com/ – Reactive apps on the JVM.
 twitter: @rolandkuhn
 http://twitter.com/#!/rolandkuhn
  




Re: [akka-user] Akka actors not processing messages after a while

2014-11-28 Thread Soumya Simanta


 Sometimes you have to deviate from that principle and then you have to 
 carefully manage the complexity that arises.


 Patrik, 

Can you give an example of when you have to deviate and share mutable state 
between actors? 

Thanks
-Soumya
 



Re: [akka-user] Akka Streams documentation and examples

2014-11-25 Thread Soumya Simanta
Thanks Patrik. Hope to see some more documentation soon. 

On Tuesday, November 25, 2014 2:13:26 AM UTC-5, Patrik Nordwall wrote:

 There is no reference documentation yet.

 You find API documentation here:

 http://doc.akka.io/api/akka-stream-and-http-experimental/0.11/

 http://doc.akka.io/japi/akka-stream-and-http-experimental/0.11/


 and getting started Activator templates:

 http://typesafe.com/activator/template/akka-stream-scala

 http://typesafe.com/activator/template/akka-stream-java8

 Regards,
 Patrik

 On Tue, Nov 25, 2014 at 7:19 AM, Soumya Simanta soumya@gmail.com wrote:

 I tried this, but cannot find anything there. 

 http://doc.akka.io/docs/akka-stream-and-http-experimental/0.11/scala.html?_ga=1.67564625.1938417986.1393726085



 On Tuesday, November 25, 2014 1:09:46 AM UTC-5, Piyush Mishra wrote:

 Visit akka.io

 Piyush Mishra
 *Blog* https://www.linkedin.com/in/piyush1989 | *LinkedIn 
 https://www.linkedin.com/in/piyush1989*

 Skype : piyush.mishra275
 Hangout :piyushmishra889
 Mobile : +91-8860876875

 On Tue, Nov 25, 2014 at 11:38 AM, Soumya Simanta soumya@gmail.com 
 wrote:

 Can someone point to documentation and examples for Akka Streams. 

 Thanks
 -Soumya







 -- 

 Patrik Nordwall
 Typesafe http://typesafe.com/ -  Reactive apps on the JVM
 Twitter: @patriknw

  



[akka-user] Re: Trying to figure out why all threads for a dispatcher will block at the same time consistently

2014-11-25 Thread Soumya Simanta
Yogesh, 

The screenshots are from YourKit JVM profiler. You can also use VisualVM. 

-Soumya


On Tuesday, November 25, 2014 3:12:44 PM UTC-5, Yogesh wrote:

 Hi Soumya,

 I am facing a similar issue with my application. Could you please let me 
 know the tool you used to monitor the state of the running threads in your 
 application. I would like to confirm if this exactly what is happening with 
 my application also.

 Thanks,
 Yogesh

 On Tuesday, November 25, 2014 12:12:14 PM UTC-5, Soumya Simanta wrote:

 Not all threads in my dispatcher are running to 100%. 
 Following is my dispatcher. 

   my-dispatcher{
 type = Dispatcher
 executor = fork-join-executor
 fork-join-executor {
   # Min number of threads to cap factor-based parallelism number to
   parallelism-min = 8
   # Parallelism (threads) ... ceil(available processors * factor)
   parallelism-factor = 2.0
   # Max number of threads to cap factor-based parallelism number to
   parallelism-max = 120
 }
 throughput = 200
   }

 I'm calling my actors using a router. 

   val actor = context.actorOf(Props(new MyActor(pubRedisClient, 
 pubChanName)).withRouter(SmallestMailboxRouter(nrOfInstances = 
 20)).withDispatcher("akka.my-dispatcher"), name = "my-analyzer-router")

 What I cannot understand is that all threads in my dispatcher go into 
 the blocking state at the same time (please see the screenshots). Looks 
 like this is happening consistently. My intuition is that if there is a 
 blocking piece of code, it's very strange that it is being executed at 
 exactly the same time again and again. 

 Any idea why this would be happening? 

 Thanks
 -Soumya









[akka-user] Re: Trying to figure out why all threads for a dispatcher will block at the same time consistently

2014-11-25 Thread Soumya Simanta
Looks like the following is the call that blocks. 

ForkJoinPool.java:2075 sun.misc.Unsafe.park(boolean, long) 



On Tuesday, November 25, 2014 12:12:14 PM UTC-5, Soumya Simanta wrote:

 Not all threads in my dispatcher are running to 100%. 
 Following is my dispatcher. 

   my-dispatcher{
 type = Dispatcher
 executor = fork-join-executor
 fork-join-executor {
   # Min number of threads to cap factor-based parallelism number to
   parallelism-min = 8
   # Parallelism (threads) ... ceil(available processors * factor)
   parallelism-factor = 2.0
   # Max number of threads to cap factor-based parallelism number to
   parallelism-max = 120
 }
 throughput = 200
   }

 I'm calling my actors using a router. 

   val actor = context.actorOf(Props(new MyActor(pubRedisClient, 
 pubChanName)).withRouter(SmallestMailboxRouter(nrOfInstances = 
 20)).withDispatcher(akka.my-dispatcher), name = my-analyzer-router)

 What I cannot understand is that all threads in my dispatching go into the 
 blocking state at the same time (please see the screenshots). Looks like 
 this is happening consistently. My intuition is if there is a blocking 
 piece of code then it's very strange that the blocking piece is being 
 executed at the exact same time again and again. 

 Any idea why this would be happening? 

 Thanks
 -Soumya









[akka-user] Re: Trying to figure out why all threads for a dispatcher will block at the same time consistently

2014-11-25 Thread Soumya Simanta
I'm using the following Scala and Akka versions. 

scalaVersion := "2.10.3"
val akkaVersion = "2.3.6"



On Tuesday, November 25, 2014 5:39:52 PM UTC-5, Soumya Simanta wrote:

 Looks like the following is the call that blocks. 

 ForkJoinPool.java:2075 sun.misc.Unsafe.park(boolean, long) 



 On Tuesday, November 25, 2014 12:12:14 PM UTC-5, Soumya Simanta wrote:

 Not all threads in my dispatcher are running to 100%. 
 Following is my dispatcher. 

   my-dispatcher{
 type = Dispatcher
 executor = fork-join-executor
 fork-join-executor {
   # Min number of threads to cap factor-based parallelism number to
   parallelism-min = 8
   # Parallelism (threads) ... ceil(available processors * factor)
   parallelism-factor = 2.0
   # Max number of threads to cap factor-based parallelism number to
   parallelism-max = 120
 }
 throughput = 200
   }

 I'm calling my actors using a router. 

   val actor = context.actorOf(Props(new MyActor(pubRedisClient, 
 pubChanName)).withRouter(SmallestMailboxRouter(nrOfInstances = 
 20)).withDispatcher(akka.my-dispatcher), name = my-analyzer-router)

 What I cannot understand is that all threads in my dispatching go into 
 the blocking state at the same time (please see the screenshots). Looks 
 like this is happening consistently. My intuition is if there is a blocking 
 piece of code then it's very strange that the blocking piece is being 
 executed at the exact same time again and again. 

 Any idea why this would be happening? 

 Thanks
 -Soumya









Re: [akka-user] Save 2 snapshots at the same time?

2014-11-24 Thread Soumya Simanta


On Monday, November 24, 2014 3:29:33 AM UTC-5, Konrad Malawski wrote:


 On Mon, Nov 24, 2014 at 5:50 AM, Soumya Simanta soumya@gmail.com wrote:

 Also, doesn't snapshotting every message effectively mean that your 
 snapshot is now your log/journal? 
 Please correct me if that is not correct.


 That's more of a naming thing I'd say, but yes.


So in essence you can just read the latest message from the log/journal and 
there won't be a need to keep a snapshot. Correct?
 



[akka-user] Akka Streams documentation and examples

2014-11-24 Thread Soumya Simanta
Can someone point me to documentation and examples for Akka Streams? 

Thanks
-Soumya

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Akka Streams documentation and examples

2014-11-24 Thread Soumya Simanta
I tried this, but cannot find anything there. 
http://doc.akka.io/docs/akka-stream-and-http-experimental/0.11/scala.html?_ga=1.67564625.1938417986.1393726085



On Tuesday, November 25, 2014 1:09:46 AM UTC-5, Piyush Mishra wrote:

 Visit akka.io

 Piyush Mishra
 *Blog* | *LinkedIn*: https://www.linkedin.com/in/piyush1989

 Skype : piyush.mishra275
 Hangout :piyushmishra889
 Mobile : +91-8860876875

 On Tue, Nov 25, 2014 at 11:38 AM, Soumya Simanta soumya@gmail.com wrote:

 Can someone point me to documentation and examples for Akka Streams? 

 Thanks
 -Soumya







Re: [akka-user] Save 2 snapshots at the same time?

2014-11-23 Thread Soumya Simanta
Also, doesn't snapshotting every message effectively mean that your 
snapshot is now your log/journal? 
Please correct me if that is not the case.

On Sunday, November 23, 2014 1:42:07 PM UTC-5, Konrad Malawski wrote:

 Hello Karthik,
 first things first - if you need to snapshot for every message, this is 
 very fishy / suspicious.
 Snapshots should be used to speed up recovery times, guard against data 
 corruption etc. - not be the core of a design.
 Not looking at the impl, but I'd say it's perfectly reasonable for 
 "last wins" to apply in the scenario you've outlined.

 Is there a good reason you need to snapshot for every message?

 On Fri, Nov 21, 2014 at 8:14 AM, Karthik Chandraraj ckart...@gmail.com wrote:

 Hi,

 In my code, I am saving snapshots frequently (for every message received 
 by the Actor, to achieve durability).
 I was going through the LocalSnapshotStore code and found that snapshots are 
 stored with current_time_millis as the filename. 
 What if my code saves 2 snapshots at the same millisecond? Will I lose the 
 first one?

 Thanks,
 C.Karthik
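To make the concern concrete, here is a minimal runnable sketch. The naming scheme below is a simplified, hypothetical stand-in for illustration; the actual LocalSnapshotStore filename may include more components (such as the persistence id and sequence number) that disambiguate writes.

```scala
// Hypothetical, simplified filename scheme: timestamp-only names.
// If two snapshots are written within the same millisecond, the names
// collide and the second write would overwrite the first.
def snapshotFileName(timestampMillis: Long): String =
  s"snapshot-$timestampMillis"

val t = System.currentTimeMillis()
val first  = snapshotFileName(t)
val second = snapshotFileName(t) // same millisecond => same name
```

Whether this overwrite actually happens depends on how the real store names its files, which is exactly the question being asked here.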





 -- 
 Cheers,
 Konrad 'ktoso' Malawski
 hAkker @ Typesafe
  



[akka-user] Using Akka Persistence with another KV store

2014-11-20 Thread Soumya Simanta
My understanding is that Akka Persistence uses LevelDB as the default journal. I 
want to evaluate another KV store that is similar to LevelDB. How easy/hard is 
it to replace LevelDB with the new KV store? The store is written in C. 
I'm assuming Akka Persistence calls the LevelDB API using a wrapper (JNI?) over 
the native LevelDB interface. Correct? 

Thanks
-Soumya





[akka-user] Re: Using Akka Persistence with another KV store

2014-11-20 Thread Soumya Simanta
Looks like Akka Persistence uses a Java port of LevelDB
(https://github.com/dain/leveldb). 

So now my question is: what is the recommended way of using the new KV store as 
the persistence journal? 

Thanks.
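For reference, a custom journal in the Akka 2.3.x-era Persistence module is wired in through configuration: you implement the journal SPI (`akka.persistence.journal.AsyncWriteJournal` on the Scala side) and point `akka.persistence.journal.plugin` at your config block. A hedged sketch, where the plugin id, class name, and dispatcher are placeholders:

```hocon
# application.conf - hypothetical plugin id and class name
akka.persistence.journal.plugin = "my-kv-journal"

my-kv-journal {
  # Your implementation of the journal SPI, wrapping the native KV store
  class = "com.example.MyKvJournal"
  plugin-dispatcher = "akka.actor.default-dispatcher"
}
```

The implementation class is where the wrapper over the native store (JNI or otherwise) would live.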


On Thursday, November 20, 2014 10:41:04 AM UTC-5, Soumya Simanta wrote:

 My understanding is that Akka Persistence uses LevelDB as the default journal. 
 I want to evaluate another KV store that is similar to LevelDB. How easy/hard 
 is it to replace LevelDB with the new KV store? The store is written in 
 C. I'm assuming Akka Persistence calls the LevelDB API using a wrapper (JNI?) 
 over the native LevelDB interface. Correct? 

 Thanks
 -Soumya







Re: [akka-user] Aeron as Akka's transport layer ?

2014-11-18 Thread Soumya Simanta
Konrad, that's very reassuring to hear. If the claims made by Aeron are 
true, it could be a game changer. 
A super-fast transport layer combined with an efficient 
serialization format could make Akka even more compelling :-) 

It would be really nice to get an idea of the timeline. 

Ideas about how it could be integrated into a current system would also be 
really helpful. 

Thanks again! 

-Soumya




On Tuesday, November 18, 2014 5:34:40 AM UTC-5, Akka Team wrote:

 Definitely, it would be a great transport layer!
 No worries, we are looking at it very closely – now the only remaining 
 question is timelines... ;-)

 -- 
 Konrad `ktoso` Malawski
 Akka Team
 Typesafe - The software stack for applications that scale
 Blog: letitcrash.com
 Twitter: @akkateam
  



[akka-user] Re: How to Monitor akka, improving performance

2014-11-15 Thread Soumya Simanta
How are you saving the record in actor3? 
How are you passing the messages between the actors? I mean, do you have 
any code that waits in the receive of any of the actors? 
Can you see how frequently your GC is invoked and how that changes with the 
size of the message? 



On Thursday, November 13, 2014 12:59:07 AM UTC-5, Gaurav Sharma wrote:

 Hi,
 I have a single-node system with three actors. This node is 
 running on a single port. I'm simply passing data and my message flow is:

 actor1 -> actor2 -> actor3

 Now actor3 is created within actor1. It's a flow where I send a 
 record from actor1 to actor2 and then it comes to actor3, where the record 
 is clubbed and saved. In my current code, I'm not doing anything specific, 
 only passing a string of 5987 bytes 25k times a second.

 But what happens is that my system hangs after a while, and the same runs 
 fine when the string is very small, like demo. I'm inspecting the system 
 using JVisualVM and I noticed a very strange thing today: most of my 
 threads were in a wait state, and socket.accept took most of the time.

 So, I need some help in understanding the image below - 


 https://lh5.googleusercontent.com/-pn9ILkRLSuw/VGRGG3vNTPI/Bo4/DRyTqq9gT9w/s1600/Screenshot%2Bfrom%2B2014-11-13%2B10%3A31%3A26.png

 1. What is ClusterSystem-akka.actor.default.dispatcher-4 (or another 
 number)? I understand that it is related to my cluster (ClusterSystem), but 
 it's not clear what it represents here, as there are multiple entries for it.
 2. In the above image most of the dispatchers are in yellow, so does 
 that mean that all the dispatchers are waiting? But in my code I'm just 
 passing messages from one actor to another, so what are they waiting for?


 Could remoting be a factor here? But I'm using a single actor system 
 for all three actors...







[akka-user] Re: Per-Request Actor

2014-10-31 Thread Soumya Simanta
What kind of requests are you talking about? HTTP, or a message to an Actor's 
receive method? 

On Friday, October 31, 2014 12:25:22 PM UTC-4, rpr...@sprypoint.com wrote:

 I'm just wondering if there's anything in Akka (post request hook?) or 
 Scala pattern matching that would allow me to create a Per-Request Actor 
 type that would call context.stop(self) after handling any message?
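For the plain-actor-message case, one common approach is simply to end the message handler with `context.stop(self)`. Below is a minimal, Akka-free sketch of the "handle exactly one message, then stop" idea, runnable with the Scala standard library only; the `OneShotHandler` class and its methods are invented for illustration, not an Akka API.

```scala
// Stand-in for an actor that stops itself after its first message.
// In real Akka code the body of `receive` would end with context.stop(self).
class OneShotHandler[A](handle: A => Unit) {
  private var stopped = false

  def receive(msg: A): Unit =
    if (!stopped) {
      handle(msg)
      stopped = true // akin to context.stop(self)
    }

  def isStopped: Boolean = stopped
}

var handled = 0
val actor = new OneShotHandler[String](_ => handled += 1)
actor.receive("first")  // handled
actor.receive("second") // ignored: handler already stopped
```

In Akka itself this could be packaged as a trait whose `receive` wraps the concrete handler and stops the actor afterwards, which matches the "per-request actor" pattern being asked about.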




[akka-user] Re: Best way to integrate akka into a legacy/existing app

2014-10-31 Thread Soumya Simanta
I personally feel it's better to isolate your new and legacy applications 
into two separate JVMs and connect them using REST and pub/sub endpoints. 
Of course, this could be a deal breaker if you cannot afford the overhead of 
the REST endpoints. Spray (which is based on Akka) is the ideal choice for 
this, IMO. 

This will not only allow you to do parallel development (once the REST 
endpoints are defined) but also let you move functionality from 
the legacy system to the new system incrementally. It will also ensure 
that you can isolate bugs, failures, and performance issues independently, and 
measure and demonstrate the value of introducing a new technology into your 
existing system. 

One of the best things about Akka/Spray/Play is that they are 
container-less and can be used as libraries inside a system. 

I applied this approach to migrate a system that was written in 
Java/Storm/Jetty to Scala/Akka/Spray/Play incrementally, with great success. 

HTH
-Soumya





On Thursday, October 30, 2014 6:39:36 PM UTC-4, fanf wrote:

 Hello everybody, 

 I want to add Akka to an existing Scala web application (not Play), and 
 I'm wondering how this could best be achieved. The goal for now is not to 
 build a fully distributed and fault-tolerant system, but to introduce 
 Akka at some point in the application and let its use grow, allowing us to 
 set clear bounds in that application and define subsystems. 

 A nice solution would be to have a new Akka application side by 
 side with the legacy one, providing new services or REST endpoints or 
 something. 
 But I prefer not to investigate that solution for now, and instead try to 
 understand how Akka can be incorporated into an existing app. I'm 
 particularly interested in knowing 1/ how Akka boots in that case and 2/ 
 how to deal with the boundary between actors and legacy code. 

 For 1/, http://doc.akka.io/docs/akka/1.3.1/scala/http.html seems to be 
 the correct resource. Are there other examples or documentation on the 
 subject? (I didn't find many other interesting ones; any help would be 
 appreciated.) 

 For 2/, let me start by saying that I have two places where I want to 
 use Akka at first: some kind of batch processes, which are more or less 
 standalone (and so, not very interesting), and repositories, or at least 
 the proxy code between the backend data storage and the remaining code. 
 So, at one point, I imagine having an actor in place of the existing 
 repository service, accepting messages in place of method calls. But I 
 don't see how to do that. 

 I thought it could be linked to 
 http://doc.akka.io/docs/akka/2.3.6/scala/typed-actors.html. Is that 
 correct? If so, are there any resources demoing a more complex 
 integration? Because I'm not sure how to implement my actor subsystem 
 from the typed-actor boundaries with the legacy app. 

 Thanks for any help or pointers to resources! 

 -- 
 Francois ARMAND 
 http://rudder-project.org 
 http://www.normation.com 





[akka-user] Version of akka-quartz scheduler for akkaVersion = 2.3.6

2014-09-17 Thread Soumya Simanta
I'm currently using Akka 2.3.6 and would like to use the Quartz Scheduler. 
https://github.com/typesafehub/akka-quartz-scheduler

The github page says: 

Current Built against:

* Akka 2.0.x
* Akka 2.1.x
* Akka 2.2.x

Is there a version of Quartz-scheduler that works with Akka 2.3.6 ? 




[akka-user] Re: Multiple Futures inside Actor's receive

2014-08-07 Thread Soumya Simanta
Michael, 

Thank you for your response. 
Here is what I'm struggling with. 

In order to use the pipeTo pattern I'll need access to the transaction (tran) and 
the FIRST Future (zf) in the actor I'm piping the Future to, because 
the SECOND Future depends on the value (z) of the FIRST. How can I do that? 

//SECOND Future, depends on result of FIRST Future 
  val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z) 
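One way to keep the dependency between the two calls without nesting callbacks is to chain them with flatMap (or a for-comprehension) on the same transaction, and only then hand the single combined Future off (e.g. via pipeTo). Below is a runnable sketch using plain scala.concurrent Futures; the zcard/zrange functions here are hypothetical stubs standing in for the transactional Redis calls, not the rediscala API.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-ins for the two transactional Redis calls.
def zcard(key: String): Future[Long] = Future(3L)
def zrange(key: String, start: Long, stop: Long): Future[Seq[String]] =
  Future(Seq(s"member-$start", s"member-$stop"))

// The for-comprehension desugars to flatMap, so the SECOND call can use
// the value z produced by the FIRST; no onComplete nesting is needed.
val combined: Future[Seq[String]] =
  for {
    z       <- zcard("dummyzset")            // FIRST Future
    members <- zrange("dummyzset", z - 1, z) // SECOND, depends on z
  } yield members

// In an actor you would pipe `combined` to self instead of blocking;
// Await is used here only to make the sketch self-contained.
val result = Await.result(combined, 2.seconds)
```

The state update then happens when the piped message arrives in receive, back inside the actor's single-threaded illusion, rather than inside a Future callback.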



On Thursday, August 7, 2014 3:51:17 AM UTC-4, Michael Pisula wrote:

 Sorry, still early. Missed the part where you said that you don't want to 
 use pipeTo because of the transaction. Not sure that is a problem at all, 
 though. From what I see, you use the transaction to make sure nothing 
 happens to the values between your zcard and zrange calls; afterwards it's 
 only modification of the internal state. If you just pipe that to a 
 separate actor containing the state, I would expect things to work fine. Or 
 do you want the transaction to ensure that updates to the internal state are 
 synced with the reads from Redis? Then I am not sure that it will work as 
 you implemented it.

 Cheers

 Am Donnerstag, 7. August 2014 09:04:13 UTC+2 schrieb Michael Pisula:

 Instead of mutating state from within the future I would use the pipeTo 
 pattern. Using pipeTo you can send the result of a future to an actor (e.g. 
 to self). There you can safely change state, as you are in 
 single-threaded-illusion-land again...

 HTH

 Cheers,
 Michael

 Am Donnerstag, 7. August 2014 07:25:05 UTC+2 schrieb Soumya Simanta:

 I'm cross posting this here for better coverage. 


 http://stackoverflow.com/questions/25174504/multiple-future-calls-in-an-actors-receive-method


 I'm trying to make two external calls (to a Redis database) inside an 
 Actor's receive method. Both calls return a Future and I need the 
 result of the first Future inside the second. I'm wrapping both calls 
 inside a Redis transaction to avoid anyone else from modifying the value in 
 the database while I'm reading it.

 The internal state of the actor is updated based on the value of the 
 second Future.

 Here is what my current code looks like, which I think is incorrect because I'm 
 updating the internal state of the actor inside a Future.onComplete 
 callback.

 I cannot use the pipeTo pattern because both Futures have to 
 be in a transaction. If I use Await for the first Future then my 
 receive method will *block*. Any idea how to fix this?

 My *second question* is related to how I'm using Futures. Is the usage 
 of Futures below correct? Is there a better way of dealing with 
 multiple Futures in general? Imagine if there were 3 or 4 Futures, each 
 depending on the previous one.

 import akka.actor.{Props, ActorLogging, Actor}
 import akka.util.ByteString
 import redis.RedisClient

 import scala.concurrent.Future
 import scala.util.{Failure, Success}

 object GetSubscriptionsDemo extends App {
   val akkaSystem = akka.actor.ActorSystem("redis-demo")
   val actor = akkaSystem.actorOf(Props(new SimpleRedisActor("localhost",
 "dummyzset")), name = "simpleactor")
   actor ! UpdateState
 }

 case object UpdateState

 class SimpleRedisActor(ip: String, key: String) extends Actor with
 ActorLogging {

   //mutable state that is updated on a periodic basis
   var mutableState: Set[String] = Set.empty

   //required by Future
   implicit val ctx = context.dispatcher

   var rClient = RedisClient(ip)(context.system)

   def receive = {
     case UpdateState => {
       log.info("Start of UpdateState ...")

       val tran = rClient.transaction()

       val zf: Future[Long] = tran.zcard(key)  //FIRST Future
       zf.onComplete {
         case Success(z) => {
           //SECOND Future, depends on result of FIRST Future
           val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z)
           rf.onComplete {
             case Success(x) => {
               //convert ByteString to UTF8 String
               val v = x.map(_.utf8String)
               log.info(s"Updating state with $v")
               //update actor's internal state inside callback for a Future
               //IS THIS CORRECT ?
               mutableState ++ v
             }
             case Failure(e) => {
               log.warning("ZRANGE future failed ...", e)
             }
           }
         }
         case Failure(f) => log.warning("ZCARD future failed ...", f)
       }
       tran.exec()
     }
   }
 }

 It compiles, but when I run it, it gets stuck.

 2014-08-07 ... INFO [redis-demo-akka.actor.default-dispatcher-3] 
 a.e.s.Slf4jLogger - Slf4jLogger started
 2014-08-07 04:38:35.106UTC INFO 
 [redis-demo-akka.actor.default-dispatcher-3] e.c.s.e.a.g.SimpleRedisActor - 
 Start of UpdateState ...
 2014-08-07 04:38:35.134UTC INFO 
 [redis-demo-akka.actor.default-dispatcher-8

 ...




[akka-user] Multiple Futures inside Actor's receive

2014-08-06 Thread Soumya Simanta


I'm cross posting this here for better coverage. 

http://stackoverflow.com/questions/25174504/multiple-future-calls-in-an-actors-receive-method


I'm trying to make two external calls (to a Redis database) inside an 
Actor's receive method. Both calls return a Future and I need the result of 
the first Future inside the second. I'm wrapping both calls inside a Redis 
transaction to avoid anyone else from modifying the value in the database 
while I'm reading it.

The internal state of the actor is updated based on the value of the second 
Future.

Here is what my current code looks like, which I think is incorrect because I'm 
updating the internal state of the actor inside a Future.onComplete 
callback.

I cannot use the pipeTo pattern because both Futures have to be 
in a transaction. If I use Await for the first Future then my receive 
method will *block*. Any idea how to fix this?

My *second question* is related to how I'm using Futures. Is the usage of 
Futures below correct? Is there a better way of dealing with multiple 
Futures in general? Imagine if there were 3 or 4 Futures, each depending on 
the previous one.

import akka.actor.{Props, ActorLogging, Actor}
import akka.util.ByteString
import redis.RedisClient

import scala.concurrent.Future
import scala.util.{Failure, Success}

object GetSubscriptionsDemo extends App {
  val akkaSystem = akka.actor.ActorSystem("redis-demo")
  val actor = akkaSystem.actorOf(Props(new SimpleRedisActor("localhost",
"dummyzset")), name = "simpleactor")
  actor ! UpdateState
}

case object UpdateState

class SimpleRedisActor(ip: String, key: String) extends Actor with ActorLogging 
{

  //mutable state that is updated on a periodic basis
  var mutableState: Set[String] = Set.empty

  //required by Future
  implicit val ctx = context.dispatcher

  var rClient = RedisClient(ip)(context.system)

  def receive = {
    case UpdateState => {
      log.info("Start of UpdateState ...")

      val tran = rClient.transaction()

      val zf: Future[Long] = tran.zcard(key)  //FIRST Future
      zf.onComplete {
        case Success(z) => {
          //SECOND Future, depends on result of FIRST Future
          val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z)
          rf.onComplete {
            case Success(x) => {
              //convert ByteString to UTF8 String
              val v = x.map(_.utf8String)
              log.info(s"Updating state with $v")
              //update actor's internal state inside callback for a Future
              //IS THIS CORRECT ?
              mutableState ++ v
            }
            case Failure(e) => {
              log.warning("ZRANGE future failed ...", e)
            }
          }
        }
        case Failure(f) => log.warning("ZCARD future failed ...", f)
      }
      tran.exec()
    }
  }
}

It compiles, but when I run it, it gets stuck.

2014-08-07 ... INFO [redis-demo-akka.actor.default-dispatcher-3] 
a.e.s.Slf4jLogger - Slf4jLogger started
2014-08-07 04:38:35.106UTC INFO 
[redis-demo-akka.actor.default-dispatcher-3] e.c.s.e.a.g.SimpleRedisActor - 
Start of UpdateState ...
2014-08-07 04:38:35.134UTC INFO 
[redis-demo-akka.actor.default-dispatcher-8] r.a.RedisClientActor - Connect to 
localhost/127.0.0.1:6379
2014-08-07 04:38:35.172UTC INFO 
[redis-demo-akka.actor.default-dispatcher-4] r.a.R



[akka-user] A few simple questions about versions of Akka, Akka Persistence and Cassandra journal driver

2014-08-03 Thread Soumya Simanta
I want to get started with Akka Persistence. 

Do I need to upgrade to Akka 2.4 to be able to use the latest Akka 
Persistence, 

"com.typesafe.akka" %% "akka-persistence-experimental" % "2.4-SNAPSHOT"

OR will it work with Akka 2.3.x as well? 

Also, I want to use a Cassandra journal via 
https://github.com/krasserm/akka-persistence-cassandra

The build.sbt 
https://github.com/krasserm/akka-persistence-cassandra/blob/master/build.sbt 
uses akka-persistence-experimental 2.3.4:

  "com.typesafe.akka" %% "akka-persistence-experimental" % "2.3.4",

Does this mean that the Cassandra driver won't work with the latest version of 
akka-persistence? 

Thanks
-Soumya



[akka-user] Re: Using Akka to access multiple databases

2014-08-03 Thread Soumya Simanta



 It will be our first time integrating a Scala module in Java. We hope to 
 figure it out if it becomes necessary. 


I would be interested to know the results as well. I'm sure many 
others here can tell you how easy/difficult this is. 
 

 I am not able to find this new redis scala package... do you happen to 
 have a link for this release?

https://github.com/Livestream/scredis
NOTE: there are many other Redis drivers on GitHub. I have experience with 
rediscala only. The benchmarks and features of scredis look promising. I 
would investigate/prototype a few before selecting one if you don't have 
the flexibility to change your mind later. 



Re: [akka-user] A few simple questions about versions of Akka, Akka Persistence and Cassandra journal driver

2014-08-03 Thread Soumya Simanta
Thanks Konrad. I'll stick with 2.3.4 for now and upgrade once I have more 
experience and the need arises. 

On Sunday, August 3, 2014 12:06:34 PM UTC-4, Konrad Malawski wrote:

 Hello Soumya,
 If you’re just getting started, stick to the released versions - such as 
 2.3.4 :-)

 Also, persistence is developed and updated as part of the 2.3 release 
 cycle, new features will be available within 2.3 - no need to rush to 2.4 
 just yet :-)


 While we do publish (timestamped) snapshots to repo.akka.io I don’t see 
 why you would need to use it unless specifically testing against some fix 
 introduced in HEAD.

 The cassandra plugin (and anyone who implemented the 2.3.4 API) is binary 
 compatible with 2.3.4 and source compatible with 2.3.x.
 This is only because persistence is an experimental module, and under 
 active development (and adjustments based on community feedback),
 normally we do guarantee 2.3.x modules to be binary compatible within 2.3, 
 but not with 2.4.
 ​


 On Sun, Aug 3, 2014 at 2:18 PM, Soumya Simanta soumya@gmail.com wrote:

 I want to get started with Akka Persistence. 

 Do I need to upgrade to Akka 2.4 to be able to use the latest Akka 
 Persistence, 

 "com.typesafe.akka" %% "akka-persistence-experimental" % "2.4-SNAPSHOT"

 OR will it work with Akka 2.3.x as well? 

 Also, I want to use a Cassandra journal via 
 https://github.com/krasserm/akka-persistence-cassandra

 The build.sbt 
 https://github.com/krasserm/akka-persistence-cassandra/blob/master/build.sbt
 uses akka-persistence-experimental 2.3.4:

   "com.typesafe.akka" %% "akka-persistence-experimental" % "2.3.4",

 Does this mean that the Cassandra driver won't work with the latest version 
 of akka-persistence? 



 Thanks
 -Soumya







 -- 
 Cheers,
 Konrad 'ktoso' Malawski
 hAkker @ Typesafe

 http://typesafe.com
  



[akka-user] Re: Using Akka to access multiple databases

2014-07-31 Thread Soumya Simanta
Please look at this before you decide to roll your own. I've been using it 
for a while and it's stable, async, and works well: 
https://github.com/etaty/rediscala 

there are other open-source Redis drivers for Akka available as well. 
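A minimal rediscala sketch, assuming a Redis server on localhost:6379, an Akka 2.3-era classpath, and the rediscala artifact as a dependency (the key name is made up):

```scala
import akka.actor.ActorSystem
import redis.RedisClient

import scala.concurrent.Await
import scala.concurrent.duration._

object RediscalaDemo extends App {
  // rediscala drives its connections with Akka actors, so it needs an
  // implicit ActorSystem in scope
  implicit val system: ActorSystem = ActorSystem("redis-demo")
  import system.dispatcher

  val redis = RedisClient() // connects to localhost:6379 by default

  // Every command returns a Future; compose them instead of blocking
  val roundTrip = for {
    _     <- redis.set("greeting", "hello")
    value <- redis.get[String]("greeting")
  } yield value

  println(Await.result(roundTrip, 2.seconds)) // blocking only for the demo
  system.shutdown() // Akka 2.3-era shutdown
}
```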

On Wednesday, July 30, 2014 11:48:18 AM UTC-4, Luis Medina wrote:

 Hi everyone,

 My company is planning to use Akka for a new feature that we're working on 
 and we want to run our design by a few set of eyes from the Akka community 
 just to make sure that what we're doing makes sense and also to get some 
 feedback and see if perhaps there are other ways of doing things.

 The new service that we're building will involve accessing 3 different 
 Redis databases to both persist and retrieve data. We can call these 3 
 databases:

 TopicDB - holds topics that are added/removed.
 StatusDB - holds the current status of a topic. Every topic is added to 
 this db for tracking its status. Topics are added/updated but never removed.
 RequestDB - holds a request queue. Every topic generates a request in 
 this db, that can also be removed.

 Basically additions or removals to the TopicDB are driven by an external 
 application which must then direct the updates to the other 2 dbs as 
 appropriate. 

 We're planning on having a setup such that for each db there will be what 
 we call a RedisSyncActor which, as the name implies, will sync up with the 
 particular database that it corresponds to. In order to do this syncing, 
 each of these RedisSyncActors will have at most 2 children, a 
 RedisPersisterActor and a RedisListenerActor which will persist data into 
 Redis and will receive data from Redis respectively.

 https://lh4.googleusercontent.com/-coQA-XEkgJw/U9kTQiTcPMI/AI4/jB4mhcMNehw/s1600/image1.png


 Now, earlier I said at most because in reality, every Redis database will 
 not necessarily make use of both of each children. Based on our 
 requirements, the setup will look like this where the RequestDB is the only 
 one with both a RedisPersisterActor and a RedisListenerActor while the 
 TopicDB only has a RedisListenerActor and the StatusDB only has a 
 RedisPersisterActor.

 https://lh3.googleusercontent.com/-AyDu7Pf7w1Q/U9kTWDp9BBI/AJA/s7aIYLgDmco/s1600/image2.png
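For what it's worth, the SyncActor-with-optional-children layout above might be sketched like this (classic Akka actors; the message type and the wiring are illustrative only, not a complete design):

```scala
import akka.actor.{Actor, ActorRef, Props}

// Hypothetical message carrying one change observed in, or destined for, Redis
case class TopicChange(id: String, removed: Boolean)

// Parent for one Redis DB: creates only the children this DB actually needs
class RedisSyncActor(persisterProps: Option[Props],
                     listenerProps: Option[Props]) extends Actor {

  val persister: Option[ActorRef] =
    persisterProps.map(p => context.actorOf(p, "persister"))
  val listener: Option[ActorRef] =
    listenerProps.map(p => context.actorOf(p, "listener"))

  def receive = {
    case change: TopicChange =>
      // writes go to the persister child when this DB has one;
      // the listener child pushes pub/sub events up on its own
      persister.foreach(_ ! change)
  }
}
```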


 Now, in terms of the data flow and interactions between the actors, this 
 will look something like this:

 https://lh5.googleusercontent.com/-eIJNHIc3EEs/U9kTaxgILsI/AJI/50Z1o6ABHgs/s1600/image3.png


 1. Whenever the TopicDB is changed by adding or removing what we call 
 topics, the TopicRedisListenerActor will pick up on these changes by 
 using the pub/sub feature that Redis provides. 

 2. Once the TopicRedisListenerActor receives these changes it will send 
 them in 2 directions:

 a. Regardless of what type of topic change it received (i.e., a topic that 
 was added, a topic that was removed, etc), the TopicRedisListenerActor will 
 send it back to its TopicRedisSyncActor parent who will in turn send it off 
 to the StatusRedisSyncActor that handles the StatusDB. This 
 StatusRedisSyncActor will then forward the topic changes to its 
 StatusRedisPersisterActor child so that the change can be persisted to the 
 StatusDB.

  

 b. Secondly, if the topic change indicates that a topic was removed from 
 the TopicDB, the TopicRedisListenerActor will send a RemoveTopic message 
 to the TopicRedisSyncActor. This TopicRedisSyncActor will then forward the 
 change to the RequestRedisSyncActor so that the RequestDB can remove the 
 topic as well.


 3. Depending on whether or not StatusDB already knows about the id of the 
 topic changes that it receives, this will cause particular updates to 
 happen in the database. If the topic change reflected the creation of a new 
 topic, then a new entry will be added to the StatusDB and the 
 StatusRedisPersisterActor will send back a reply to its 
 StatusRedisSyncActor parent containing the topic and informing it of this 
 action. 

 4. If the StatusRedisSyncActor receives a notification from its 
 StatusRedisPersisterActor that a new entry was added to the StatusDB then 
 it will send off this topic to ConverterActor which will convert the 
 topic into what we call a request. 

 5. Afterwards the ConverterActor will send this newly converted request 
 off to the RequestRedisSyncActor. This RequestRedisSyncActor will then send 
 the request to its RequestRedisPersisterActor child so that it can be 
 persisted into the RequestDB. 

 6. The RequestRedisSyncActor also has a RequestRedisListenerActor which 
 listens for changes that occur in the RequestDB. In this case, it will pick 
 up the fact that a new request was added to the RequestDB by the 
 RequestRedisPersisterActor in step 5 and it will send if off to other parts 
 of the service for further processing.

 As you can see there is quite a lot going on so we want to be sure we're 
 on the right track. 


 A question that we had is: 

  Are we creating too many Redis-related actors, and is 

[akka-user] Akka persistence with Neo4J or any other graph-centric store

2014-07-26 Thread Soumya Simanta
I'm new to Akka persistence. I like the concept and would like to evaluate 
it for a new application I want to build. 

I've a stream of data coming in. My events will be derived from stream 
elements in a window by time or number of elements. These events are best 
represented as a graph. Any ideas about the best way to store and query 
these events in a graph DB using Akka persistence? Has anyone else had 
experience with a similar use case? Currently I'm looking at Neo4j, but 
I'm open to other stores as well. 
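As a rough illustration, events derived from the stream could be persisted with a PersistentActor backed by whichever journal plugin you pick, with a separate read side projecting them into the graph store. The event type and state below are made up; this sketch targets the Akka 2.3-era experimental persistence API.

```scala
import akka.persistence.PersistentActor

// Illustrative event type: an edge between two nodes in the domain graph
case class EdgeAdded(from: String, to: String)

class GraphEventActor extends PersistentActor {
  def persistenceId = "graph-events"

  var edgeCount = 0 // tiny in-memory view; a real app would build a graph model

  def receiveCommand = {
    case e: EdgeAdded =>
      // the journal plugin (e.g. a Cassandra journal) stores the event;
      // a separate read side could project stored events into Neo4j
      persist(e) { _ => edgeCount += 1 }
  }

  def receiveRecover = {
    case _: EdgeAdded => edgeCount += 1
  }
}
```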

Thanks
-Soumya




[akka-user] Re: actor making long http call

2014-07-19 Thread Soumya Simanta
You can have a parent actor that spawns multiple child actors (in this case 
3). Each of these child actors is responsible for sending requests to the 
external service and waiting for the result. Ideally I would recommend 
using Spray client to handle this. Can the server side of the service 
maintain stream state? (i.e., can it resume in case of an error, network 
disconnect, etc.) 

Now in the parent actor you can have two states (def canHandleMore: 
Receive and def noMoreRequests: Receive) and keep a count of how many child 
actors have been created from the canHandleMore state. Change the state of 
the parent actor with become(noMoreRequests) when you reach your threshold 
(in this case 3). The workers should similarly send a message back to the 
parent when they are done, and the parent can go back with 
become(canHandleMore). You can return a message back to the client from the 
noMoreRequests state.  
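A sketch of that two-state parent, following the become-based pattern described above (hedged: the message names and the rejection reply are made up):

```scala
import akka.actor.{Actor, Props}

case object Request
case object WorkerDone
case object Busy

// Parent that throttles long-running HTTP workers to `maxWorkers` at a time
class ThrottlingParent(workerProps: Props, maxWorkers: Int = 3) extends Actor {

  def receive = canHandleMore(active = 0)

  def canHandleMore(active: Int): Receive = {
    case Request =>
      context.actorOf(workerProps) ! Request // child performs the long call
      val now = active + 1
      context.become(
        if (now >= maxWorkers) noMoreRequests(now) else canHandleMore(now))
    case WorkerDone =>
      context.become(canHandleMore(active - 1))
  }

  def noMoreRequests(active: Int): Receive = {
    case Request    => sender() ! Busy // reject until a worker finishes
    case WorkerDone => context.become(canHandleMore(active - 1))
  }
}
```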




On Friday, July 18, 2014 2:54:45 AM UTC-4, Greg Flanagan wrote:

 I have an actor that makes an http call that can take a long time to 
 complete (i.e. 10 - 30 minutes). I only want to be hitting the service at 
 most 3 at once so I don't want the actors to consume more messages until 
 the current call is finished. I've got it all working great using the work 
 pulling pattern with three worker nodes. My question, or really concern is, 
 is it a good idea to keep an http connection open for so long? are there 
 any implications for doing so? what kind of things should I look out for? 
 Since I'm using NIO for the http call I shouldn't be using up a thread most 
 of the time. I'm used to http calls finishing on the order of milliseconds, 
 not minutes.

 Cheers,
 Greg




[akka-user] Correct way to model a Redis subscriber as Akka actor(s)

2014-03-23 Thread Soumya Simanta

I'm cross-posting it here to get wider coverage. 

http://stackoverflow.com/questions/22569664/correct-way-to-model-a-subscriber-as-an-akka-actor

I'm planning to reengineer an existing system to use Akka, Play and 
Websockets.

My current system is based on Jetty and Websockets.

I've a fast stream of messages that are published into a Redis channel. In 
my web app layer I subscribe to these messages using a Jedis subscriber and 
then push those messages to a Websocket which are then displayed on a 
browser.

I want to make the shift for two primary reasons: a) better and simpler 
fault tolerance due to the use of Actors, and b) the ability to connect to 
multiple streams using different actors.

In my current design I've a supervisor that creates a new child actor for 
every new channel. The child actor then subscribes to a Redis channel. My 
question is: what's the best way to push the messages (received from the 
Redis channel) to a Play Websocket?
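One common shape for this with Play 2.3's actor-based WebSockets is a small forwarder actor: the supervisor's Redis-subscriber child sends each channel message to it, and it pushes to the `out` ActorRef that Play provides (the message type here is hypothetical):

```scala
import akka.actor.{Actor, ActorRef, Props}

// Hypothetical message published by the Redis-subscriber child actor
case class ChannelMessage(text: String)

// Play hands this actor an `out` ActorRef when the socket opens;
// anything sent to `out` is pushed down the WebSocket to the browser
class WebSocketForwarder(out: ActorRef) extends Actor {
  def receive = {
    case ChannelMessage(text) => out ! text
  }
}

object WebSocketForwarder {
  def props(out: ActorRef): Props = Props(new WebSocketForwarder(out))
}
```

In a Play 2.3 controller this would be wired up with something like `WebSocket.acceptWithActor[String, String] { request => out => WebSocketForwarder.props(out) }`, with the subscriber child holding a reference to the forwarder.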
