[akka-user] Why akka choose protobuf internally

2017-03-17 Thread Dai Yinhua
Is there any special consideration behind Akka's choice of protobuf for 
internal serialization/deserialization?

Why use schema-based serialization? I am evaluating serialization 
libraries for messages between Akka actors.
I think schema-based serialization is less convenient than non-schema-based 
serialization like akka-kryo-serialization, which doesn't need any 
additional code but is also fast.

Can you give me some hints? Thank you.


-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Why akka choose protobuf internally

2017-03-17 Thread Akka Team
If you use a tool that automagically derives the protocol from your classes,
it is really hard to deal with wire compatibility, that is, guaranteeing
that old messages can still be deserialized, for example on rolling
upgrades, or when stored (akka-persistence) for a longer period of time.
Both of these aspects are important for all Akka-specific messages. In
addition, the schema allows us to do optimizations we could not do without
it, for example sending no data at all for default cases when possible.

If you have a use case where none of those things matter, Kryo or any
other schema-less serializer can be perfectly fine.
Personally I still prefer explicit protocols over implicit ones, but that
might be a matter of taste.
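For context, which serializer Akka uses for which message type is explicit configuration; a minimal sketch (the custom serializer and application message class names below are hypothetical, not real classes):

```hocon
akka.actor {
  serializers {
    # Ships with Akka: handles protobuf-generated messages
    proto = "akka.remote.serialization.ProtobufSerializer"
    # Hypothetical custom serializer class
    kryo = "com.example.MyKryoSerializer"
  }
  serialization-bindings {
    "com.google.protobuf.Message" = proto
    # Hypothetical application message type
    "com.example.MyAppMessage" = kryo
  }
}
```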

--
Johan
Akka Team



[akka-user] Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Alan Burlison
I'm sure I must be missing something here, because I can't believe path 
matching in Akka HTTP could be broken in the way it seems to be; it would 
be unusable for anything other than toy applications if it were.

I'm composing route handling from a top-level handler and sub-handlers 
like this:


pathPrefix("root") {
  concat(
 pathPrefix("service1") { service1.route },
 pathPrefix("service2") { service2.route }
  )
}

where service1.route etc. returns the sub-route for the associated 
sub-tree. That works fine with a path of, say, "/root/service1", but it 
*also* matches "/rootnotroot/service1", because pathPrefix() just 
matches an arbitrary string prefix rather than a full path segment. And if 
I use path() instead of pathPrefix(), it tries to match the entire 
remaining path. What I'm looking for is something along the lines of 
segment(), which would fully match just the next path segment and leave 
the remaining path to be matched by inner routes, but there doesn't seem 
to be such a thing.


What am I missing?

Thanks,

--
Alan Burlison
--



Re: [akka-user] Re: Cassandra Time Stamp Problem and How Akka can help?

2017-03-17 Thread kant kodali
Do I need to specify how many nodes or shards I want to distribute to? 
Nodes can go up and down, right? Can the Akka cluster discover how many 
nodes are available at any given time? Also, why should I have to manually 
down a node? I know there is a failure detector, so if the Akka cluster 
"thinks" a node is dead, why can't it simply redistribute that region to 
other actors?

Thanks!


On Thursday, March 16, 2017 at 9:46:10 AM UTC-7, Justin du coeur wrote:
>
> Look at it this way -- typically, you're generating events about some 
> *thing*, which corresponds to the key you're using in Cassandra.  That's 
> the "entity" I'm talking about, and typically it would have a single Actor 
> in Akka, to which all events are being sent.  That Actor mediates all the 
> Cassandra reads and writes about that entity without thread contention, so 
> you don't have to worry about race conditions.  If the entity isn't being 
> used continually, you can allow it to passivate (go to sleep and stop using 
> memory) after a timeout, and have it automatically revive (based on the 
> event history) when it is next needed.
>
> And yes, Akka Cluster Sharding is smart about dealing with it when the 
> node dies -- so long as you "down" the node (telling Akka that yes, this 
> node is considered dead), it will move the shard to another node as 
> necessary.  It's a pretty mature system for dealing with this sort of stuff.
>
> I don't have a straightforward example myself (my system uses all of this, 
> but is fairly complex) -- anybody else have a good example to point to?
>
> On Wed, Mar 15, 2017 at 7:29 PM, kant kodali wrote:
>
>> What is each Entity if I may ask? By Entity you mean Actor? If I shard 
>> messages across group of actors or actor systems through some user 
>> specified function and say an actor or actor system(a node) dies then Does 
>> Akka redirect that shard to other actors (more like rebalancing) ? Any 
>> simple example somewhere I can take a look please?
>>
>> Thanks!
>>
>>
>>
>>
>> On Tuesday, March 14, 2017 at 4:52:43 AM UTC-7, kant kodali wrote:
>>>
>>> Hi All,
>>>
>>> I have Kafka as my live streaming source of data (this data isn't 
>>> really events but rather just messages with a state) and I want to insert 
>>> this data into Cassandra, but I have the following problem.
>>>
>>> Cassandra uses Last Write Wins Strategy using timestamps to resolve 
>>> conflicting writes. 
>>>
>>> By default, Cassandra enables server-side timestamps, which are 
>>> monotonic per node. In other words, two nodes can produce the same 
>>> timestamp (although not often). So if there are two writes that go to two 
>>> different coordinator nodes and try to update the same Cassandra partition, 
>>> one write will overwrite the other (we cannot deterministically say which 
>>> one). But from the user's perspective it would look like both writes were 
>>> successful, although we lost the state of one write request (the widely 
>>> known term for this anomaly is "lost updates"). So if one doesn't want 
>>> this to happen, Cassandra recommends using client-side timestamps, but 
>>> we can run into the same problem in the following scenario.
>>>
>>> Client-side Cassandra timestamps are monotonic per client (by client 
>>> I mean a process that uses the Cassandra driver API), so if one has 
>>> multiple processes, each using the Cassandra driver API, they can generate 
>>> the same timestamp (although not often) while trying to update the same 
>>> Cassandra partition, and then we run into the same problem as above. And 
>>> multiple processes talking to Cassandra is very common in the industry. 
>>> In my case these multiple processes will be Kafka consumers which consume 
>>> data from Kafka and insert it into Cassandra. 
>>>
>>> If one of two contending writes fails and the other succeeds, and the 
>>> failed write can automatically be retried using some mechanism in Akka, 
>>> then that would be an acceptable solution, but how do we do that?
>>>
>>> I somehow think there might be a nice reactive pattern using Akka, 
>>> whether it is sharding or something else, that can help me solve this 
>>> problem.
>>>
>>> Disclaimer: I am new to Akka and am putting a lot of effort into learning 
>>> as quickly as possible, so I will be open and thankful for any new ideas 
>>> on how to solve this problem in as scalable a way as possible.
>>>
>>> Thanks,
>>> kant
>>>

[akka-user] Re: Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Alan Burlison

> pathPrefix("root") {

I can bodge around this with:

pathPrefix("^root$".r)

but that's unspeakably vile.

--
Alan Burlison
--



Re: [akka-user] Re: Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Akka Team
Did you read the docs about the various path directives and how they differ?
http://doc.akka.io/docs/akka-http/10.0.4/scala/http/routing-dsl/directives/path-directives/index.html

--
Johan
Akka Team





Re: [akka-user] Re: Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Alan Burlison

On 17/03/2017 12:12, Akka Team wrote:

> Did you read the docs about the various path directives and how they differ?
> http://doc.akka.io/docs/akka-http/10.0.4/scala/http/routing-dsl/directives/path-directives/index.html

Yes, over and over, because as I said I was sure I must be missing 
something.


I actually think it might be a bug. I'm in the middle of trying to 
figure out exactly where, but it looks like the URI handling under the 
DSL splits the URI into segments and then matches the routing directives 
against them in turn. In the case of a string, it looks like it is 
comparing the string to the path segment with startsWith instead of 
equals, so it is checking whether the string is a _prefix_ of the next 
segment rather than the _entirety_ of the next path segment.


If someone wanted to match the next segment against a string prefix, they 
could use an RE, e.g. "foo(.*)".r would match "foobar", "foobaz" etc. and 
extract "bar" and "baz".


Currently a string of "foo" will *also* match "foobar" and "foobaz" but 
won't extract the "bar" or "baz" suffix, which is almost certainly not 
what you'd want.


--
Alan Burlison
--



[akka-user] Re: Akka in OSGi

2017-03-17 Thread Marc Schlegel
The error is caused by the missing package* sun.misc*

After adding the following line to my bndlaunch-configuration akka can start
-runsystempackages: sun.misc

Background: by default, the OSGi-framework (in my case Felix) will only 
load public-api packages from the JRE and "sun.misc" is not part of it.

On Thursday, March 16, 2017 at 10:32:15 AM UTC+1, Marc Schlegel wrote:
>
> Hello everyone
>
> I am trying to setup Akka within an OSGi environment and followed the 
> documentation 
> [1].
>
> Though I first tried to set up an ActorSystem from a 
> Declarative-Service component rather than an Activator, I realized I should 
> start with the documented approach.
> Unfortunately I am running into the following exception during my 
> bundle activation:
>
> ! Failed to start bundle osgi.akka.actorsystem.demo-0.0.0, exception 
> activator error null from: akka.dispatch.AbstractNodeQueue:#181
>
> My demo 
>  [2] 
> is using a Bnd Workspace instead of Maven or SBT but the generated artifact 
> looks just fine. The bundles metadata is coming from a bnd-file 
> 
>  [3], 
> the running instance is configured in a bndrun 
> -file
>  
> [4]. As you might see, those files use the Bundle-SymbolicName to reference 
> dependencies. This works within the scope of the Bnd Workspace, the actual 
> external dependencies are specified in a Maven-Repository-Resolver 
> 
>  
> [5]
>
> Here is a list of the bundles which are currently running in my system:
>
> START LEVEL 1
>ID|State  |Level|Name
> 0|Active |0|System Bundle (5.6.2)|5.6.2
> 1|Active |1|Apache Felix Configuration Admin Service 
> (1.8.14)|1.8.14
> 2|Active |1|Apache Felix Gogo Command (1.0.2)|1.0.2
> 3|Active |1|Apache Felix Gogo Runtime (1.0.2)|1.0.2
> 4|Active |1|Apache Felix Gogo Shell (1.0.0)|1.0.0
> 5|Active |1|Scala Standard Library 
> (2.12.1.v20161205-104509-VFINAL-2787b47)|2.12.1.v20161205-104509-VFINAL-2787b47
> 6|Active |1|org.scala-lang.modules.scala-java8-compat 
> (0.8.0)|0.8.0
> 7|Active |1|com.typesafe.config (1.3.0)|1.3.0
> 8|Active |1|akka-actor (2.4.17)|2.4.17
> 9|Active |1|akka-osgi (2.4.17)|2.4.17
>10|Active |1|Apache Felix Declarative Services (2.0.8)|2.0.8
>11|Resolved   |1|osgi.akka.actorsystem.demo (0.0.0)|0.0.0
> g! 
>
> Can someone advise how to figure out what's going on here? The 
> error message is not telling much.
>
> regards
> Marc
>
> [1] http://doc.akka.io/docs/akka/2.4.17/additional/osgi.html 
> [2] https://github.com/lostiniceland/playground/tree/master/akka-osgi
> [3] 
> https://github.com/lostiniceland/playground/blob/master/akka-osgi/osgi.akka.actorsystem.demo/bnd.bnd
> [4] 
> https://github.com/lostiniceland/playground/blob/master/akka-osgi/osgi.akka.actorsystem.demo/launch.bndrun
> [5] 
> https://github.com/lostiniceland/playground/blob/master/akka-osgi/cnf/central.xml
>



Re: [akka-user] Re: Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Alan Burlison
> I actually think it might be a bug - I'm in the middle of trying to figure
> out exactly where

PathMatcher.scala, line 145:

def apply[L: Tuple](prefix: Path, extractions: L): PathMatcher[L] =
  if (prefix.isEmpty) provide(extractions)
  else new PathMatcher[L] {
    def apply(path: Path) =
      if (path startsWith prefix)
        Matched(path dropChars prefix.charCount, extractions)(ev)
      else Unmatched
  }

I believe "startsWith" should be "==", otherwise it is matching the
prefix of a segment which is in turn the prefix of a path, not a
segment which is a prefix of the path.
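To illustrate the distinction, here is a self-contained plain-Scala sketch, not the actual akka-http code, contrasting the two behaviours (the function names are made up for illustration):

```scala
// Split a path into its non-empty segments.
def firstSegment(path: String): Option[String] =
  path.split("/").find(_.nonEmpty)

// startsWith-style matching: the literal may match only a character
// prefix of the first segment, so "root" matches "rootnotroot".
def prefixMatches(literal: String, path: String): Boolean =
  firstSegment(path).exists(_.startsWith(literal))

// equals-style matching: the literal must equal the entire first
// segment, so "root" matches "root" but not "rootnotroot".
def segmentMatches(literal: String, path: String): Boolean =
  firstSegment(path).contains(literal)
```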

From looking at the examples of how the pathPrefix directive is used,
it commonly takes a series of path matchers with "/" separators, where
segments that need to be matched with no extractions or conditionals
are represented by strings. However, strings currently do *not* match
entire path segments; they match *prefixes* of path segments, and there
appears to be no way to do exact matching of fixed path segments other
than the regexp hack I outlined in my previous email. You could, I
suppose, say that the current behaviour is as intended, in which case
I'd suggest it is surprising and unintuitive: if I put a literal
"foo" I expect it to match "foo" and not "foobar". It's even more
surprising because there is a mechanism (REs) that's explicitly for
matching "foobar", "foobaz" and extracting the variable part of the
segment, which you'd almost certainly want to do anyway for subsequent
routing logic.

If I have a match against "foo" followed by a slash, followed by "bar", 
and I provide it with a path of "foofoo/bar" and try to synthesise a 
useful error message from the failure with extractUnmatchedPath, I get 
the string "foo/bar", which is confusingly close to the real path of 
"/foo/bar". I can imagine the WTF? complaints from users that will 
cause...

Even if this *is* working as intended and won't be fixed (which I
personally believe would be a bad choice), would it be possible to add
a matcher DSL item that specifically matched a complete path segment?
There's already 'Segment', could that be extended to take a segment to
fully match against, e.g. 'Segment("foo")' ?

-- 
Alan Burlison
--



Re: [akka-user] Why akka choose protobuf internally

2017-03-17 Thread Justin du coeur
I have to take issue with "really hard" -- I'm using Kryo for my Akka
Persistence, and it's working well.  It's totally possible to handle schema
evolution with it, and it is *not* rocket science.

That said, I'll agree that it isn't trivial by any means: I put a
significant amount of effort into getting it all working.  I have a
whitepaper in progress that details all the considerations and how I
addressed them; I'll talk it up a bit once I have it finished.  (And I
might wind up releasing a tiny library for the annotations and other
details I developed around it.)

Overall, the summary is that, yes, Kryo *is* more effort than Protobuf, and
there's a distinct difference of viewpoint: you have to be annotating the
serialization details *somewhere*, and in the case of Kryo it requires
annotating your serialized classes to a significant degree.  Personally, I
prefer that approach, but there is plenty of room for honest disagreement
here.

(And specifically to Dai, note that Akka Persistence absolutely requires
thinking in a "schema"-ish way -- even when using Kryo, you have to be more
explicit about the schema details, although not in external config files.)




Re: [akka-user] Re: Cassandra Time Stamp Problem and How Akka can help?

2017-03-17 Thread Justin du coeur
On Fri, Mar 17, 2017 at 7:29 AM, kant kodali  wrote:

> Do I need to specify how many nodes or shards I want to distribute to ?
>

Not by number, but IIRC you can assign particular roles to nodes, and have
those roles determine what sorts of things get distributed to those nodes.

Note that you *do* implicitly have to pre-decide how many shards to break a
given concept down into.  But a given node typically hosts a number of
shards, and that rebalances dynamically.
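As a sketch of that up-front decision, a typical shard-id extraction function for classic Cluster Sharding hashes the entity id into a fixed shard space; the helper name and shard count here are illustrative assumptions, not Akka API:

```scala
// The shard count is fixed up front, while the nodes hosting the
// shards come and go; shards rebalance across available nodes.
val numberOfShards = 100 // hypothetical value, chosen up front

// Map an entity id deterministically onto one of the shards.
// floorMod keeps the result non-negative even for negative hashCodes.
def extractShardId(entityId: String): String =
  java.lang.Math.floorMod(entityId.hashCode, numberOfShards).toString
```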


> Node can go up and down right.. Can the Akka cluster discover how many
> nodes are available at any given time?
>

That's essentially what clustering does, yes.  There's an underlying gossip
mechanism, so that all nodes have a rough idea of all of the others at any
given time.


> Also, Why should I manually down the node? I know that there is a failure
> detector so if the Akka cluster "thinks" a node is dead then why cant it
> simply distribute that region to other actors?
>

Problem is, there's a lot of judgement call involved in deciding whether a
node is just temporarily unavailable due to a network failure or is
actually down.  Getting this wrong has *serious* consequences, and can lead
to data corruption.  Akka per se doesn't make that decision, although
Lightbend does sell a product named Split Brain Resolver that provides a
fairly sophisticated algorithm to make the decision.

Basically, Akka doesn't say whether it thinks a node is *down*, it just
knows that it is temporarily unavailable.  You have to decide when that
actually means "down".
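For completeness, classic Akka Cluster does ship a crude automatic downing knob, which the documentation itself warns against for production precisely because of this split-brain risk; a config sketch:

```hocon
akka.cluster {
  # Automatically mark unreachable nodes as down after this timeout.
  # Dangerous under network partitions: both sides may down each other
  # and carry on as two separate clusters (split brain). Default: "off".
  auto-down-unreachable-after = 120s
}
```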



Re: [akka-user] Re: Akka in OSGi

2017-03-17 Thread Robert Wills
Just curious, but is there any particular reason you need to use OSGi, and
in particular OSGi with Akka?  Is it mandated by your employer?  As you
can probably tell, there doesn't seem to be much support for OSGi+Akka in
this forum.



Re: [akka-user] Re: Akka in OSGi

2017-03-17 Thread Marc Schlegel
We are running OSGi within our IBM CTG (mainframe thingy).

This is basically a proof of concept to use one of the reactive frameworks 
(Akka, Vert.x, RxJava) within OSGi for workload distribution... and because 
I like the OSGi programming model... and versioning... and module isolation 
:-)

I might be able to contribute some documentation later, since I've already 
replaced the Activator approach with a Declarative-Service ActorSystem which 
can be configured while the container is running (one of the many goodies 
of OSGi).

regards
Marc

Am Freitag, 17. März 2017 16:05:42 UTC+1 schrieb Rob Wills:
>
> Just curious, but is there any particular reason you need to use osgi and 
> in particular osgi with Akka?   Is it mandated by your employer?  As you 
> can probably tell there doesn't seem to be much support for osgi+akka in 
> this forum.
>
> On Fri, Mar 17, 2017 at 1:23 PM, Marc Schlegel  > wrote:
>
>> The error is caused by the missing package* sun.misc*
>>
>> After adding the following line to my bndlaunch-configuration akka can 
>> start
>> -runsystempackages: sun.misc
>>
>> Background: by default, the OSGi-framework (in my case Felix) will only 
>> load public-api packages from the JRE and "sun.misc" is not part of it.
>>
>> Am Donnerstag, 16. März 2017 10:32:15 UTC+1 schrieb Marc Schlegel:
>>>
>>> Hello everyone
>>>
>>> I am trying to setup Akka within an OSGi environment and followed the 
>>> documentation 
>>> [1].
>>>
>>> Though, first I was trying to setup a ActorSystem from a 
>>> Declarative-Service component rather than an Activator, I realized I should 
>>> first start with the documented approach.
>>> Unfortunately I am running in following Exception during my 
>>> bundle-activation
>>>
>>> ! Failed to start bundle osgi.akka.actorsystem.demo-0.0.0, exception 
>>> activator error null from: akka.dispatch.AbstractNodeQueue:#181
>>>
>>> My demo 
>>>  [2] 
>>> is using a Bnd Workspace instead of Maven or SBT but the generated artifact 
>>> looks just fine. The bundle's metadata is coming from a bnd-file 
>>> 
>>>  [3], 
>>> the running instance is configured in a bndrun 
>>> -file
>>>  
>>> [4]. As you might see, those files use the Bundle-SymbolicName to reference 
>>> dependencies. This works within the scope of the Bnd Workspace, the actual 
>>> external dependencies are specified in a Maven-Repository-Resolver 
>>> 
>>>  
>>> [5]
>>>
>>> Here is a list of the bundles which are currently running in my system:
>>>
>>> START LEVEL 1
>>>ID|State  |Level|Name
>>> 0|Active |0|System Bundle (5.6.2)|5.6.2
>>> 1|Active |1|Apache Felix Configuration Admin Service 
>>> (1.8.14)|1.8.14
>>> 2|Active |1|Apache Felix Gogo Command (1.0.2)|1.0.2
>>> 3|Active |1|Apache Felix Gogo Runtime (1.0.2)|1.0.2
>>> 4|Active |1|Apache Felix Gogo Shell (1.0.0)|1.0.0
>>> 5|Active |1|Scala Standard Library 
>>> (2.12.1.v20161205-104509-VFINAL-2787b47)|2.12.1.v20161205-104509-VFINAL-2787b47
>>> 6|Active |1|org.scala-lang.modules.scala-java8-compat 
>>> (0.8.0)|0.8.0
>>> 7|Active |1|com.typesafe.config (1.3.0)|1.3.0
>>> 8|Active |1|akka-actor (2.4.17)|2.4.17
>>> 9|Active |1|akka-osgi (2.4.17)|2.4.17
>>>10|Active |1|Apache Felix Declarative Services (2.0.8)|2.0.8
>>>11|Resolved   |1|osgi.akka.actorsystem.demo (0.0.0)|0.0.0
>>> g! 
>>>
>>> Can someone advise how to figure out what's going on here? The 
>>> error message is not telling much.
>>>
>>> regards
>>> Marc
>>>
>>> [1] http://doc.akka.io/docs/akka/2.4.17/additional/osgi.html 
>>> [2] https://github.com/lostiniceland/playground/tree/master/akka-osgi
>>> [3] 
>>> https://github.com/lostiniceland/playground/blob/master/akka-osgi/osgi.akka.actorsystem.demo/bnd.bnd
>>> [4] 
>>> https://github.com/lostiniceland/playground/blob/master/akka-osgi/osgi.akka.actorsystem.demo/launch.bndrun
>>> [5] 
>>> https://github.com/lostiniceland/playground/blob/master/akka-osgi/cnf/central.xml
>>>

[akka-user] ANNOUNCE: Akka HTTP 10.0.5 Released!

2017-03-17 Thread Konrad Malawski
Dear hakkers,

we — the Akka HTTP committers — are happy to announce Akka HTTP 10.0.5,
which is the fifth maintenance release of the Akka HTTP 10.0 series. It is
primarily aimed at stability, aligning the internals with the upcoming Akka
2.5 release. These steps are also the groundwork to enable Play to make use
of Akka HTTP and the new Akka Streams materializer in the upcoming Play 2.6.

The Scala 2.11 version is already on Maven Central and the 2.12 version
should appear shortly, thanks for your patience.
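For reference, a minimal sbt dependency bump for this release might look like the following (module list assumed; add extras such as akka-http-testkit as needed):

```scala
// build.sbt - pick up the 10.0.5 release (artifact names per the Akka HTTP modules)
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-http-core" % "10.0.5",
  "com.typesafe.akka" %% "akka-http"      % "10.0.5"
)
```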
List of Changes

Improvements:

AKKA-HTTP-CORE

   - New docs and API for registering custom headers with JavaDSL (#761)
   - Ssl-config upgraded to 0.2.2, allows disabling/changing hostname
   verification (#943)
   - Don't depend on Akka internal APIs, become compatible with Akka 2.5
   (#877)
   - Make default exception handler logging more informative (#887)

AKKA-HTTP

   - Unmarshal.to now uses the materializer ExecutionContext if none is
   provided implicitly (#947)

Bug fixes:

AKKA-HTTP-CORE

   - Prevent longer-than-needed lingering streams by fixing
   DelayCancellationStage (#945)

AKKA-HTTP

   - Avoid redirect-loop when redirectToNoTrailingSlashIfPresent was used
   for the root path (#878)

Compatibility notes

This version of Akka HTTP must be used with Akka version at least 2.4.17;
however, it is also compatible with Akka 2.5, which has just released its
Release Candidate 1.

Akka HTTP 10.0.x will remain compatible with Akka 2.4.x and Akka 2.5.x
during its lifetime, yet it may require the respective latest version of
Akka itself. Since Akka guarantees binary compatibility in such versions,
it is trivial to upgrade both projects in case you need to do so.

We encourage you to try out Akka 2.5 release candidate with this version,
as the new completely redesigned materializer in Akka Streams can result in
various performance and memory usage improvements across the board.

Akka HTTP 10.0.x is backwards binary compatible with previous 10.0.x
releases and Akka 2.4.x. This means that the new JARs are a drop-in
replacement for the old ones (but not the other way around) as long as your
build does not enable the inliner (a Scala-only restriction). It should be
noted that Scala 2.12.x is not binary compatible with Scala 2.11.x.
Credits

A total of 23 issues were closed since 10.0.4.

The complete list of closed issues can be found on the 10.0.5 milestone on
GitHub.

For this release we had the help of 13 contributors – thank you all very
much!

commits  added  removed
 18591  256 Johannes Rudolph
 12 52   56 Jonas Fonseca
  45065 Josep Prat
  2 46   28 Aurélien Thieriot
  2 146 Sergey Shishkin
  1118   27 Roman Tkalenko
  1 36   17 btomala
  1 144 Jakub Kozłowski
  1  21 Desmond Yeung
  1  11 WANG GAOXIANG (Eric)
  1  20 John Zhang
  1  11 Vova Molin
  1  11 Konrad `ktoso` Malawski

Happy hakking!

– The Akka Team

-- 
Cheers,
Konrad 'ktoso' Malawski
Akka  @ Lightbend 



Re: [akka-user] Re: Akka in OSGi

2017-03-17 Thread Robert Wills
I've had to use OSGI a few times in the past, and it always struck me as
being more trouble than it was worth.  Akka on its own provides far simpler
tools for workload distribution if that's what you're after.  As for
versioning I've never understood why that's such a problem -- even more so
these days as people generally try to keep their services small.



On Fri, Mar 17, 2017 at 3:32 PM, Marc Schlegel  wrote:


[akka-user] Stream stopped silently, using mapAsyncUnordered()

2017-03-17 Thread Guofeng Zhang
Hi,

Here is my code:

Source missedProductSource = Source.from(missedProducts);
final RunnableGraph missedTo =
    missedProductSource.mapAsyncUnordered(5, p -> {
        try {
            return stopProductDisplay(p.productNo, p.id);
        } catch (Throwable t) {
            Logger.error(t.getMessage(), t);
            return CompletableFuture.completedFuture((Void) null);
        }
    }).to(Sink.ignore());
missedTo.run(materializer);

Where stopProductDisplay's return type is CompletionStage<Void>.

The size of missedProducts is more than 30,000, but the stream only
processed 5 elements and then stopped. No error was logged.

I cannot figure out the reason.

What is wrong with the above code, or how to debug the issue?

Thanks for your help.

Guofeng



[akka-user] Re: How akka cluster become network partition

2017-03-17 Thread kant kodali
Why does one need to take the Split Brain Resolver approach when there 
is something better? The problem of network partitions is well understood 
in most distributed NoSQL databases. For example, Cassandra continues to 
operate with the nodes that are available/reachable/up & running during 
network partitions, and when the network partition is resolved (typically by 
fixing the cable or hardware), the most up-to-date nodes will just stream 
data to the stale ones. 

 


On Thursday, March 16, 2017 at 4:08:07 AM UTC-7, Dai Yinhua wrote:
>
> Hi team,
>
> I am aware that akka cluster may be partitioned to 2 clusters with 
> auto-downing.
> But I can't understand how does it happen?
> If node A is performing a long GC, it then becomes unreachable and, 
> after a while, is marked as down. Why is the cluster partitioned in 
> this case?
>
> Can you help to explain more clearly on that? 
>
> Thank you.
>



Re: [akka-user] Stream stopped silently, using mapAsyncUnordered()

2017-03-17 Thread Patrik Nordwall
Could it be the null (Void)? Try some other element type there.
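A sketch of that suggestion in Scala terms: complete the stage with akka.Done instead of a null Void, and hook the stream's completion future so failures surface. stopProductDisplay, missedProducts and log are assumed from the original post; the implicit materializer and execution context are omitted. (Reactive Streams forbids null elements, so a future completed with null can fail the stream.)

```scala
import akka.Done
import akka.stream.scaladsl.{Sink, Source}
import scala.util.{Failure, Success}

// Never complete the future with null - map the result to Done instead,
// and attach a completion callback so a failure is not silent.
Source(missedProducts)
  .mapAsyncUnordered(5) { p =>
    stopProductDisplay(p.productNo, p.id).map(_ => Done)
  }
  .runWith(Sink.ignore)
  .onComplete {
    case Success(_) => log.info("stream completed")
    case Failure(t) => log.error(t, "stream failed") // surfaces a silent stop
  }
```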
On Fri 17 Mar 2017 at 19:00, Guofeng Zhang wrote:




Re: [akka-user] Re: How akka cluster become network partition

2017-03-17 Thread Patrik Nordwall
You could leave it running without downing, but you must still have a
strategy for downing/removing crashed or stopped nodes. Also note that a
crash, cpu overload or a long GC are indistinguishable from a network
partition.
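For illustration, the relevant knob is plain configuration. A sketch of the two common choices (values are examples, not recommendations):

```hocon
# Option 1 (discouraged in production): time-based auto-downing
akka.cluster.auto-down-unreachable-after = 120s

# Option 2: leave it off (the default) and down nodes deliberately,
# e.g. via Cluster(system).down(address) or a split-brain-resolver strategy
akka.cluster.auto-down-unreachable-after = off
```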

/Patrik
On Fri 17 Mar 2017 at 19:24, kant kodali wrote:




[akka-user] Idiomatic way to use akka stream Source within Spark

2017-03-17 Thread Kyrylo Stokoz

Hi All,

I'm trying to figure out how one should use Sources within Spark jobs 
while keeping all the benefits of Spark.
Consider following snippet:

val items: Seq[String] = Seq("a", "b", "c")
sparkSession.sparkContext.parallelize(items, 10).flatMap { item =>
  val subItems: Source[String, _] = f(item)
  // (1)
  ???
}.map { subItem =>
  f1(subItem)
}.reduce(_ + _)


There is this project https://github.com/lloydmeta/sparkka-streams which 
tries to bridge Akka Streams and Spark Streaming.
I also created an IteratorSinkStage which materializes into an Iterator, 
similar to how InputStreamSinkStage is materialized into an InputStream, but 
I'm not sure if this is the best solution for this problem.

What would be the best way to work with akka Source within Spark 
environment?
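One straightforward (if blunt) option, sketched below: materialize each Source to a strict collection inside the Spark task, with a single lazily initialized ActorSystem per executor JVM. f, items, f1 and sparkSession are from the snippet above; StreamHolder is a hypothetical helper, and the timeout is arbitrary.

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Await
import scala.concurrent.duration._

// Hypothetical per-executor singleton: lazy vals give one ActorSystem
// and materializer per JVM, created on first use inside a task.
object StreamHolder {
  lazy val system: ActorSystem = ActorSystem("spark-executor-streams")
  lazy val materializer: ActorMaterializer = ActorMaterializer()(system)
}

sparkSession.sparkContext.parallelize(items, 10).flatMap { item =>
  val subItems: Source[String, _] = f(item)
  // Drain the Source into a strict Seq; blocking here is tolerable
  // because Spark tasks are synchronous anyway.
  Await.result(subItems.runWith(Sink.seq)(StreamHolder.materializer), 10.minutes)
}.map(subItem => f1(subItem)).reduce(_ + _)
```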

Regards,
Kyrylo



 




Re: [akka-user] Re: How akka cluster become network partition

2017-03-17 Thread Justin du coeur
Right, but that's an intentionally duplicative environment.  In most Akka
systems, you want to have at *most* one Actor for any given entity.  In
particular, if you are using Cluster Sharding and Akka Persistence,
duplication is a recipe for database corruption.

One might well be able to build a more redundant system on top of Akka,
which would be partition-tolerant in the way you're describing.  (In
particular, I could imagine something CRDT-based that wouldn't care so
much.)  But that's higher-level than Akka itself generally focuses.

And even there, you want to have a well-defined concept of when a node has
left the cluster permanently.  (Remember that most Akka architectures are
designed to be tolerant of scaling the cluster up and down based on load,
relatively dynamically.)

On Fri, Mar 17, 2017 at 2:24 PM, kant kodali  wrote:




[akka-user] Re: Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Daniel Stoner
I think perhaps you're making the same mistake I did when I first picked 
up Akka HTTP, coming from a background with Jersey.
What I wanted to do - was nest paths inside of each other thinking:
path("v1") {
path("orders") {whatever}
path("customers") {whatever}
}

Would be the perfect syntax for setting up the 2 paths:
/v1/orders/...
/v1/customers/...
In a really easy-to-comprehend and sensible manner, e.g. each sub-path 
prefix only matching an individual segment, as you're describing.

In practice it appears that it works more as though each nested path does 
not operate on the remaining unmatched path - but actually the whole url. 
In a sense the above code is saying match paths that both look like: START 
- /v1/ - END AND START - /orders/ - END. Obviously an impossible situation.

So what we do is simply enumerate the paths we want to match (rather 
than nest them) and use the PathMatcher DSL to avoid any real 
overhead in the code. In the end it's just as readable (if not a 
little more so, since you're enumerating your matching 
possibilities explicitly rather than nesting them).
http://doc.akka.io/docs/akka-http/10.0.4/scala/http/routing-dsl/path-matchers.html#pathmatcher-dsl

What does this look like? (In Java)

public static final String PATH_ORDERS = "orders";
public static final String PATH_VERSION = "v2";
private static final PathMatcher1 PATH_PARAM_UUID = 
PathMatchers.uuidSegment();


route(
path(
 PathMatchers
 .segment(PATH_VERSION)
 .slash(PATH_ORDERS),
 () -> put(...-> createOrder())
),
path(
 PathMatchers
 .segment(PATH_VERSION)
 .slash(PATH_ORDERS)
 .slash(PATH_PARAM_UUID),
 orderId -> route(
 post(...->changeOrderStatus(orderId),
 get(...->getOrder(orderId)
 )
))



The code above is representative only, and in an older version of Akka HTTP, 
but much the same as current. The ...-> syntax is used to highlight what 
methods like getOrder are doing (they are just returning Routes which 
further narrow down by GET/POST/similar). Sure, we could have made it a lot 
nicer by defining and reusing:

PathMatcher0 VERSION = PathMatchers.segment(PATH_VERSION)


One practice we found that worked: do the splitting by HTTP method 
(GET/POST/PUT) down at the lowest level. Often you have a URL like 
/v1/orders that accepts GET, PUT and POST, and then you can nest all 3 
options under one path matcher.

Can you do a fully nested style like v1/ { orders, customers }? Probably 
using the wildcards like you suggest, but I could never get it to work like 
that (I tried a lot) and I'm not sure if it's designed for that use case. 
Frankly, I kind of ended up preferring the enumerated style and found it 
easier to manage than Jersey's nested API styling.

If you check out some of the Akka seed projects using Akka HTTP they 
probably show better examples in your language of preference too :)
http://www.lightbend.com/community/core-tools/activator-and-sbt

Activator is amazing for finding fully fledged examples of how to use 
things.

On Friday, 17 March 2017 11:25:09 UTC, Alan Burlison wrote:
>
> I'm sure I must be missing something here because I can't believe path 
> matching in Akka HTTP could be broken in the way it seems to be, because 
> it would be unusable for anything other than toy applications if it was: 
>
> I'm composing route handing from a top-level handler and sub-handlers 
> like this: 
>
> pathPrefix("root") { 
>concat( 
>   pathPrefix("service1") { service1.route }, 
>   pathPrefix("service2") { service2.route } 
>) 
> } 
>
> where service1.route etc returns the sub-route for the associated 
> sub-tree. That works fine with a path of say "/root/service1", but it 
> *also* matches "/rootnotroot/service1", because pathPrefix() just 
> matches any arbitrary string prefix and not a full path segment. And if 
> I use path() instead of pathPrefix() it tries to match the entire 
> remaining path. What I'm looking for is something along the lines of 
> segment() where that fully matches just the next path segment and leaves 
> the remaining path to be matched by inner routes, but there doesn't seem 
> to be such a thing. 
>
> What am I missing? 
>
> Thanks, 
>
> -- 
> Alan Burlison 
> -- 
>


Re: [akka-user] Re: Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Alan Burlison

On 17/03/2017 21:25, Daniel Stoner wrote:


What I wanted to do - was nest paths inside of each other thinking:
path("v1") {
path("orders") {whatever}
path("customers") {whatever}
}


Yes, exactly so.


In practice it appears that it works more as though each nested path does
not operate on the remaining unmatched path - but actually the whole url.
In a sense the above code is saying match paths that both look like: START
- /v1/ - END AND START - /orders/ - END. Obviously an impossible situation.


I believe that's why you need to use "~" or concat, so the alternatives 
are tried in order.



So what we do is simply enumerate the paths we want to match for (rather
than nest them) and use PathMatcher DSL to enable us to avoid any real
overhead to the code for this. In the end it's just as readable (If not a
little bit more so since your enumerating explicitly your matching
possibilities rather than nesting them).


The problem is that I have hundreds of paths to match; listing each 
individual path in full in a linear fashion is just unworkable, so I need 
to use a tree.
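For what it's worth, a tree can be expressed with nested pathPrefix directives, since pathPrefix (unlike path) consumes what it matched and hands only the remaining path to its inner routes. A sketch against the Akka HTTP 10.0.x Scala DSL, with placeholder route bodies; the raw-prefix caveat discussed above still applies to each individual string matcher:

```scala
import akka.http.scaladsl.server.Directives._

// Each pathPrefix strips its matched segment before the inner routes
// see the remaining path, so the tree composes level by level.
val route =
  pathPrefix("v1") {
    concat(
      pathPrefix("orders") {
        concat(
          pathEnd { get { complete("all orders") } },
          path(JavaUUID) { id => get { complete(s"order $id") } }
        )
      },
      pathPrefix("customers") {
        pathEnd { get { complete("all customers") } }
      }
    )
  }
```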



One practices we found that worked - do the splitting by HTTP method
(GET/POST/PUT) down at the lowest level. Often you have a URL like
/v1/orders that accepts GET, PUT and POST and then you can nest all 3
options under one path matcher.


Yes, that's what I've done.

I really believe the current behaviour is the wrong choice, although 
I've found hints in the documentation that it is deliberate - I have no 
idea why it was considered to be a good design, if indeed it was.


I'm trying to figure out how to write my own implementation of 
Segment("foo") to allow complete matching of a path segment, but it's 
not easy as the DSL implementation is pretty "dense" code.


--
Alan Burlison
--



[akka-user] Re: Akka HTTP path matching fundamentally broken? Surely not?

2017-03-17 Thread Daniel Stoner
I guess you have to remember that Akka HTTP originates from Spray, and so 
those choices were likely already made. (I'm sure there is a fully 
plausible performance/threading reason that is beyond me too, hehe.)

Well, I know how I'd do nesting in Java at least, if it's any help.

Implementing a custom directive is easy! And the RequestContext which is 
passed into every layer of your route (presumably it's an implicit value in 
Scala?) can be extended and passed around with the additional info you 
might need, such as a slowly building list of string segment paths. You can 
then signal that you're at a leaf node of your tree by calling end or the 
like, and return a single Path with the full linear evaluation of that 
point in the tree recursion.

So how I'd do it is implement my own directive called Segment, maybe a 
little like this:

public abstract class SegmentDirectives extends AllDirectives {

    public Route segment(String segment, Route inner) {
        return this.mapRequestContext((innerCtx) -> {
            // You could put some logic here - you control the RequestContext
            // which gets passed to the child, and hence whether the lower-level
            // routes get invoked or not, based on what the full path is.
            return new RequestContext(new MySuperNewRequestContextWhichObviouslyImplementsRequestContextInterface(
                    innerCtx.delegate(), segment));
        });
    }
}



Your new RequestContext impl could then just keep a List of segments that it 
made available as a PathMatcher when you chose to 'finish' your tree, like:

public Route endSegment(Route innerRoute) {
    return this.mapRequestContext(innerCtx -> {
        if (innerCtx instanceof MySuperNewRequest...) {
            return path(((MySuperNewRequest...) innerCtx).getPathMatcherForBuiltPaths, innerRoute);
        }
    });
}

Then simply do a route a little like:

segment("v1", concat(
    segment("orders", endSegment(whatever)),
    segment("customers", endSegment(whatever))))
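The list-of-segments bookkeeping described above can be sketched without any Akka types. SegmentTrail is a hypothetical name for illustration, standing in for the state the custom RequestContext would carry as segment(...) calls nest:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not an Akka API: the immutable list-of-segments
// bookkeeping a custom RequestContext would carry as segment(...) calls nest.
public final class SegmentTrail {
    private final List<String> segments;

    public SegmentTrail() {
        this(List.of());
    }

    private SegmentTrail(List<String> segments) {
        this.segments = segments;
    }

    // Returns a new trail with one more matched segment appended,
    // mirroring how a fresh RequestContext is handed to each child route.
    public SegmentTrail descend(String segment) {
        List<String> next = new ArrayList<>(segments);
        next.add(segment);
        return new SegmentTrail(List.copyOf(next));
    }

    // The full linear path this point in the tree recursion represents,
    // i.e. what endSegment(...) would turn into a single matcher.
    public String fullPath() {
        return "/" + String.join("/", segments);
    }
}
```

At a leaf, the trail built by the nested segment(...) calls above would yield e.g. "/v1/orders" from fullPath().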


Well, obviously the 'this is how I'd do it' bit is a lie; I wouldn't do it.
I'd probably ask myself why I had hundreds of APIs and wanted to list them
all in one mega file using a nested tree that presumably is either going to
flip-flop all over the classes in the project, or move further and further
to the right of the screen as it becomes deeper.

I know the reality of software development is that you generally get stuck
with tough situations like that from historical decisions, so fair enough
if it's really required. At the end of the day, though, even writing 300
linear APIs reusing PathMatcher variables preconfigured for most of the
common situations can end up about the same amount of code as the nested
equivalent. I know it doesn't feel like it initially, but keep faith! :)

For context, our largest service has around 10 classes which implement
Route; each class has about 6 APIs in it, and all of these are pulled in
using Guice multi-binding. That means I end up with 10 beautifully crafted,
readable classes called things like V1OrdersRoute, V2OrdersRoute, and
V1CustomersRoute, all of which have the same OAuth2 authentication
protections and error logging applied when the multi-binding is injected
and connected up to the Route flow on the HTTP server, ensuring no one goes
without good security protocols or basic access logging. In my HTTPServer I
simply put @Inject private MultiBinder allMyRoutes and attach it
into a tree with the above stated requirements.
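The multi-binding wiring described here can be mimicked with plain JDK types (the names below are stand-ins, not Guice or Akka APIs): each bound route tries a request path and declines with an empty Optional, and the server folds the whole injected set into one first-match dispatcher, much like concatenating sub-routes:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Stand-in sketch using plain JDK types: a "route" maps a path to an
// Optional handler result, and concatAll folds an injected collection of
// routes into one first-match dispatcher, like concat(...) over sub-routes.
public class RouteAssembly {
    public static Function<String, Optional<String>> concatAll(
            List<Function<String, Optional<String>>> routes) {
        return path -> routes.stream()
                .flatMap(route -> route.apply(path).stream())
                .findFirst();
    }
}
```

Cross-cutting concerns such as auth checks or access logging can then be applied once, by wrapping each element of the list before folding, rather than repeating them in every route class.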

On Friday, 17 March 2017 11:25:09 UTC, Alan Burlison wrote:
>
> I'm sure I must be missing something here because I can't believe path 
> matching in Akka HTTP could be broken in the way it seems to be, because 
> it would be unusable for anything other than toy applications if it was: 
>
> I'm composing route handing from a top-level handler and sub-handlers 
> like this: 
>
> pathPrefix("root") { 
>concat( 
>   pathPrefix("service1") { service1.route }, 
>   pathPrefix("service2") { service2.route } 
>) 
> } 
>
> where service1.route etc returns the sub-route for the associated 
> sub-tree. That works fine with a path of say "/root/service1", but it 
> *also* matches "/rootnotroot/service1", because pathPrefix() just 
> matches any arbitrary string prefix and not a full path segment. And if 
> I use path() instead of pathPrefix() it tries to match the entire 
> remaining path. What I'm looking for is something along the lines of 
> segment() where that fully matches just the next path segment and leaves 
> the remaining path to be matched by inner routes, but there doesn't seem 
> to be such a thing. 
>
> What am I missing? 
>
> Thanks, 
>
> -- 
> Alan Burlison 
> -- 
>


Re: [akka-user] Why akka choose protobuf internally

2017-03-17 Thread Dai Yinhua

Hi Justin,

Thank you for your information and suggestion. I have read your blog, but I
can't see the benefit of using Scala to describe the schema over using an
external description file (like .proto). But I look forward to your
unfinished part.
