Re: [infinispan-dev] Maintenance of OpenShift templates

2018-03-07 Thread Galder Zamarreño
Sebastian Laskawiec  writes:

> On Tue, Mar 6, 2018 at 5:11 PM Galder Zamarreño wrote:
>
> Sebastian Laskawiec  writes:
>
> > Hey Galder,
> >
> > Comments inlined.
> >
> > Thanks,
> > Seb
> >
> > On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarreño wrote:
> >
> > Hi,
> >
> > Looking at [1], I'm wondering why the templates have to maintain a
> > different XML file for OpenShift?
> >
> > We already ship an XML in the server called `cloud.xml`, that should
> > just work. Having a separate XML file in the templates means we're
> > duplicating the maintenance of XML files.
> >
> > Also, users can now create caches programmatically. This is by far
> > the most common tweak that had to be done to the config. So, I see
> > less urgency to change the XML files.
> >
> > So just to give you guys a bit more context - the templates were
> > created a pretty long time ago when we didn't have admin capabilities
> > in Hot Rod and REST. The main argument for putting the whole
> > configuration into a ConfigMap was to make configuration changes
> > easier for the users. With the ConfigMap approach they can log into
> > the OpenShift UI, go to Resources -> ConfigMaps and edit everything
> > using the UI. That's super convenient for hacking in my opinion. Of
> > course, you don't need to do that at all if you don't want. You can
> > just spin up a new Infinispan cluster using `oc new-app`.
>
> I agree with the usability of the ConfigMap. However, the duplication
> is very annoying. Would it be possible for the ConfigMap to be created
> on the fly out of the cloud.xml that's shipped by Infinispan Server?
> That way we'd still have a ConfigMap without having to duplicate XML.
>
> Probably not. This would require special permissions to call the
> Kubernetes API from the Pod. In other words, I can't think of any
> other way that would work in OpenShift Online, for instance.
>
> > There are at least two other ways of changing the configuration that
> > I can think of. The first one is S2I [1][2] (long story short, you
> > need to put your configuration into a git repository and tell
> > OpenShift to build an image based on it). Even though it may seem
> > very convenient, it's an OpenShift-only solution (and there are no
> > easy (out of the box) options to get this running on raw Kubernetes).
> > I'm not judging whether it's good or bad here, just telling you how
> > it works. The other option would be to tell the users to do exactly
> > the same things we do in our templates themselves. In other words we
> > would remove the configuration from the templates and provide a
> > manual for users on how to deal with configuration. I believe this is
> > exactly what Galder is suggesting, right?
>
> What we do in the templates right now to show users how to tweak
> their config is convoluted.
>
> Ideally, adding their own custom configuration should be just a matter
> of:
>
> 1. Create a ConfigMap YAML pointing to an XML.
> 2. Ask users to put their XML in a separate file referenced by the
>    ConfigMap.
> 3. Deploy the ConfigMap and XML.
> 4. Trigger a new Infinispan redeployment.
>
> That would probably need to be a new deployment. Most of the
> StatefulSet spec is immutable.
>
> I'm not sure how doable this is with the current template approach;
> alternatively, we could explain how to do this for an already
> up-and-running application that has Infinispan created out of the
> default template.
>
> I've been thinking about this for a while and this is what I think we
> should do:
>
> 1 Wait a couple of weeks and review the community image created by the
>   CE Team. See if this is a good fit for us. If it is, I would focus
>   on adopting this approach and adjusting our templates to handle it.
> 2 Whether or not we adopt the CE community work, we could put all the
>   necessary stuff into the cloud.xml or services.xml configuration. We
>   could go one step further and merge them together.
> 3 Make sure that dynamically created caches are persisted (this is
>   super important!!)
> 4 Once #3 

[infinispan-dev] Wildfly Clustering on OpenShift

2018-03-06 Thread Galder Zamarreño
Are there any out-of-the-box configurations inside Wildfly for
clustering on OpenShift? It'd need a transport that uses KUBE_PING,
right?
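For reference, a minimal sketch of a JGroups stack using the
jgroups-kubernetes KUBE_PING protocol for discovery (the protocol class
and the namespace/labels attributes come from the jgroups-kubernetes
project; the rest of the stack is abbreviated and the values are
examples only, not a Wildfly-supplied configuration):

    <config xmlns="urn:org:jgroups">
        <TCP bind_port="7800"/>
        <!-- Discovers members by asking the Kubernetes API for pods
             matching the given namespace and labels -->
        <org.jgroups.protocols.kubernetes.KUBE_PING
            namespace="${KUBERNETES_NAMESPACE:myproject}"
            labels="${KUBERNETES_LABELS:application=datagrid}"/>
        <MERGE3/>
        <FD_SOCK/>
        <FD_ALL/>
        <VERIFY_SUSPECT/>
        <pbcast.NAKACK2 use_mcast_xmit="false"/>
        <UNICAST3/>
        <pbcast.STABLE/>
        <pbcast.GMS/>
        <MFC/>
        <FRAG3/>
    </config>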

"Stack Exchange"  writes:

> * The following item was added to your Stack Exchange
>  "infinispan-user" feed.
> Stack Overflow Infinispan replicated cache not replicating
>  objects for read
>
>  We are trying to install a replicated cache
>  across two infinispan nodes running on
>  Wildfly 11 inside of Openshift. When we write
>  an object on one node it doesn't show up on
>  the other node for reading. ...
>
>  tagged: java, wildfly, Mar 1 at 9:13
>  infinispan, infinispan-9
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Maintenance of OpenShift templates

2018-03-06 Thread Galder Zamarreño
Sebastian Laskawiec  writes:

> Hey Galder,
>
> Comments inlined.
>
> Thanks,
> Seb
>
> On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarreño wrote:
>
> Hi,
>
> Looking at [1], I'm wondering why the templates have to maintain a
> different XML file for OpenShift?
>
> We already ship an XML in the server called `cloud.xml`, that should
> just work. Having a separate XML file in the templates means we're
> duplicating the maintenance of XML files.
>
> Also, users can now create caches programmatically. This is by far
> the most common tweak that had to be done to the config. So, I see
> less urgency to change the XML files.
>
> So just to give you guys a bit more context - the templates were
> created a pretty long time ago when we didn't have admin capabilities
> in Hot Rod and REST. The main argument for putting the whole
> configuration into a ConfigMap was to make configuration changes
> easier for the users. With the ConfigMap approach they can log into
> the OpenShift UI, go to Resources -> ConfigMaps and edit everything
> using the UI. That's super convenient for hacking in my opinion. Of
> course, you don't need to do that at all if you don't want. You can
> just spin up a new Infinispan cluster using `oc new-app`.

I agree with the usability of the ConfigMap. However, the duplication is
very annoying. Would it be possible for the ConfigMap to be created on
the fly out of the cloud.xml that's shipped by Infinispan Server? That
way we'd still have a ConfigMap without having to duplicate XML.

> There are at least two other ways of changing the configuration that
> I can think of. The first one is S2I [1][2] (long story short, you
> need to put your configuration into a git repository and tell
> OpenShift to build an image based on it). Even though it may seem very
> convenient, it's an OpenShift-only solution (and there are no easy
> (out of the box) options to get this running on raw Kubernetes). I'm
> not judging whether it's good or bad here, just telling you how it
> works. The other option would be to tell the users to do exactly the
> same things we do in our templates themselves. In other words we would
> remove the configuration from the templates and provide a manual for
> users on how to deal with configuration. I believe this is exactly
> what Galder is suggesting, right?

What we do in the templates right now to show users how to tweak their
config is convoluted.

Ideally, adding their own custom configuration should be just a matter
of:

1. Create a ConfigMap YAML pointing to an XML.
2. Ask users to put their XML in a separate file referenced by the ConfigMap.
3. Deploy the ConfigMap and XML.
4. Trigger a new Infinispan redeployment.
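For illustration, a sketch of those steps with plain oc commands (the
application name "datagrid", the ConfigMap name and the mount path are
made-up examples, not what the current templates use):

    # 1-2. Keep the XML in its own file and wrap it in a ConfigMap
    oc create configmap datagrid-config --from-file=cloud.xml

    # 3. Mount the ConfigMap into the server pods
    oc set volume dc/datagrid --add --name=config --type=configmap \
       --configmap-name=datagrid-config \
       --mount-path=/opt/infinispan-server/standalone/configuration/user

    # 4. Trigger a new Infinispan redeployment
    oc rollout latest dc/datagrid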

I'm not sure how doable this is with the current template approach;
alternatively, we could explain how to do this for an already
up-and-running application that has Infinispan created out of the
default template.

>
> Recently we implemented admin commands in Hot Rod. Assuming that
> caches created this way are not wiped out during restart (that needs
> to be checked), we could remove the configuration from the templates
> and tell the users to create their caches over Hot Rod and REST.
> However we still need to have a back door for modifying configuration
> manually since there are some changes that cannot be done via the
> admin API.
>
> [1] https://github.com/openshift/source-to-image
> [2]
> https://github.com/jboss-dockerfiles/infinispan/blob/master/server/.s2i/bin/assemble
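For context, creating a cache over the new Hot Rod admin API looks
roughly like this from a 9.2 Java client (a sketch; the cache and
template names are made up):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;

    public class CreateCacheOverHotRod {
       public static void main(String[] args) {
          // Connects to localhost:11222 by default
          RemoteCacheManager rcm = new RemoteCacheManager();
          // Creates the cache on the server if it does not exist yet,
          // based on a server-side configuration template
          RemoteCache<String, String> cache = rcm.administration()
                .getOrCreateCache("my-cache", "default");
          cache.put("key", "value");
          rcm.stop();
       }
    }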
>
>
> Sure, there will always be people who modify/tweak things and that's
> fine. We should however show people how to do that in a way that
> doesn't require us to duplicate our maintenance work.
>
> If we think about further maintenance, I believe we should take more
> things into consideration. During the last planning meeting Tristan
> mentioned bringing the project and the product closer together. On
> the Cloud Enablement side of things there are ongoing experiments to
> get community images out.
>
> If we decided to take this direction (the CE way), our templates would
> need to be deprecated or changed drastically. The image would react to
> a different set of variables and configuration options.
>
> Also, if we want to show the users how to use a custom XML file, I
> don't think we should show them how to embed it in the template as
> JSON [2]. It's quite a pain. Instead, the XML should be kept as a
> separate file and the JSON file should reference it.
>
> I'm still struggling to understand why this is a pain. Could you
> please explain it a bit m

[infinispan-dev] Maintenance of OpenShift templates

2018-03-02 Thread Galder Zamarreño
Hi,

Looking at [1], I'm wondering why the templates have to maintain a
different XML file for OpenShift?

We already ship an XML in the server called `cloud.xml`, that should
just work. Having a separate XML file in the templates means we're
duplicating the maintenance of XML files.

Also, users can now create caches programmatically. This is by far the
most common tweak that had to be done to the config. So, I see less
urgency to change the XML files.

Sure, there will always be people who modify/tweak things and that's
fine. We should however show people how to do that in a way that
doesn't require us to duplicate our maintenance work.

Also, if we want to show the users how to use a custom XML file, I don't
think we should show them how to embed it in the template as JSON
[2]. It's quite a pain. Instead, the XML should be kept as a separate
file and the JSON file should reference it.

Cheers,

[1]
https://github.com/infinispan/infinispan-openshift-templates/pull/16/files
[2] 
https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] spare cycles

2018-02-27 Thread Galder Zamarreño
Hi Ion,

Great to hear that you have time to contribute to stuff.

Any particular interests? Hackathon ideas are a good place to start:

https://issues.jboss.org/browse/ISPN-2234?filter=12322175

Having a HotRod URL format would be a good one :)

Cheers

Ion Savin  writes:

> Hi all,
>
> I have some spare cycles over the course of the year which I'm going to
> use to contribute to open source projects.
>
> If you can think of anything specific that you could use some help with
> please let me know.
>
> Thanks,
> Ion Savin
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Hot Rod secured by default

2018-02-27 Thread Galder Zamarreño
Tristan Tarrant  writes:

> Sorry for reviving this thread, but I want to make sure we all agree on 
> the following points.
>
> DEFAULT CONFIGURATIONS
> - The endpoints MUST be secure by default (authentication MUST be 
> enabled and required) in all of the supplied default configurations.
> - We can ship non-secure configurations, but these need to be clearly 
> marked as such in the configuration filename (e.g. 
> standalone-unsecured.xml).
> - Memcached MUST NOT be enabled by default as we do not implement the 
> binary protocol, which is the only one that can do authn/encryption
> - The default configurations (standalone.xml, domain.xml, cloud.xml) 
> MUST enable only non-plaintext mechs (e.g. digest et al)

+1
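For reference, enabling a non-plaintext mech on the Hot Rod endpoint
looks roughly like this in the server XML (a sketch; the realm, server
and binding names are examples):

    <hotrod-connector socket-binding="hotrod" cache-container="clustered">
       <authentication security-realm="ApplicationRealm">
          <!-- DIGEST-MD5 never sends the password in the clear -->
          <sasl server-name="infinispan" mechanisms="DIGEST-MD5" qop="auth"/>
       </authentication>
    </hotrod-connector>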

>
> SERVER CHANGES
> - Warn if a plain text mech is enabled on an unencrypted endpoint
>
> API
> - We MUST NOT add a "trust all certs" switch to the client config as 
> that would thwart the whole purpose of encryption.
>
> OPENSHIFT
> - In the context of OpenShift, all pods MUST trust the master CA. This 
> means that the CA must be injected into the trusted CAs for the pods AND 
> into the JDK cacerts file. This MUST be done by the OpenShift JDK image 
> automatically. (Debian does this on startup: [1])
>
> Tristan
>
> [1] 
> https://git.mikael.io/mikaelhg/ca-certificates-java/blob/debian/20170531/src/main/java/org/debian/security/UpdateCertificates.java
>
> On 9/14/17 5:45 PM, Galder Zamarreño wrote:
>> Gustavo's reply was the agreement reached. Secured by default and an
>> easy way to use it unsecured is the best middle ground IMO.
>> 
>> So, we've done the securing part partially, which needs to be
>> completed by [2] (currently assigned to Tristan).
>> 
>> More importantly, we also need to complete [3] so that we ship
>> the unsecured configuration, and then show people how to use that
>> (docs, examples, etc.).
>> 
>> If you want to help, taking ownership of [3] would be best.
>> 
>> Cheers,
>> 
>> [2] https://issues.jboss.org/browse/ISPN-7815
>> [3] https://issues.jboss.org/browse/ISPN-7818
>> 
>>> On 6 Sep 2017, at 11:03, Katia Aresti  wrote:
>>>
>>> @Emmanuel, sure it's not a big deal, but starting fast and smooth
>>> without any trouble helps adoption. Concerning the ticket, there is
>>> already one that was acted on. I can work on that, even if it is
>>> assigned to Galder now.
>>>
>>> @Gustavo
>>> Yes, as I read - better - now on the security part, it is said for
>>> the console that we need those. My head skipped that paragraph or I
>>> read it badly, and I was wondering if it was more something
>>> related to "roles" rather than a user. My bad, because I read too
>>> fast sometimes and skip things! Maybe the paragraph about
>>> security in the console should be moved down to the console part,
>>> which is small to read now? When I read there "see the security
>>> part below" I was like: ok, the security is done !! :)
>>>
>>> Thank you for your replies !
>>>
>>> Katia
>>>
>>>
>>> On Wed, Sep 6, 2017 at 10:52 AM, Gustavo Fernandes  
>>> wrote:
>>> Comments inlined
>>>
>>> On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti  wrote:
>>> And then I want to go to the console, and it requires me to enter the
>>> user/password again. And it does not work. And I don't see how to disable
>>> security. And I don't know what to do. And I'm like: why do I need
>>> security at all here?
>>>
>>>
>>> The console credentials are specified with MGMT_USER/MGMT_PASS env
>>> variables, did you try those? It will not work for
>>> APP_USER/APP_PASS.
>>>
>>>   
>>> I wonder if you want to reconsider the "secured by default" point
>>> after my experience.
>>>
>>>
>>> The outcome of the discussion is that the clustered.xml will be
>>> secured by default, but you should be able to launch a container
>>> without any security by simply passing an alternate xml in the
>>> startup, and we'll ship this XML with the server.
>>>
>>>
>>> Gustavo
>>>   
>>>
>>> My 2 cents,
>>>
>>> Katia
>>>
>>> On Tue, May 9, 2017 at 2:24 PM, Galder Zamarreño  wrote:
>>> Hi all,
>>>
>>> Tristan and I had a chat yesterday and I've distilled the contents of
>>> the discussion and the feedback here into a JIRA [1]. The JIRA

[infinispan-dev] Best practices for Netty version clashes

2018-02-15 Thread Galder Zamarreño
Hi,

I was playing around with GRPC for a talk next month and made a mistake
that threw me a little bit and wanted to share it here to see if we can
do something about it.

My demo uses GRPC and Infinispan embedded cache (9.2.0.CR1), so I added
my GRPC dependencies and Infinispan bom dependency [1].

This combo resulted in breaking my GRPC demos.

The bom imports Netty 4.1.9.Final while GRPC requires 4.1.17.Final. The
dependency tree showed GRPC using 4.1.9.Final, which led to the
failure. This failure does not seem to be present in 4.1.17.Final.

Should we have an embedded bom where no client libraries are depended
upon? This would work for my particular use case...

However, someone might develop a GRPC server (which I *think* still
requires Netty) and they could then use the Infinispan remote client to
bridge over to Infinispan Server. For example: this could be a way to
move clients over to a new client while other clients use an older
protocol.

How should a user solve this clash? I can only see exclusions and
depending on the latest Netty version as solutions. Any other solutions
though?

Cheers,

[1] https://gist.github.com/galderz/300cc2708eab76b9861985c216b90136
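For anyone hitting the same clash, the pinning workaround I mean would
look roughly like this in the application's pom.xml (a sketch; it relies
on Maven picking the first version declared in dependencyManagement
import order, so the Netty BOM must come before the Infinispan BOM):

    <dependencyManagement>
      <dependencies>
        <!-- Pin Netty first so 4.1.17.Final wins over the 4.1.9.Final
             managed by the Infinispan BOM -->
        <dependency>
          <groupId>io.netty</groupId>
          <artifactId>netty-bom</artifactId>
          <version>4.1.17.Final</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
        <dependency>
          <groupId>org.infinispan</groupId>
          <artifactId>infinispan-bom</artifactId>
          <version>9.2.0.CR1</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
      </dependencies>
    </dependencyManagement>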
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Weekly Infinispan IRC logs 2018-01-29

2018-01-29 Thread Galder Zamarreño
Tristan Tarrant  writes:

Hi all,

Here's my update which I was unable to provide yesterday:

* Mostly worked on JFokus-related presentations, both the deep dive and
  my own presentation. This includes some slides and a lot of live
  coding.
* Btw, OpenShift 3.7 and the Fabric8 Maven plugin are not playing
  along with redeployments, so I'm having to work around that. For my
  presentation this means switching to binary builds and for the deep
  dive it means switching to OpenShift 3.6. More info in [1].
* I also worked on adding Hibernate tutorials to the website; a PR is
  waiting to be reviewed/integrated [2]. After that's integrated we
  should republish the website.

Cheers,

[1] https://github.com/fabric8io/fabric8-maven-plugin/issues/1130
[2] https://github.com/infinispan/infinispan.github.io/pull/53


> Hi all,
>
> the weekly Infinispan logs are here:
>
> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-01-29-15.01.log.html
>
> Tristan
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Infinispan 9.2.0.CR1 released

2018-01-22 Thread Galder Zamarreño

Hi,

Last Friday we released Infinispan 9.2.0.CR1.

You can find out all about it here:
http://blog.infinispan.org/2018/01/first-candidate-release-for-infinispan.html

Cheers,
Galder
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Data format not supported with REST

2017-12-07 Thread Galder Zamarreño

Hey Tristan,

I was trying your REST curl example in [1], but I'm getting a "Data
format not supported" error when doing a get. See the output in [2].
I'm using 9.1.0, maybe something has changed?

Looking at our docs, we don't seem to have any curl REST examples.

Cheers,
Galder

[1] https://developer.jboss.org/thread/274501
[2] https://gist.github.com/galderz/21997d4b673908943a11a38c51d4cc9d
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore

2017-12-07 Thread Galder Zamarreño
Tristan Tarrant  writes:

Thanks everyone, I've created a JIRA to track this:
https://issues.jboss.org/browse/ISPN-8595

> To add to Adrian's history lesson:
>
> ClusterRegistry (a single, replicated, non-persistent, scoped cache) was 
> replaced with the InternalCacheRegistry which provides a common way for 
> subsystems to register internal caches with the "traits" they want but 
> configured to take into account some global settings. This means setting 
> up proper security roles, persistent paths, etc.
>
> We do however have a proliferation of caches and in my ISPN-7776 PR I've 
> reintroduced a scoped config/state cache which can be shared by 
> interested parties.
>
> I do like the org.infinispan prefix for internal caches (and I've 
> amended my PR to use that). I'm not that concerned about the additional 
> payload, since most of the internal caches we have at the moment change 
> infrequently (schema, script, topology, etc), but we should probably 
> come up with a proper way to identify caches with a common short ID.
>
> Tristan
>
> On 11/6/17 10:46 AM, Adrian Nistor wrote:
>> Different internal caches have different needs regarding consistency,
>> tx, persistence, etc...
>> The first incarnation of ClusterRegistry was using a single cache and
>> was implemented exactly as you suggested, but had major shortcomings
>> satisfying the needs of several unrelated users, so we decided to split.
>> 
>> On 11/03/2017 10:42 AM, Radim Vansa wrote:
>>> Because you would have to duplicate entire Map on each update, unless
>>> you used not-100%-so-far functional commands. We've used the ScopedKey
>>> that would make this Cache, Object>. This
>>> approach was abandoned with ISPN-5932 [1], Adrian and Tristan can
>>> elaborate why.
>>>
>>> Radim
>>>
>>> [1] https://issues.jboss.org/browse/ISPN-5932
>>>
>>> On 11/03/2017 09:05 AM, Sebastian Laskawiec wrote:
>>>> I'm pretty sure it's a silly question, but I need to ask it :)
>>>>
>>>> Why can't we store all our internal information in a single,
>>>> replicated cache (of a type Cache<PURPOSE, Map<Object, Object>>)? PURPOSE
>>>> could be an enum or a string identifying whether it's the scripting cache,
>>>> transaction cache or anything else. The value (Map<Object, Object>)
>>>> would store whatever you need.
>>>>
>>>> On Fri, Nov 3, 2017 at 2:24 AM Sanne Grinovero
>>>> <mailto:sa...@infinispan.org> wrote:
>>>>
>>>>   On 2 November 2017 at 22:20, Adrian Nistor
>>>>   <mailto:anis...@redhat.com> wrote:
>>>>   > I like this proposal.
>>>>
>>>>   +1
>>>>
>>>>   > On 11/02/2017 03:18 PM, Galder Zamarreño wrote:
>>>>   >> Hi all,
>>>>   >>
>>>>   >> I'm currently going through the JCache 1.1 proposed changes,
>>>>   and one that made me think is [1]. In particular:
>>>>   >>
>>>>   >>> Caches do not use forward slashes (/) or colons (:) as part of
>>>>   >>> their names. Additionally it is recommended that cache names
>>>>   >>> starting with java. or javax. should not be used.
>>>>   >> I'm wondering whether in the future we should move away from
>>>>   the triple underscore trick we use for internal cache names, and
>>>>   instead just prepend them with `org.infinispan`, which is our
>>>>   group id. I think it'd be cleaner.
>>>>   >>
>>>>   >> Thoughts?
>>>>   >>
>>>>   >> [1] https://github.com/jsr107/jsr107spec/issues/350
>>>>   >> --
>>>>   >> Galder Zamarreño
>>>>   >> Infinispan, Red Hat
>>>>   >>
>>>>   >>
>>>>   >> ___
>>>>   >> infinispan-dev mailing list
>>>>   >> infinispan-dev@lists.jboss.org
>>>>   <mailto:infinispan-dev@lists.jboss.org>
>>>>   >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>   >
>>>>   >
>>>>   > ___
>>>>   > infinispan-dev mailing list
>>>>   > infinispan-dev@lists.jboss.org
>>>>   <mailto:infinispan-dev@lists.jboss.org>
>>>>   > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>>>   ___
>>>>   infinispan-dev mailing list
>>>>   infinispan-dev@lists.jboss.org 
>>>> <mailto:infinispan-dev@lists.jboss.org>
>>>>   https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>>>
>>>>
>>>> ___
>>>> infinispan-dev mailing list
>>>> infinispan-dev@lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] infinispan-bom needs some love

2017-11-22 Thread Galder Zamarreño
Hi all,

Re: https://issues.jboss.org/browse/ISPN-8552
Re: https://issues.jboss.org/browse/ISPN-8408

Just fell off my chair with ^

Did I somehow miss a discussion on ISPN-8408? Anything that changes 
infinispan-bom needs to be discussed in this list :|

Can someone elaborate on what problem ISPN-8408 is trying to fix in 
infinispan-bom exactly? I have personally not heard anyone complaining about it.

From my POV, the easiest way to consume Infinispan is leaving the 
infinispan-bom as it was. So, the vert.x way.

If we want a different "bom" that doesn't contain Infinispan modules, maybe we 
can add it separately and not break existing examples/apps... but it really 
needs to solve a problem :|

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] The future of Infinispan Docker image

2017-11-15 Thread Galder Zamarreño
> necessary) credentials. Our build pipeline uses Docker HUB integration hooks, 
> so whenever we push a new commit (or a tag) our images are being rebuilt. 
> This is very simple to understand and very powerful setup.
> 
> However we are thinking about bringing product and project images closer 
> together and possibly reusing some bits (a common example might be Jolokia - 
> those bits could be easily reused without touching core server distribution). 
> This however requires converting our image to a framework called Concreate 
> [2]. Concreate divides setup scripts into modules which are later on 
> assembled into a single Dockerfile and built. Modules can also be pulled from 
> other public git repositories, and I consider this the most powerful option. 
> It is also worth mentioning that Concreate is based on a YAML file - here's an 
> example of the JDG image [3].
> 
> As you can see, this can be quite a change so I would like to reach out for 
> some opinions. The biggest issue I can see is that we will lose our Docker 
> HUB build pipeline and we will need to build and push images on our CI (which 
> already does this locally for Online Services). 
> 
> WDYT?
> 
> Thanks,
> Sebastian
> 
> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server
> [2] http://concreate.readthedocs.io/en/latest/
> [3] 
> https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Fwd: [infinispan/infinispan] ISPN-8113 Querying via Rest endpoint (#5557)

2017-11-10 Thread Galder Zamarreño
This is HUGE!! Kudos to Gustavo for the hard work you've done to get this in!!

> Begin forwarded message:
> 
> From: Adrian Nistor 
> Subject: Re: [infinispan/infinispan] ISPN-8113 Querying via Rest endpoint 
> (#5557)
> Date: 7 November 2017 at 09:57:08 CET
> To: infinispan/infinispan 
> Cc: Subscribed 
> Reply-To: infinispan/infinispan 
> 
> 
> Integrated. Thanks @gustavonalle <https://github.com/gustavonalle> !
> 
> —
> You are receiving this because you are subscribed to this thread.
> Reply to this email directly, view it on GitHub 
> <https://github.com/infinispan/infinispan/pull/5557#issuecomment-342416445>, 
> or mute the thread 
> <https://github.com/notifications/unsubscribe-auth/AADECxQMhJs3TXj4Jp7OGlEPqTeiD3Inks5s0BtjgaJpZM4QNF4a>.
> 

--
Galder Zamarreño
Infinispan, Red Hat

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Counters and their configurations in Infinispan server (DMR)

2017-11-03 Thread Galder Zamarreño
At first glance, I'd agree with Pedro.

> On 2 Nov 2017, at 16:07, Pedro Ruivo  wrote:
>
> Hi,
>
> IMO, I would separate the concept of counter and configuration.
>
> Even if a user doesn't create many counters, I think most of them will
> share the same configuration. As a bad example, if you want to count
> oranges and apples, you're going to use the same configuration...
> probably :)
>
> In addition, it is symmetric to the cache DMR tree. This would reduce
> the learning curve if the user is already used to the cli (i.e. create
> caches).
>
> Cheers,
> Pedro
>
> On 02-11-2017 12:33, Vladimir Blagojevic wrote:
>> Hey guys,
>>
>> How do you anticipate users are going to deal with counters? Are they
>> going to be creating a lot of them in their applications, say dozens,
>> hundreds, thousands?
>>
>> I am asking because I have a dilemma about their representation in DMR
>> and therefore in the admin console and potentially wider. The dilemma
>> is related to splitting the concepts and the mapping between counter
>> configuration and counter instances. On one end of the possible
>> spectrum of use, if users are going to have many counters that have
>> the same configuration then it makes sense to delineate the DMR
>> concept of the counter configuration and its counter instance, just
>> like we do for caches and cache configuration templates. We deal with
>> cache configurations as templates; one could create hundreds of caches
>> from the same template. Similarly, we can do with counters. On the
>> other end, if users are going to create very few counters then it
>> likely does not make much sense to separate counter configurations
>> from their instances; they would have a one-to-one mapping. For each
>> new counter, users would just enter the counter configuration and
>> launch an instance of a corresponding counter.
>>
>> The first approach saves resources and makes large counter
>> instantiations easier while the second approach is easier to
>> understand conceptually but is inefficient if we are going to have
>> many counter instances.
>>
>> Thoughts?
>> Vladimir
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore

2017-11-02 Thread Galder Zamarreño
Hi all,

I'm currently going through the JCache 1.1 proposed changes, and one that made 
me think is [1]. In particular:

> Caches do not use forward slashes (/) or colons (:) as part of their names.
> Additionally it is recommended that cache names starting with java. or
> javax. should not be used.

I'm wondering whether in the future we should move away from the triple 
underscore trick we use for internal cache names, and instead just prepend them 
with `org.infinispan`, which is our group id. I think it'd be cleaner.

Thoughts?

[1] https://github.com/jsr107/jsr107spec/issues/350
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Initial Infinispan driver for JNoSQL

2017-10-30 Thread Galder Zamarreño
Hey Tristan,

Thanks for submitting that.

Why did you decide to mix embedded and remote in the same project?

Cheers,

> On 19 Oct 2017, at 16:05, Tristan Tarrant  wrote:
> 
> Hi all,
> 
> I have just submitted a pull request for an initial driver for JNoSQL
> 
> https://github.com/eclipse/jnosql-diana-driver/pull/49
> 
> Tristan
> -- 
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Protostream marshaller support for optional fields

2017-10-27 Thread Galder Zamarreño
Ok, thanks for the clarification.

> On 24 Oct 2017, at 13:43, Adrian Nistor  wrote:
> 
> Hi Galder,
> 
> There's nothing special about optional fields. It is the required fields that 
> are special: they do not allow null (in the read/write methods).
> 
> If a field is optional in your protobuf schema you can write a null value and 
> protostream will happily interpret that as a missing value and will not write 
> any tag in the output protobuf stream.
> Reading a missing value will return a null. The required/optional status of a 
> field is checked against the schema at runtime, so you cannot cheat :).
> 
> None of the above holds true for a required field. Writes will only accept 
> non-nulls (an exception is thrown immediately). Reads will always return a 
> non-null value, so you can safely use a primitive-type-returning read method. 
> If the input stream does not contain a required field (probably because the 
> encoder that produced it was violating the schema) an exception is thrown.
> 
> Adrian
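To make that concrete, a sketch of a MessageMarshaller with one required
and one optional field (the Note class and field names are invented; it
assumes 'author' is declared optional in the .proto schema):

    import java.io.IOException;
    import org.infinispan.protostream.MessageMarshaller;

    public class NoteMarshaller implements MessageMarshaller<Note> {

       @Override
       public Note readFrom(ProtoStreamReader reader) throws IOException {
          String text = reader.readString("text");     // required field
          String author = reader.readString("author"); // optional: may be null
          return new Note(text, author);
       }

       @Override
       public void writeTo(ProtoStreamWriter writer, Note note) throws IOException {
          writer.writeString("text", note.getText());
          // Writing null for an optional field just omits its tag
          writer.writeString("author", note.getAuthor());
       }

       @Override
       public Class<? extends Note> getJavaClass() { return Note.class; }

       @Override
       public String getTypeName() { return "example.Note"; }
    }

    // Hypothetical POJO: 'text' is required, 'author' may be null
    class Note {
       private final String text;
       private final String author;
       Note(String text, String author) { this.text = text; this.author = author; }
       String getText() { return text; }
       String getAuthor() { return author; }
    }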
> 
> On 10/23/2017 06:51 PM, Galder Zamarreño wrote:
>> Hey Adrian,
>> 
>> Quick q: how is a protostream marshaller supposed to deal with optional 
>> fields?
>> 
>> I don't see any writer methods that deal with those... is it up to the user 
>> to put something on the wire to decide at read time whether the optional 
>> field follows or not?
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
> 

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Protostream marshaller support for optional fields

2017-10-23 Thread Galder Zamarreño
Hey Adrian,

Quick q: how is a protostream marshaller supposed to deal with optional fields?

I don't see any writer methods that deal with those... is it up to the user to 
put something on the wire to decide at read time whether the optional field 
follows or not?

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Code examples in multiple languages

2017-09-25 Thread Galder Zamarreño
I asked Dan Allen et al on twitter [2].

Spring has developed a similar plugin [3] and it appears to be included in [4].

Cheers,

[2] https://twitter.com/galderz/status/910848538720038913
[3] 
https://docs.spring.io/spring-restdocs/docs/current/reference/html5/#getting-started-build-configuration
[4] https://github.com/spring-io/spring-asciidoctor-extensions

> On 20 Sep 2017, at 21:17, Tristan Tarrant  wrote:
> 
> One thing that I wish we had is the ability, when possible, to give code 
> examples for our API in all of our implementations (embedded, hotrod 
> java, c++, c#, node.js and REST).
> 
> Currently each one handles documentation differently and we are not very 
> consistent with structure, content and examples.
> 
> I've been looking at Slate [1] which uses Markdown and is quite nice, 
> but has the big disadvantage that it would create something which is 
> separate from our current documentation...
> 
> An alternative approach would be to implement an asciidoctor plugin 
> which provides some kind of tabbed code block.
> 
> Any other ideas ?
> 
> 
> Tristan
> 
> [1] https://lord.github.io/slate/
> -- 
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Unable to cluster Infinispan ephemeral template instances

2017-09-25 Thread Galder Zamarreño
Sebastian, are you sure the namespace is the problem? The template seems to 
define a default value for namespace [2].

Anyway, I've tried to pass a NAMESPACE value and I still see the same WARN 
messages and no cluster forms.

Cheers,

[2] 
https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L338

> On 25 Sep 2017, at 13:18, Galder Zamarreño  wrote:
> 
> Hmmm, is there a way to say that if you don't pass in namespace, you take the 
> application name as namespace?
> 
>> On 25 Sep 2017, at 13:11, Sebastian Laskawiec  wrote:
>> 
>> Seems like you didn't fill the namespace parameter while creating an app: 
>> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L336
>> 
>> I already tried to eliminate this parameter (because it seems redundant) but 
>> currently there is no way to do it [1]. It is required for the Role Binding 
>> which enables the Pod to query the Kubernetes API and ask about Pods [2].
>> 
>> You may also try to use the third way:
>> oc policy add-role-to-user view system:serviceaccount:<namespace>:<service account>
>> 
>> [1] 
>> https://github.com/infinispan/infinispan-openshift-templates/pull/9#discussion_r131409849
>> [2] https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html
>> 
>> On Mon, Sep 25, 2017 at 12:54 PM Galder Zamarreño  wrote:
>> Hey Sebastian,
>> 
>> I've started 2 instances of Infinispan ephemeral [1] and they don't seem to 
>> cluster together with the pods showing this message:
>> 
>> 10:51:12,014 WARN  [org.jgroups.protocols.kubernetes.KUBE_PING] 
>> (jgroups-4,datagrid-1-187kx) failed getting JSON response from Kubernetes 
>> Client[masterUrl=https://172.30.0.1:443/api/v1, 
>> headers={Authorization=#MASKED:862#}, connectTimeout=5000, 
>> readTimeout=3, operationAttempts=3, operationSleep=1000, 
>> streamProvider=org.jgroups.protocols.kubernetes.stream.InsecureStreamProvider@51522f72]
>>  for cluster [cluster], namespace [openshift], labels 
>> [application=datagrid]; encountered [java.lang.Exception: 3 attempt(s) with 
>> a 1000ms sleep to execute [OpenStream] failed. Last failure was 
>> [java.io.IOException: Server returned HTTP response code: 403 for URL: 
>> https://172.30.0.1:443/api/v1/namespaces/openshift/pods?labelSelector=application%3Ddatagrid]]
>> 
>> These are the options I'm giving to the template:
>> 
>> oc process infinispan-ephemeral -p \
>>  NUMBER_OF_INSTANCES=2 \
>>  APPLICATION_NAME=datagrid \
>>  APPLICATION_USER=developer \
>>  APPLICATION_PASSWORD=developer
>> 
>> I'd expect this to work out of the box, or do you need to pass in a 
>> management usr/pwd for it to work?
>> 
>> Cheers,
>> 
>> [1] https://github.com/infinispan/infinispan-openshift-templates
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
> 
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Unable to cluster Infinispan ephemeral template instances

2017-09-25 Thread Galder Zamarreño
Hmmm, is there a way to say that if you don't pass in namespace, you take the 
application name as namespace?

> On 25 Sep 2017, at 13:11, Sebastian Laskawiec  wrote:
> 
> Seems like you didn't fill the namespace parameter while creating an app: 
> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L336
> 
> I already tried to eliminate this parameter (because it seems redundant) but 
> currently there is no way to do it [1]. It is required for the Role Binding 
> which enables the Pod to query the Kubernetes API and ask about Pods [2].
> 
> You may also try to use the third way:
> oc policy add-role-to-user view system:serviceaccount:<namespace>:<service account>
> 
> [1] 
> https://github.com/infinispan/infinispan-openshift-templates/pull/9#discussion_r131409849
> [2] https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html
> 
> On Mon, Sep 25, 2017 at 12:54 PM Galder Zamarreño  wrote:
> Hey Sebastian,
> 
> I've started 2 instances of Infinispan ephemeral [1] and they don't seem to 
> cluster together with the pods showing this message:
> 
> 10:51:12,014 WARN  [org.jgroups.protocols.kubernetes.KUBE_PING] 
> (jgroups-4,datagrid-1-187kx) failed getting JSON response from Kubernetes 
> Client[masterUrl=https://172.30.0.1:443/api/v1, 
> headers={Authorization=#MASKED:862#}, connectTimeout=5000, readTimeout=3, 
> operationAttempts=3, operationSleep=1000, 
> streamProvider=org.jgroups.protocols.kubernetes.stream.InsecureStreamProvider@51522f72]
>  for cluster [cluster], namespace [openshift], labels [application=datagrid]; 
> encountered [java.lang.Exception: 3 attempt(s) with a 1000ms sleep to execute 
> [OpenStream] failed. Last failure was [java.io.IOException: Server returned 
> HTTP response code: 403 for URL: 
> https://172.30.0.1:443/api/v1/namespaces/openshift/pods?labelSelector=application%3Ddatagrid]]
> 
> These are the options I'm giving to the template:
> 
> oc process infinispan-ephemeral -p \
>   NUMBER_OF_INSTANCES=2 \
>   APPLICATION_NAME=datagrid \
>   APPLICATION_USER=developer \
>   APPLICATION_PASSWORD=developer
> 
> I'd expect this to work out of the box, or do you need to pass in a 
> management usr/pwd for it to work?
> 
> Cheers,
> 
> [1] https://github.com/infinispan/infinispan-openshift-templates
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

2017-09-25 Thread Galder Zamarreño


> On 25 Sep 2017, at 12:37, Sebastian Laskawiec  wrote:
> 
> 
> 
> On Mon, Sep 25, 2017 at 11:58 AM Galder Zamarreño  wrote:
> 
> 
> > On 22 Sep 2017, at 17:58, Sebastian Laskawiec  wrote:
> >
> >
> >
> > On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero  
> > wrote:
> > On 22 September 2017 at 13:49, Sebastian Laskawiec  
> > wrote:
> > > It's very tricky...
> > >
> > Memory is adjusted automatically to the container size [1] (of course you
> > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit
> > is roughly Xmx=Xms=50% of container capacity (unless you use off-heap, in
> > which case you can squeeze Infinispan much, much more).
> > >
> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
> > the burstable memory category so if there is additional memory in the node,
> > we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
> >
> > I hope that's a temporary choice of the work in process?
> >
> > Doesn't sound acceptable to address real world requirements..
> > Infinispan expects users to estimate how much memory they will need -
> > which is hard enough - and then we should at least be able to start a
> > cluster to address the specified need. Being able to rely on 512MB
> > only per node would require lots of nodes even for small data sets,
> > leading to extreme resource waste as each node would consume some non
> > negligible portion of memory just to run the thing.
> >
> > hmmm yeah - it's finished.
> >
> > I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or 
> > setting 50% of container memory?
> >
> > If the former and you set nothing, you will get the worst QoS and 
> > Kubernetes will shut down your container first whenever it runs out of 
> > resources (I really recommend reading [4] and watching [3]). If the latter, 
> > yeah I guess we can tune it a little with off-heap but, as my latest 
> > tests showed, if you enable the RocksDB Cache Store, allocating even 50% is 
> > too much (the container got killed by the OOM Killer). That's probably the 
> > reason why setting the MaxRAM JVM parameter sets Xmx to 25% (!!!) of the 
> > MaxRAM value. So even setting it to 50% means that we take a risk...
> >
> > So TBH, I see no silver bullet here and I'm open for suggestions. IMO if 
> > you really know what you're doing, you should set Xmx yourself (this 
> > will turn off setting Xmx automatically by the bootstrap script) and 
> > possibly set limits (and adjust requests) in your Deployment Configuration 
> > (if you set both requests and limits you will have the best QoS).
> 
> Try put it this way:
> 
> I've just started an Infinispan ephemeral instance, I'm trying to load some 
> data and it's running out of memory. What knobs/settings does the template 
> offer to make sure I have big enough Infinispan instance(s) to handle my 
> data?
> 
> Unfortunately calculating the number of instances based on input (e.g. "I 
> want to have 10 GB of space for my data, please calculate how many 1 GB 
> instances I need to create and adjust my app") is something that can not be 
> done with templates. Templates are pretty simple and they do not support any 
> calculations. You will probably need an Ansible Service Broker or Service 
> Broker SDK to do it.
> 
> So assuming you did the math on paper and you need 10 replicas, 1 GB each - 
> just type oc edit dc/<name> and modify the number of replicas and increase 
> the memory request. That should do the trick. Alternatively you can edit the 
> ConfigMap and turn eviction on (but it really depends on your use case).
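(As an illustration of those two knobs, a sketch with a hypothetical
application and ConfigMap name:)

    # Scale out, tweak the config, then roll out a fresh deployment
    oc scale dc/datagrid --replicas=10
    oc edit configmap/datagrid-configuration   # e.g. turn eviction on
    oc rollout latest dc/datagrid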
> 
> BTW, the number of replicas is a parameter in the template [1]. I can also expose 
> memory request if you want me to (in that case just shoot me a ticket: 
> https://github.com/infinispan/infinispan-openshift-templates/issues). And let 
> me say it one more time - I'm open for suggestions (and pull requests) if you 
> think this is not the way it should be done.

I don't know how the overarching OpenShift caching or shared memory services 
will be exposed, but as an OpenShift user that wants to store data in 
Infinispan, I should be able to say how much (total) data I will put in it, and 
optionally how many backups I want for the data, and OpenShift should maybe 
provide some options on how to do this: 

User: I want 2gb of data
OpenShift: Assuming default of 1 backup (2 copies of data), I can offer you 
(assuming at least 25% overhead):

a) 2 node

[infinispan-dev] Unable to cluster Infinispan ephemeral template instances

2017-09-25 Thread Galder Zamarreño
Hey Sebastian,

I've started 2 instances of Infinispan ephemeral [1] and they don't seem to 
cluster together with the pods showing this message:

10:51:12,014 WARN  [org.jgroups.protocols.kubernetes.KUBE_PING] 
(jgroups-4,datagrid-1-187kx) failed getting JSON response from Kubernetes 
Client[masterUrl=https://172.30.0.1:443/api/v1, 
headers={Authorization=#MASKED:862#}, connectTimeout=5000, readTimeout=3, 
operationAttempts=3, operationSleep=1000, 
streamProvider=org.jgroups.protocols.kubernetes.stream.InsecureStreamProvider@51522f72]
 for cluster [cluster], namespace [openshift], labels [application=datagrid]; 
encountered [java.lang.Exception: 3 attempt(s) with a 1000ms sleep to execute 
[OpenStream] failed. Last failure was [java.io.IOException: Server returned 
HTTP response code: 403 for URL: 
https://172.30.0.1:443/api/v1/namespaces/openshift/pods?labelSelector=application%3Ddatagrid]]

These are the options I'm giving to the template:

oc process infinispan-ephemeral -p \
  NUMBER_OF_INSTANCES=2 \
  APPLICATION_NAME=datagrid \
  APPLICATION_USER=developer \
  APPLICATION_PASSWORD=developer

I'd expect this to work out of the box, or do you need to pass in a management 
usr/pwd for it to work?

Cheers,

[1] https://github.com/infinispan/infinispan-openshift-templates
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

2017-09-25 Thread Galder Zamarreño


> On 22 Sep 2017, at 17:58, Sebastian Laskawiec  wrote:
> 
> 
> 
> On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero  wrote:
> On 22 September 2017 at 13:49, Sebastian Laskawiec  
> wrote:
> > It's very tricky...
> >
> > Memory is adjusted automatically to the container size [1] (of course you
> > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit
> > is roughly Xmx=Xms=50% of container capacity (unless you use off-heap, in
> > which case you can squeeze Infinispan much, much more).
> >
> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
> > the burstable memory category so if there is additional memory in the node, we'll
> > get it. But if not, we won't go below 512 MB (and 500 mCPU).
> 
> I hope that's a temporary choice of the work in process?
> 
> Doesn't sound acceptable to address real world requirements..
> Infinispan expects users to estimate how much memory they will need -
> which is hard enough - and then we should at least be able to start a
> cluster to address the specified need. Being able to rely on 512MB
> only per node would require lots of nodes even for small data sets,
> leading to extreme resource waste as each node would consume some non
> negligible portion of memory just to run the thing.
> 
> hmmm yeah - it's finished. 
> 
> I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or 
> setting 50% of container memory?
> 
> If the former and you set nothing, you will get the worst QoS and Kubernetes 
> will shut down your container first whenever it runs out of resources (I 
> really recommend reading [4] and watching [3]). If the latter, yeah I guess 
> we can tune it a little with off-heap but, as my latest tests showed, if 
> you enable the RocksDB Cache Store, allocating even 50% is too much (the 
> container got killed by the OOM Killer). That's probably the reason why 
> setting the MaxRAM JVM parameter sets Xmx to 25% (!!!) of the MaxRAM value. 
> So even setting it to 50% means that we take a risk...
> 
> So TBH, I see no silver bullet here and I'm open for suggestions. IMO if 
> you really know what you're doing, you should set Xmx yourself (this will 
> turn off setting Xmx automatically by the bootstrap script) and possibly set 
> limits (and adjust requests) in your Deployment Configuration (if you set 
> both requests and limits you will have the best QoS). 

Try put it this way:

I've just started an Infinispan ephemeral instance, I'm trying to load some 
data and it's running out of memory. What knobs/settings does the template 
offer to make sure I have big enough Infinispan instance(s) to handle my 
data? 

(Don't reply with: make your data smaller)

Cheers,

> 
> 
> Thanks,
> Sanne
> 
> >
> > Thanks,
> > Sebastian
> >
> > [1]
> > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> > [2]
> > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> > [4]
> > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
> >
> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño  wrote:
> >>
> >> Hi Sebastian,
> >>
> >> How do you change memory settings for Infinispan started via service
> >> catalog?
> >>
> >> The memory settings seem defined in [1], but this is not one of the
> >> parameters supported.
> >>
> >> I guess we want this as parameter?
> >>
> >> Cheers,
> >>
> >> [1]
> >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> >> --
> >> Galder Zamarreño
> >> Infinispan, Red Hat
> >>
> >
> > _______
> > infinispan-dev mailing list
> > infinispan-dev@lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

2017-09-25 Thread Galder Zamarreño
I don't understand your reply here... are you talking about Infinispan 
instances deployed on OpenShift Online? Or on premise?

I can understand having some limits for OpenShift Online, but these templates 
should also be applicable on premise, in which case I should be able to easily 
define how much memory I want for the data grid, and the rest of the parameters 
would be worked out by OpenShift/Kubernetes?

Demanding that on-premise users go and change their template just to adjust the 
memory settings seems to me to go against all the usability improvements we're 
trying to achieve.

Cheers,

> On 22 Sep 2017, at 14:49, Sebastian Laskawiec  wrote:
> 
> It's very tricky...
> 
> Memory is adjusted automatically to the container size [1] (of course you may 
> override it by supplying Xmx or "-n" as parameters [2]). The safe limit is 
> roughly Xmx=Xms=50% of container capacity (unless you use off-heap, in which 
> case you can squeeze Infinispan much, much more).
> 
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in 
> the burstable memory category so if there is additional memory in the node, we'll 
> get it. But if not, we won't go below 512 MB (and 500 mCPU).
> 
> Thanks,
> Sebastian
> 
> [1] 
> https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2] 
> https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
> 
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño  wrote:
> Hi Sebastian,
> 
> How do you change memory settings for Infinispan started via service catalog?
> 
> The memory settings seem defined in [1], but this is not one of the 
> parameters supported.
> 
> I guess we want this as parameter?
> 
> Cheers,
> 
> [1] 
> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Adjusting memory settings in template

2017-09-22 Thread Galder Zamarreño
Hi Sebastian,

How do you change memory settings for Infinispan started via service catalog?

The memory settings seem defined in [1], but this is not one of the parameters 
supported.

I guess we want this as parameter?

Cheers,

[1] 
https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Why do we need separate Infinispan OpenShift template repo?

2017-09-19 Thread Galder Zamarreño
That sounds like a good idea. 

My main worry with how things are right now is that the config will get 
outdated; you need to keep it in sync not only with version changes, but also 
with any default behaviour changes we make.

I'm happy for it to be a temporary solution for now.

Cheers,

> On 19 Sep 2017, at 11:30, Sebastian Laskawiec  wrote:
> 
> Hey Galder,
> 
> That sounds like an interesting idea but let me give some more context and 
> propose other options...
> 
> So during the first iteration I wanted to create templates inside OpenShift 
> Template Library [1]. However it turned out that this repo works in a very 
> specific way - it pulls templates from other repositories and puts them in 
> one, single place. According to my knowledge there are plans to use it in 
> OpenShift Online (I can tell you more offline).
> 
> This is why I came up with a separate repository only for templates and image 
> streams. When adding more and more features to the templates, my goal was to 
> externalize configuration into a ConfigMap. This makes it very convenient for 
> editing in the OpenShift UI. The main problem is how to put it there. The 
> easiest way was to hardcode it inside a template (and I decided to go that 
> way). But a much more robust approach would be to spin up a small container 
> (maybe an Init Container??) that would pull the proper version of Infinispan 
> and use the Kubernetes REST API to create that ConfigMap on the fly. 
> 
> I'm not sure if putting templates into the Infinispan repository would solve 
> our problems. Granted, we would have easy access to the configuration, but 
> providing a custom Docker image [2] (possibly with custom configuration) is 
> still something I expect to happen frequently. Also, I'm not a big fan of 
> putting many bits in a single repository.
> 
> So having said that, I believe the proper way is to implement a small 
> container (maybe an Init Container or just a script inside the same Docker 
> image) responsible for unpacking the desired Infinispan package and creating 
> the ConfigMap directly in Kubernetes. 
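> 
> A rough sketch of what that could look like (assuming the fabric8 
> kubernetes-client; the helper, names and namespace are illustrative):
> 
>     import io.fabric8.kubernetes.api.model.ConfigMap;
>     import io.fabric8.kubernetes.api.model.ConfigMapBuilder;
>     import io.fabric8.kubernetes.client.DefaultKubernetesClient;
>     import io.fabric8.kubernetes.client.KubernetesClient;
> 
>     // Build a ConfigMap holding the unpacked server XML and push it via
>     // the Kubernetes REST API (requires suitable RBAC permissions).
>     try (KubernetesClient client = new DefaultKubernetesClient()) {
>        String xml = loadUnpackedCloudXml();  // hypothetical helper
>        ConfigMap cm = new ConfigMapBuilder()
>              .withNewMetadata().withName("infinispan-config").endMetadata()
>              .addToData("cloud.xml", xml)
>              .build();
>        client.configMaps().inNamespace("myproject").createOrReplace(cm);
>     }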
> 
> WDYT?
> 
> Thanks,
> Sebastian
> 
> [1] https://github.com/openshift/library
> [2] 
> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L376
> 
> On Tue, Sep 19, 2017 at 10:34 AM Tristan Tarrant  wrote:
> On 9/19/17 9:42 AM, Galder Zamarreño wrote:
> > Hi,
> >
> > I was looking at the Infinispan OpenShift template repo [1], and I started 
> > questioning why this repo contains Infinispan configurations for the cloud 
> > [2]. Shouldn't these be part of the Infinispan Server distribution? 
> > Otherwise this repo is going to be somehow versioned depending on the 
> > Infinispan version...
> >
> > Which led me to think, should repo [1] exist at all? Why aren't all its 
> > contents part of infinispan/infinispan? The only reason that I could think 
> > for keeping a different repo is maybe if you want to version it according 
> > to different OpenShift versions, but that could easily be achieved in 
> > infinispan/infinispan with different folders.
> 
> It was created separately because its release cycle can be much faster.
> Once things settle we can bring it in.
> 
> Tristan
> 
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Why do we need separate Infinispan OpenShift template repo?

2017-09-19 Thread Galder Zamarreño
Hi,

I was looking at the Infinispan OpenShift template repo [1], and I started 
questioning why this repo contains Infinispan configurations for the cloud [2]. 
Shouldn't these be part of the Infinispan Server distribution? Otherwise this 
repo is going to be somehow versioned depending on the Infinispan version...

Which led me to think, should repo [1] exist at all? Why aren't all its 
contents part of infinispan/infinispan? The only reason that I could think for 
keeping a different repo is maybe if you want to version it according to 
different OpenShift versions, but that could easily be achieved in 
infinispan/infinispan with different folders.

Cheers,

[1] https://github.com/infinispan/infinispan-openshift-templates
[2] 
https://github.com/infinispan/infinispan-openshift-templates/blob/master/configurations/cloud-ephemeral.xml
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] How about moving Infinispan forums to StackOverflow?

2017-09-14 Thread Galder Zamarreño
Sounds like a good idea, and I've considered it for previous projects I worked 
on, but I remember there being some downsides. I'd suggest checking with Mark 
Newton (at Red Hat).

Cheers,

> On 8 Sep 2017, at 09:51, Tristan Tarrant  wrote:
> 
> Yes, I think it would be a good idea. I've seen a number of users post 
> in both places, but SO is definitely more discoverable by the wider 
> community and has a lower barrier to entry.
> 
> Tristan
> 
> On 9/8/17 9:04 AM, Sebastian Laskawiec wrote:
>> Hey guys,
>> 
>> I'm pretty sure you have seen: https://developer.jboss.org/thread/275956
>> 
>> How about moving Infinispan questions too?
>> 
>> Thanks,
>> Sebastian
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
> 
> -- 
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> _______
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Hot Rod secured by default

2017-09-14 Thread Galder Zamarreño
Gustavo's reply reflects the agreement reached: secured by default with an easy 
way to use it unsecured is the best middle ground IMO.

So, we've done the securing part partially, which needs to be completed by [2] 
(currently assigned to Tristan).

More importantly, we also need to complete [3] so that we ship the unsecured 
configuration, and then show people how to use it (docs, examples, etc.).

If you want to help, taking ownership of [3] would be best.

Cheers,

[2] https://issues.jboss.org/browse/ISPN-7815
[3] https://issues.jboss.org/browse/ISPN-7818

> On 6 Sep 2017, at 11:03, Katia Aresti  wrote:
> 
> @Emmanuel, sure it's not a big deal, but starting fast and smooth without any 
> trouble helps adoption. Concerning the ticket, there is already one that was 
> acted on. I can work on that, even if it is assigned to Galder now. 
> 
> @Gustavo
> Yes, reading the security part again more carefully, it does say that we need 
> those for the console. My head skipped that paragraph or I read it badly, and 
> I was wondering if it was something related to "roles" rather than a user. My 
> bad, because I read too fast sometimes and skip things! Maybe the paragraph 
> about security in the console should be moved down to the console part, which 
> is short to read now? When I read "see the security part below" there, I was 
> like: OK, the security is done!! :) 
> 
> Thank you for your replies !
> 
> Katia
> 
> 
> On Wed, Sep 6, 2017 at 10:52 AM, Gustavo Fernandes  
> wrote:
> Comments inlined
> 
> On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti  wrote:
> And then I want to go to the console, which requires me to enter the 
> user/password again. And it does not work. And I don't see how to disable 
> security. And I don't know what to do. And I'm like: why do I need security 
> at all here?
> 
> 
> The console credentials are specified with MGMT_USER/MGMT_PASS env variables, 
> did you try those? It will not work for APP_USER/APP_PASS.
> 
>  
> I wonder if you want to reconsider the "secured by default" point after my 
> experience. 
> 
> 
> The outcome of the discussion is that the clustered.xml will be secured by 
> default, but you should be able to launch a container without any security by 
> simply passing an alternate XML at startup, and we'll ship this XML with the 
> server. 
> 
> 
> Gustavo
>  
> 
> My 2 cents,
> 
> Katia
> 
> On Tue, May 9, 2017 at 2:24 PM, Galder Zamarreño  wrote:
> Hi all,
> 
> Tristan and I had chat yesterday and I've distilled the contents of the 
> discussion and the feedback here into a JIRA [1]. The JIRA contains several 
> subtasks to handle these aspects:
> 
> 1. Remove auth check in server's CacheDecodeContext.
> 2. Default server configuration should require authentication in all entry 
> points.
> 3. Provide an unauthenticated configuration that users can easily switch to.
> 4. Remove default username+passwords in docker image and instead show an 
> info/warn message when these are not provided.
> 5. Add capability to pass in app user role groups to docker image easily, so 
> that its easy to add authorization on top of the server.
> 
> Cheers,
> 
> [1] https://issues.jboss.org/browse/ISPN-7811
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
> > On 19 Apr 2017, at 12:04, Tristan Tarrant  wrote:
> >
> > That is caused by not wrapping the calls in PrivilegedActions in all the
> > correct places and is a bug.
> >
> > Tristan
> >
> > On 19/04/2017 11:34, Sebastian Laskawiec wrote:
> >> The proposal looks OK to me.
> >>
> >> But I would also like to highlight one thing - it seems you can't access
> >> secured cache properties using the CLI. This seems wrong to me (if you can
> >> invoke the CLI, in 99.99% of the cases you have access to the machine,
> >> so you can do whatever you want). It also breaks healthchecks in the
> >> Docker image.
> >>
> >> I would like to make sure we will address those concerns.
> >>
> >> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant wrote:
> >>
> >>Currently the "protected cache access" security is implemented as
> >>follows:
> >>
> >>- if authorization is enabled || client is on loopback
> >>allow
> >>
> >>The first check also implies that authentication needs to be in place,
> >>as the authorization checks need a valid Subject.
> >>
> >>Unfortunately authorization is very heavy-weight and actua

[infinispan-dev] DevNation Live talk - Big Data In Action w/ Infinispan

2017-09-14 Thread Galder Zamarreño
Hi,

Last week I gave a 30m talk for DevNation Live on Big Data In Action w/ 
Infinispan.

The video can be found here:
https://www.youtube.com/watch?v=ZUZeAfdmeX0

Slides:
https://speakerdeck.com/galderz/big-data-in-action-with-infinispan-2

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Join us online on 7th September for DevNation talk on Infinispan!

2017-09-06 Thread Galder Zamarreño
Hi all,

I will be doing a live tech talk for DevNation tomorrow, 7th September at 
12:00pm. More details here:

http://blog.infinispan.org/2017/09/join-us-online-on-7th-september-for.html

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Smoke test suite for hibernate-cache

2017-09-01 Thread Galder Zamarreño
Hey Martin,

Thanks for working on this. I'd suggest these:

- NaturalIdInvalidationTest
- EntityRegionAccessStrategyTest
- CollectionRegionAccessStrategyTest
- QueryRegionImplTest
- TimestampsRegionImplTest

They cover most of the functionality offered, but they run in more than just a 
few seconds... Each of those normally cycles through different configuration 
options, at both the Hibernate and Infinispan level, which is why they take 
more than just a few seconds.

Try those and see what you think.

Cheers,

> On 18 Aug 2017, at 14:26, Martin Gencur  wrote:
> 
> Hi all,
> I'm currently in the process of refreshing the "smoke" test suite for 
> Infinispan.
> There's a relatively new module called hibernate-cache. Could someone 
> suggest tests that should be part of the smoke test suite?
> Ideally just a few tens of test cases (maybe a few hundreds at most but 
> the test suite execution should finish in a few seconds).
> 
> A list of test classes as a reply to this email would be ideal:)
> 
> Thanks,
> Martin
> 
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] A tool for adjusting configuration

2017-08-28 Thread Galder Zamarreño
Before you start messing with XML itself, you might want to see what we do in 
Hibernate 2L.

We first load the XML configuration [1], and using the 
ConfigurationBuilderHolder we can swap cache configurations, we can tweak 
them...etc, and eventually we create a cache manager out of that.
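
Something along these lines (a minimal sketch assuming the standard parsing 
API; the replicated-cache tweak is just an example):

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.parsing.ConfigurationBuilderHolder;
    import org.infinispan.configuration.parsing.ParserRegistry;
    import org.infinispan.manager.DefaultCacheManager;

    // Parse cloud.xml into a builder holder, tweak it programmatically,
    // then build a cache manager from the tweaked configuration.
    ParserRegistry parserRegistry = new ParserRegistry();
    ConfigurationBuilderHolder holder = parserRegistry.parseFile("cloud.xml");
    holder.getDefaultConfigurationBuilder()
          .clustering().cacheMode(CacheMode.REPL_SYNC);
    DefaultCacheManager cacheManager = new DefaultCacheManager(holder, true);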

From the tweaked configuration, you could potentially chuck out XML. That's a 
better approach IMO than using XPath or similar tech.

Cheers,

[1] 
https://github.com/infinispan/infinispan/blob/master/hibernate-cache/src/main/java/org/infinispan/hibernate/cache/InfinispanRegionFactory.java#L547

> On 28 Aug 2017, at 13:41, Sebastian Laskawiec  wrote:
> 
> Hey,
> 
> Our cloud integration bits require a tool for adjusting the configuration for 
> certain use cases. A common example would be - take this `cloud.xml` file, 
> remove all caches, add a new, replicated cache as default one.
> 
> The tool should take either a configuration or a file name as input (e.g. 
> `config-tool --add-default-cache -f cloud.xml` or `cat cloud.xml | 
> config-tool --add-default-cache > cloud-new.xml`) and print out the 
> configuration either to standard out or to a file.
> 
> Do you have any ideas about what I could use to write such a tool? These 
> technologies come to mind:
>   • Perl
>   • Python
>   • Java (probably with some XPath library)
> Thoughts? Ideas? Recommendations?
> 
> Thanks,
> Sebastian
> -- 
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Hot Rod client sending data to itself - ISPN-8186

2017-08-14 Thread Galder Zamarreño
I did run a local test and indeed you get a bind exception if trying to bind a 
local port that's in use as server port:

https://github.com/galderz/java-sandbox/blob/master/src/main/java/j/net/LocalPortClash.java

I'll check JGRP source and JIRA and try to dig this further.

Cheers,

> On 14 Aug 2017, at 08:48, Bela Ban  wrote:
> 
> Right: the localHost:localPort combo of the client socket cannot be the 
> same as that of the remoteHost:remotePort.
> 
> Do you happen to have the link to the JGroups issue? I also remember 
> this, but I failed to find it by googling. Perhaps we can use the same 
> solution here that we used for the JGRP issue.
> 
> I vaguely recall we checked the client's local address:port against some 
> server address:port and closed/re-created it if it was the same.
> 
> On 11/08/17 20:56, Dennis Reed wrote:
>> On 08/11/2017 11:50 AM, Galder Zamarreño wrote:
>>> I must admit this scenario sounds very weird... how does Java allow a 
>>> local port to be bound to a port that's already in use by the server? It 
>>> doesn't make sense.
>> 
>> You cannot bind to a port that's already in use.
>> 
>> But if you're trying to connect to a port in the ephemeral range that's
>> not in use, and the OS happens to assign that same IP:port to the local
>> socket, it can connect to itself.
>> 
>> (We've run into this in JGroups before, and it was a pain to track down
>> what was going on).
>> 
>> -Dennis
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
> 
> -- 
> Bela Ban | http://www.jgroups.org
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Hot Rod client sending data to itself - ISPN-8186

2017-08-11 Thread Galder Zamarreño


> On 11 Aug 2017, at 16:30, Sanne Grinovero  wrote:
> 
> On 11 August 2017 at 14:14, Galder Zamarreño  wrote:
>> Hi,
>> 
>> Re: https://issues.jboss.org/browse/ISPN-8186
>> 
>> I've been looking at TRACE logs and what seems to happen is that sometimes, 
>> when the client needs to create a new Socket, it sends using the same local 
>> port as the Hot Rod server port. As a result, when the client sends 
>> something to the server, it also receives it, hence it ends up finding a 
>> request instead of a response. Analysis of the logs linked in the JIRA can 
>> be found in [1].
>> 
>> What I'm not sure about is how to fix this... There are ways to potentially 
>> pass a specific localport to a Socket [2] but this could be a bit messy: 
>> It'd require us to generate a random local port and see if that works, 
>> making sure that's not the server port...
>> 
>> However, I think the real problem we're having here is the fact that both 
>> the server and client are bound to the same IP address, 127.0.0.1. A simpler 
>> solution could be a way to get the server to be on a different IP address 
>> from the client, but what would that IP address be and how do we make sure 
>> it always works? Bind the server to eth0?
>> 
>> Any other ideas?
> 
> You could create multiple aliases for the same loopback device, and
> assign a different IP address to each of them.
> 
> But I fail to understand why you don't have specific ports for each
> purpose? That's the point for using ports in the first place, no?

^ Hmmm? 

The servers in the test use a random port that's available. The clients connect 
to these ports. The local ports used by the clients are random. You need to use 
APIs such as [2] to fix them.

So, what exactly are you talking about? Are you saying we should fix the local 
client ports? As I said in the first post, we could try to find a random port 
that's not the server one...

I must admit this scenario sounds very weird... how does Java allow a local 
port to be bound to a port that's already in use by the server? It doesn't 
make sense. I'll be trying to replicate this in a small unit test in the next 
few days...

Cheers,

> 
> Thanks,
> Sanne
> 
> 
>> 
>> Cheers,
>> 
>> [1] https://gist.github.com/galderz/b8549259ff65cb74505c9268eeec96a7
>> [2] 
>> http://docs.oracle.com/javase/6/docs/api/java/net/Socket.html#Socket(java.net.InetAddress,%20int,%20java.net.InetAddress,%20int)
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>> 
>> _______
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Hot Rod client sending data to itself - ISPN-8186

2017-08-11 Thread Galder Zamarreño
Hi,

Re: https://issues.jboss.org/browse/ISPN-8186

I've been looking at TRACE logs and what seems to happen is that sometimes, 
when the client needs to create a new Socket, it sends using the same local 
port as the Hot Rod server port. As a result, when the client sends something 
to the server, it also receives it, hence it ends up finding a request instead 
of a response. Analysis of the logs linked in the JIRA can be found in [1].

What I'm not sure about is how to fix this... There are ways to potentially 
pass a specific localport to a Socket [2] but this could be a bit messy: It'd 
require us to generate a random local port and see if that works, making sure 
that's not the server port...
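
For illustration, a minimal sketch of that workaround using the four-argument 
Socket constructor from [2] (serverPort is assumed to be known; a real version 
would also retry on BindException if the chosen local port is taken):

    import java.net.InetAddress;
    import java.net.Socket;
    import java.util.concurrent.ThreadLocalRandom;

    // Pick a random ephemeral local port, retrying if it happens to be the
    // server's port, then bind the client socket to it explicitly.
    static Socket connectAvoidingClash(int serverPort) throws Exception {
       InetAddress loopback = InetAddress.getByName("127.0.0.1");
       int localPort;
       do {
          localPort = ThreadLocalRandom.current().nextInt(49152, 65536);
       } while (localPort == serverPort);
       // The 4-arg constructor binds the client end to loopback:localPort
       return new Socket(loopback, serverPort, loopback, localPort);
    }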

However, I think the real problem we're having here is the fact that both the 
server and client are bound to the same IP address, 127.0.0.1. A simpler 
solution could be a way to get the server to be on a different IP address from 
the client, but what would that IP address be and how do we make sure it 
always works? Bind the server to eth0? 

Any other ideas?

Cheers,

[1] https://gist.github.com/galderz/b8549259ff65cb74505c9268eeec96a7
[2] 
http://docs.oracle.com/javase/6/docs/api/java/net/Socket.html#Socket(java.net.InetAddress,%20int,%20java.net.InetAddress,%20int)
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Late end invalidation delivery stops put from load - ISPN-8114

2017-08-03 Thread Galder Zamarreño
FYI: https://github.com/infinispan/infinispan/pull/5350

--
Galder Zamarreño
Infinispan, Red Hat

> On 3 Aug 2017, at 11:34, Galder Zamarreño  wrote:
> 
> Thx Radim, I'll look into wrapping PerCacheInboundInvocationHandler.
> 
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
>> On 2 Aug 2017, at 18:41, Radim Vansa  wrote:
>> 
>> On 08/02/2017 01:02 PM, Galder Zamarreño wrote:
>>> Hey Radim,
>>> 
>>> Re: https://issues.jboss.org/browse/ISPN-8114
>>> 
>>> I've been looking at the trace logs of this failure. I've extracted the 
>>> most interesting parts of this failure into [1].
>>> 
>>> What happens is that after loading the entries into the cache, the end 
>>> invalidation message to allow put from loads to succeed does not get 
>>> executed in time before the put from load is attempted. As a result of 
>>> this, the put from load does not happen and hence the entry is not loaded 
>>> into the cache.
>>> 
>>> The end invalidation message eventually gets through. There's a gap between 
>>> receiving the JGroups message and the actual execution, but that's due to 
>>> the delivery mode of the message.
>>> 
>>> I'm not sure how we should fix this. Options:
>>> 
>>> 1) A thread sleep before loading entries "might work" but for a CI test 
>>> this could always backfire with the right timing sets.
>>> 
>>> 2) Find a way to hook into the PFLValidator class and only load after we 
>>> know end invalidation has been received by all nodes.
>>> 
>>> 3) Make end invalidation message sync? This would be expensive. Even with 
>>> async, changing delivery mode might have worked here... but under the right 
>>> circumstances you could still get the same issue with async.
>> 
>> 1) is unreliable and wrong from a testsuite perspective, and 3) is 
>> completely wrong (making sth sync just because it's easier to test it that 
>> way).
>> 
>> Spying on PFVL is an option, but I would rather wrap 
>> PerCacheInboundInvocationHandler (I hope that's the correct way). Note that 
>> there's even a TestingUtil.wrapInboundInvocationHandler() helper method.
>> 
>> If I am missing the complexity, please elaborate.
>> 
>> Radim
>> 
>>> I'm keen on trying to find a potential solution using 2), but wondered if 
>>> you have other ideas.
>>> 
>>> Cheers,
>>> 
>>> [1] https://gist.github.com/galderz/0bce6dce16de018375e43e25c0cf3913
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>> 
>> 
>> 
>> -- 
>> Radim Vansa 
>> JBoss Performance Team
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Late end invalidation delivery stops put from load - ISPN-8114

2017-08-03 Thread Galder Zamarreño
Thx Radim, I'll look into wrapping PerCacheInboundInvocationHandler.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 2 Aug 2017, at 18:41, Radim Vansa  wrote:
> 
> On 08/02/2017 01:02 PM, Galder Zamarreño wrote:
>> Hey Radim,
>> 
>> Re: https://issues.jboss.org/browse/ISPN-8114
>> 
>> I've been looking at the trace logs of this failure. I've extracted the most 
>> interesting parts of this failure into [1].
>> 
>> What happens is that after loading the entries into the cache, the end 
>> invalidation message to allow put from loads to succeed does not get 
>> executed in time before the put from load is attempted. As a result of this, 
>> the put from load does not happen and hence the entry is not loaded into the 
>> cache.
>> 
>> The end invalidation message eventually gets through. There's a gap between 
>> receiving the JGroups message and the actual execution, but that's due to 
>> the delivery mode of the message.
>> 
>> I'm not sure how we should fix this. Options:
>> 
>> 1) A thread sleep before loading entries "might work" but for a CI test this 
>> could always backfire with the right timing sets.
>> 
>> 2) Find a way to hook into the PFLValidator class and only load after we 
>> know end invalidation has been received by all nodes.
>> 
>> 3) Make end invalidation message sync? This would be expensive. Even with 
>> async, changing delivery mode might have worked here... but under the right 
>> circumstances you could still get the same issue with async.
> 
> 1) is unreliable and wrong from a testsuite perspective, and 3) is completely 
> wrong (making sth sync just because it's easier to test it that way).
> 
> Spying on PFVL is an option, but I would rather wrap 
> PerCacheInboundInvocationHandler (I hope that's the correct way). Note that 
> there's even a TestingUtil.wrapInboundInvocationHandler() helper method.
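> 
> Roughly what that wrapping could look like, as a sketch only (the helper's 
> exact signature, the lambda-friendliness of the handler interface and the 
> command check are assumptions to verify against the code):
> 
>     import java.util.concurrent.CountDownLatch;
>     import java.util.concurrent.TimeUnit;
> 
>     // Wrap the inbound handler and trip a latch once the end invalidation
>     // command has been delivered, so the test can wait for it before
>     // attempting the put from load.
>     CountDownLatch endInvalidationDelivered = new CountDownLatch(1);
>     TestingUtil.wrapInboundInvocationHandler(cache, delegate ->
>           (command, reply, order) -> {
>              if (command instanceof EndInvalidationCommand) {
>                 endInvalidationDelivered.countDown();
>              }
>              delegate.handle(command, reply, order);
>           });
>     // ... load the entries, then:
>     assertTrue(endInvalidationDelivered.await(10, TimeUnit.SECONDS));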
> 
> If I am missing the complexity, please elaborate.
> 
> Radim
> 
>> I'm keen on trying to find a potential solution using 2), but wondered if 
>> you have other ideas.
>> 
>> Cheers,
>> 
>> [1] https://gist.github.com/galderz/0bce6dce16de018375e43e25c0cf3913
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
> 
> 
> -- 
> Radim Vansa 
> JBoss Performance Team


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Late end invalidation delivery stops put from load - ISPN-8114

2017-08-02 Thread Galder Zamarreño
Hey Radim,

Re: https://issues.jboss.org/browse/ISPN-8114

I've been looking at the trace logs of this failure. I've extracted the most 
interesting parts of this failure into [1].

What happens is that after loading the entries into the cache, the end 
invalidation message to allow put from loads to succeed does not get executed 
in time before the put from load is attempted. As a result of this, the put 
from load does not happen and hence the entry is not loaded into the cache.

The end invalidation message eventually gets through. There's a gap between 
receiving the JGroups message and the actual execution, but that's due to the 
delivery mode of the message.

I'm not sure how we should fix this. Options:

1) A thread sleep before loading entries "might work" but for a CI test this 
could always backfire with the right timing sets.

2) Find a way to hook into the PFLValidator class and only load after we know 
end invalidation has been received by all nodes.

3) Make end invalidation message sync? This would be expensive. Even with async, 
changing delivery mode might have worked here... but under the right 
circumstances you could still get the same issue with async.

I'm keen on trying to find a potential solution using 2), but wondered if you 
have other ideas.

Cheers,

[1] https://gist.github.com/galderz/0bce6dce16de018375e43e25c0cf3913
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Docker image authentication

2017-07-26 Thread Galder Zamarreño
Looks great Sebastian! Great work :)

--
Galder Zamarreño
Infinispan, Red Hat

> On 13 Jul 2017, at 12:14, Sebastian Laskawiec  wrote:
> 
> Hey guys,
> 
> I just wanted to give you a heads on some breaking change on our Docker 
> image: https://github.com/jboss-dockerfiles/infinispan/pull/55
> 
> After that PR gets merged, the application and management user/password pairs 
> could be specified via environmental variables, passed into bootstrap script 
> as parameters or autogenerated. Note there is no pre-configured user/password 
> as it was before. 
> 
> Please let me know if you have any questions.
> 
> Thanks,
> Sebastian
> 
> 
> -- 
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Conflict Manager and Partition Handling Blog

2017-07-25 Thread Galder Zamarreño
Oh, if we can't find a simple tutorial for it, there's always 
https://github.com/infinispan-demos :)

--
Galder Zamarreño
Infinispan, Red Hat

> On 25 Jul 2017, at 17:11, Galder Zamarreño  wrote:
> 
> One more thing: have you thought about how we could have a simple tutorial on 
> this feature?
> 
> It'd be great to find a simple, reduced, example to show it off :)
> 
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
>> On 25 Jul 2017, at 16:54, Galder Zamarreño  wrote:
>> 
>> Hey Ryan,
>> 
>> Very detailed blog post! Great work on both the post and the feature! :D
>> 
>> While reading, the following question came to my mind: how does Infinispan 
>> determine there's a conflict? Does it rely on .equals() based equality?
>> 
>> A follow-up would be: whether in the future this could be pluggable, e.g. 
>> when comparing a version field is enough to realise there's a conflict. As 
>> opposed to relying on .equals(), if that's what's being used inside :)
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>>> On 17 Jul 2017, at 14:16, Ryan Emerson  wrote:
>>> 
>>> Hi Everyone,
>>> 
>>> Here's a blog post on the introduction of ConflictManager and the recent 
>>> changes to partition handling. 
>>> 
>>> http://blog.infinispan.org/2017/07/conflict-management-and-partition.html
>>> 
>>> Cheers
>>> Ryan
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
> 


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Conflict Manager and Partition Handling Blog

2017-07-25 Thread Galder Zamarreño
One more thing: have you thought about how we could have a simple tutorial on 
this feature?

It'd be great to find a simple, reduced, example to show it off :)

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 25 Jul 2017, at 16:54, Galder Zamarreño  wrote:
> 
> Hey Ryan,
> 
> Very detailed blog post! Great work on both the post and the feature! :D
> 
> While reading, the following question came to my mind: how does Infinispan 
> determine there's a conflict? Does it rely on .equals() based equality?
> 
> A follow-up would be: whether in the future this could be pluggable, e.g. 
> when comparing a version field is enough to realise there's a conflict. As 
> opposed to relying on .equals(), if that's what's being used inside :)
> 
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
>> On 17 Jul 2017, at 14:16, Ryan Emerson  wrote:
>> 
>> Hi Everyone,
>> 
>> Here's a blog post on the introduction of ConflictManager and the recent 
>> changes to partition handling. 
>> 
>> http://blog.infinispan.org/2017/07/conflict-management-and-partition.html
>> 
>> Cheers
>> Ryan
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Conflict Manager and Partition Handling Blog

2017-07-25 Thread Galder Zamarreño
Hey Ryan,

Very detailed blog post! Great work on both the post and the feature! :D

While reading, the following question came to my mind: how does Infinispan 
determine there's a conflict? Does it rely on .equals() based equality?

A follow-up would be: whether in the future this could be pluggable, e.g. when 
comparing a version field is enough to realise there's a conflict. As opposed 
to relying on .equals(), if that's what's being used inside :)
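
Something as small as this would do (a hypothetical shape, not an existing 
API; Book and getVersion() are made-up names):

    // A pluggable check: two replicas conflict when their versions differ,
    // without ever calling equals() on the whole value.
    interface ConflictDetector<V> {
       boolean conflicting(V v1, V v2);
    }

    ConflictDetector<Book> byVersion =
          (a, b) -> a.getVersion() != b.getVersion();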

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 17 Jul 2017, at 14:16, Ryan Emerson  wrote:
> 
> Hi Everyone,
> 
> Here's a blog post on the introduction of ConflictManager and the recent 
> changes to partition handling. 
> 
> http://blog.infinispan.org/2017/07/conflict-management-and-partition.html
> 
> Cheers
> Ryan
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Important feedback for transcoding work - Re: Quick fix for ISPN-7710

2017-07-25 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 19 Jun 2017, at 13:17, Dan Berindei  wrote:
> 
> On Fri, Jun 16, 2017 at 1:07 PM, Galder Zamarreño  wrote:
>> 
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>>> On 15 Jun 2017, at 15:25, Adrian Nistor  wrote:
>>> 
>>> Galder, I've seen AddProtobufTask in March or April when you mentioned this 
>>> issue on the devlist; that approach can work for protostream marshallers or 
>>> any other code bits that the Cache does not depend on during startup, and 
>>> which can be deployed anytime later. In this category we currently have : 
>>> filters, converters. These are currently deployed with the help of a 
>>> DeploymentUnitProcessor, but we could have done it using a ServerTask as 
>>> well.
>> 
>> ^ I'm not sure we had ServerTasks in place when we had filters and 
>> converters... But if we had server tasks then, we should have used that 
>> path. My bad if we didn't do it :\
>> 
>>> Now that we took the route of DUP, I think we should continue in a 
>>> consistent manner and use it for other 'deployables' we identify from now 
>>> on, ie. the protobuf entity marshallers.
>> 
>> ^ Having written DUPs, I consider them to be a royal PITA. So, I'm against 
>> that.
>> 
>>> As for encoders, lucene analyzers, compatibility marshaller, event 
>>> marshaller - these are all needed during cache startup. We need to come up 
>>> with something for these, so I propose to look them up using the 
>>> "moduleId:slot:className" convention.
>> 
>> As I see it, encoders/compatibility-marshaller/event-marshaller might not be 
>> needed on startup. If data is kept in binary and only deserialized lazily 
>> when needed, you only need them when you're going to do what you need...
>> 
> 
> What if you start a node and a client immediately tries to register an
> event listener?

If the server-side event listener requires any deserialization, I'd expect the 
node on startup to have a way to load the encoder to be used, either via config 
or a server task that's deployed by the user or pre-registered by the server.

> 
> Not sure about the others, but for the lucene analyzers, I assume some
> configurations will have to analyze/index entries that we receive via
> state transfer during startup.

Good point. This is a use case where unmarshalling/deserialization/decoding 
would be required on startup, to be able to index data.

> 
> Dan
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] On the scattered cache blog post

2017-07-04 Thread Galder Zamarreño
Hey Radim,

Awesome blog post on scattered cache [1]!

I think there's some extra information to be added or to be clarified in the 
blog itself:

1. From what I understand, scattered cache should help the embedded use case 
primarily? When using Hot Rod, the primary owner is always hit, so the penalty 
of landing in a non-owner and having to do 2 RPCs is not there. Am I right? 
This should be clarified in the blog post.

2. "As you can see, this algorithm cannot be easily extended to multiple 
owners" <- Do you mean users should never set num owners to 3 or higher? How 
would the system work if num owners was 1?

Some of these questions might have been answered in the design doc, but as a 
user, I should not be expected to read the design document to answer these 
questions.

If these questions are answered in the user documentation, that would be fine 
but I feel these are things that should be explained/clarified in the blog post 
itself.

Cheers,

[1] http://blog.infinispan.org/2017/07/scattered-cache.html
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Feedback for PR 5233 needed

2017-07-03 Thread Galder Zamarreño
I already explained in another email thread, but let me make it explicit here:

The way compatibility mode works has a big influence on how useful redeploying 
marshallers is.

If compatibility is lazy, redeployment of the marshaller could be useful since 
all the conversions happen lazily: conversions would only happen when data is 
requested. So, if data comes from Hot Rod as byte[], it might be converted into 
a POJO only when read. If data comes as a POJO, say from embedded, you'd keep 
it as is, and convert to binary only when it's read from Hot Rod.

If compatibility is eager, the conversion happens on write, and that can have 
a negative impact if the marshaller is redeployed. If data has been 
unmarshalled with marshaller A, and then you deploy marshaller B, it might 
result in converting the unmarshalled POJO into a binary format that the 
client can't understand.

So, IMO, if compat mode is lazy, redeployment could work... but I think 
redeployments add a layer of complexity that users might not really need. I'd 
rather not have redeployments and instead focus on rolling upgrades or the 
freezing capabilities Tristan mentioned, to be able to bring a server down and 
up without issues for the user. 

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 3 Jul 2017, at 09:52, Tristan Tarrant  wrote:
> 
> I like it a lot.
> To follow up on my comment on the PR, but to get a wider distribution, 
> we really need to think about how to deal with redeployments and 
> resource restarts.
> I think restarts are unavoidable: a redeployment means dumping and 
> replacing a classloader with all of its classes. There are two 
> approaches I can think of:
> 
> - "freezing" and "thawing" a cache via some form of persistence (which 
> could also mean adding a temporary cache store)
> - separate the wildfly service lifecycle from the cache lifecycle, 
> detaching/reattaching a cache without stopping when the wrapping service 
> is restarted.
> 
> Tristan
> 
> On 6/29/17 5:20 PM, Adrian Nistor wrote:
>> People, don't be shy, the PR is in now, but things can still change
>> based on you feedback. We still have two weeks until we release the Final.
>> 
>> On 06/29/2017 03:45 PM, Adrian Nistor wrote:
>>> This pr [1] adds a new approach for defining the compat marshaller class
>>> and the indexed entity classes (in server), and the same approach could
>>> be used in future for deployment of encoders,  lucene analyzers and
>>> possilby other code bits that a user would want to add a server in order
>>> to implement an extension point that we support.
>>> 
>>> Your feedback is wellcome!
>>> 
>>> [1] https://github.com/infinispan/infinispan/pull/5233
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
> 
> -- 
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Reactive Streams + RxJava

2017-06-20 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 19 Jun 2017, at 15:25, William Burns  wrote:
> 
> 
> 
> On Mon, Jun 19, 2017 at 8:34 AM Emmanuel Bernard  
> wrote:
> You’re thinking about a pure implementation play, correct? RxJava or the 
> Reactive Stream constructs would not be exposed to the user as API. Am I 
> correct?
> 
> Yes, that is correct. This is only for internal usage. My thought is maybe 
> when we finally move to Java 9, we may expose the Flow API at some point, but 
> nothing any time soon.

^ Ok, that wasn't clear in the original email.

If it's only for internal usage and no RxJava APIs are exposed, then I'm fine 
with it.

>  
> Also for posterity, we had backchannel chats about it and you said you felt 
> vert.x was not necessarily addressing your needs. Could you elaborate a bit 
> here?
> 
> The main difference is that RxJava [1] has a full suite of utility methods 
> around reactive streams [2]. Vert.x has some APIs around reactive streams 
> [3], but it is rather limited.
> 
> RxJava is about publishing/consuming your own streams without tying to any 
> threading model (everything can be done on main thread for example). And it 
> provides methods of mapping streams to others and even returning blocking 
> APIs back to the user. Some methods in particular I was looking at were 
> merge, delay, blockingIterable, rebatch and some others.
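> 
> For instance, something along these lines (illustrative only; the streams 
> here are stand-ins for per-node streams of cache data, and "rebatch" is 
> rebatchRequests in RxJava 2):
> 
>     import io.reactivex.Flowable;
> 
>     // Merge two publishers, bound buffering by requesting in batches,
>     // and hand a blocking Iterable back to the caller.
>     Flowable<String> fromNodeA = Flowable.just("a1", "a2");
>     Flowable<String> fromNodeB = Flowable.just("b1", "b2");
>     Iterable<String> results = Flowable.merge(fromNodeA, fromNodeB)
>           .rebatchRequests(128)
>           .blockingIterable();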
> 
> Vert.x is really beneficial with reactive programming built upon existing 
> tools (HTTP, JDBC and others). It also has the event bus and others but this 
> is built on the vert.x event loop, which we just don't need in Infinispan. We 
> already have our own threading model and communication over JGroups.
> 
> My usage is to handle multiple streams of data coming from different 
> locations and merging them together and doing some additional 
> transformations. I could do this with either of the API, however RxJava 
> already did a lot of heavy lifting I would have to do otherwise.
> 
> Also, another side note is that it might be helpful for vert.x to change over 
> to something more like the upcoming Flow API from Java 9 (maybe [4], which is 
> pretty much identical currently). RxJava uses this as a base class for its 
> Publish/Subscribe.
> 
> [1] https://github.com/ReactiveX/RxJava/tree/2.x/src/main/java/io/reactivex
> [2] 
> http://reactivex.io/RxJava/2.x/javadoc/io/reactivex/Flowable.html#blockingIterable()
> [3] 
> https://github.com/eclipse/vert.x/tree/master/src/main/java/io/vertx/core/streams
> [4] 
> https://mvnrepository.com/artifact/org.reactivestreams/reactive-streams/1.0.0
>  
> 
> Emmanuel
> 
>> On 15 Jun 2017, at 23:20, William Burns  wrote:
>> 
>> I was thinking more about [1] and I found that I was going to implement 
>> basically reactive streams. What we have now in GitHub is similar, but it 
>> uses a very crude method of blocking the thread to prevent back pressure. 
>> This can then cause severe issues, as many users have found out when they 
>> don't close the iterator.
>> 
>> Unfortunately reactive streams is just a spec. I am proposing to add RxJava 
>> [2] as a dependency [3] in the core module to provide access to reactive 
>> streams and the various conversion methods. This library adds a bunch of 
>> support for built-in back pressure, transformations and much more, which 
>> would reduce the amount of code I would need to write substantially.
>> 
>> In regards to timing, I am thinking this is too close for 9.1, so maybe 9.2 
>> or higher.
>> 
>> What do you guys think?
>> 
>> [1] https://issues.jboss.org/browse/ISPN-7865
>> [2] https://github.com/ReactiveX/RxJava
>> [3] https://mvnrepository.com/artifact/io.reactivex.rxjava2/rxjava/2.1.0
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Reactive Streams + RxJava

2017-06-20 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 15 Jun 2017, at 23:20, William Burns  wrote:
> 
> I was thinking more about [1] and I found that I was going to implement 
> basically reactive streams. What we have now in GitHub is similar, but it uses 
> a very crude method of blocking the thread to prevent back pressure. This can 
> then cause severe issues, as many users have found out when they don't close 
> the iterator.
> 
> Unfortunately reactive streams is just a spec.

Not just a spec. Java 9 will come with a set of interfaces that allow you to 
implement reactive streams:
http://download.java.net/java/jdk9/docs/api/java/util/concurrent/Flow.html

> I am proposing to add RxJava [2] as a dependency [3] in the core module to 
> provide access to reactive streams and the various conversion methods. This 
> library adds a bunch of support for built-in back pressure, transformations 
> and much more, which would reduce the amount of code I would need to write 
> substantially.

I don't think I like the idea of tying our core module to RxJava. Back when I 
was doing the functional map API we did consider using something like RxJava 
for the async multi-value streams, but decided against it to avoid tying 
ourselves to that.

A better alternative, if the APIs fit your use case, would be to depend on the 
Java 9 APIs (oswego might have a Java 8 version of those interfaces) and maybe 
use RxJava as the implementation of that API, if there's a suitable 
implementation for it.
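
For example (a sketch with hypothetical names, using the org.reactivestreams 
interfaces as the Java 8 stand-in for the Java 9 Flow types): the public 
signature exposes only the spec's Publisher, while RxJava stays an 
implementation detail behind it:

    import io.reactivex.Flowable;
    import org.reactivestreams.Publisher;

    // The API module only sees org.reactivestreams.Publisher; Flowable
    // (which implements Publisher) never leaks out of the implementation.
    public Publisher<byte[]> valuePublisher(Iterable<byte[]> values) {
       return Flowable.fromIterable(values);
    }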

If there's no choice but to use RxJava for the API part, I'd rather that 
happens in separate module to core.

Cheers,

> 
> In regards to timing, I am thinking this is too close for 9.1, so maybe 9.2 
> or higher.
> 
> What do you guys think?
> 
> [1] https://issues.jboss.org/browse/ISPN-7865
> [2] https://github.com/ReactiveX/RxJava
> [3] https://mvnrepository.com/artifact/io.reactivex.rxjava2/rxjava/2.1.0
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Important feedback for transcoding work - Re: Quick fix for ISPN-7710

2017-06-16 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 15 Jun 2017, at 15:25, Adrian Nistor  wrote:
> 
> Galder, I've seen AddProtobufTask in March or April when you mentioned this 
> issue on the devlist; that approach can work for protostream marshallers or 
> any other code bits that the Cache does not depend on during startup, and 
> which can be deployed anytime later. In this category we currently have : 
> filters, converters. These are currently deployed with the help of a 
> DeploymentUnitProcessor, but we could have done it using a ServerTask as well.

^ I'm not sure we had ServerTasks in place when we had filters and 
converters... But if we had server tasks then, we should have used that path. 
My bad if we didn't do it :\

> Now that we took the route of DUP, I think we should continue in a consistent 
> manner and use it for other 'deployables' we identify from now on, ie. the 
> protobuf entity marshallers.

^ Having written DUPs, I consider them to be a royal PITA. So, I'm against that.

> As for encoders, lucene analyzers, compatibility marshaller, event marshaller 
> - these are all needed during cache startup. We need to come up with 
> something for these, so I propose to look them up using the 
> "moduleId:slot:className" convention.

As I see it, encoders/compatibility-marshaller/event-marshaller might not be 
needed on startup. If data is kept in binary and only deserialized lazily when 
needed, you only need them when you're going to do what you need...

To be more precise, yesterday Gustavo and I had a discussion, and you'd maybe 
need to pre-register the encoders when a server task is deployed so that the 
server task, when executed on other nodes, can find the encoder. E.g. here is 
some pseudo-code:

on ServerTask initialization:
Encoder encoder = new ProtoStreamEncoder(customMarshallers)
cache.registerEncoder("blah", encoder);

on ServerTask call:
cache.withEncoder("blah").values().stream()

^ Btw, I know that's not how withEncoder() works... This would be much more 
flexible than forcing encoders to be defined on startup. The server task 
deployment could ship the marshallers for the POJOs since it already has to 
ship the POJOs to be able to use them.

I'd like to hear which use cases would really require those components you 
mention to be available on startup... assuming we don't transform data into 
its deserialized format by default and keep data in binary format instead...

Cheers,

> 
> 
> On 06/15/2017 03:40 PM, Galder Zamarreño wrote:
>> @Gustavo, some important info for your transcoding work below:
>> 
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>>> On 15 Jun 2017, at 11:05, Adrian Nistor  wrote:
>>> 
>>> Hi Galder,
>>> 
>>> this fix is acceptable for now as it quickly enables users to use 
>>> CompatibilityProtoStreamMarshaller (provided by Infinispan), but in the 
>>> long run we would want users to be able to specify a custom marshaller 
>>> class that comes from a user supplied module or even a deployment - the 
>>> general case.
>>> 
>>> With the introduction of encoders and deprecation of compat mode we still 
>>> have the same class loading issue in the general case. So I propose to 
>>> refine a bit our approach and instead of specifying just a class name we 
>>> should use a naming schema like "moduleId:slot:className", giving users the 
>>> ability to specify a class that comes from a different module or 
>>> deployment. I'm currently experimenting with this. I'll come back with 
>>> results soon.
>>> 
>>> There are also other code bits that need to be deployed in the server ASAP: 
>>> protostream entity marshallers, lucene analyzers. I'm thinking these could 
>>> all benefit from the same solution.
>> I was able to easily get around the issue of deploying protostream entity 
>> marshallers by simply adding a server task that did that:
>> 
>> https://github.com/infinispan-demos/swiss-transport-datagrid/blob/master/analytics/analytics-domain/src/main/java/delays/java/stream/proto/AddProtobufTask.java
>> 
>> In fact, that server task serves as a way to add domain POJOs to the 
>> system... So when the server starts receiving data, it can deserialize it 
>> without problems.
>> 
>> However, there's a potential problem here you might want to consider in your 
>> work: If I deploy the add protobuf task, write data, then redeploy the add 
>> protobuf task, then retrieve some data, the system blows up because the 
>> classloader of the domain POJOs has changed. So you'd start seeing 
>> ClassCastException errors...

[infinispan-dev] Important feedback for transcoding work - Re: Quick fix for ISPN-7710

2017-06-15 Thread Galder Zamarreño
@Gustavo, some important info for your transcoding work below:

--
Galder Zamarreño
Infinispan, Red Hat

> On 15 Jun 2017, at 11:05, Adrian Nistor  wrote:
> 
> Hi Galder,
> 
> this fix is acceptable for now as it quickly enables users to use 
> CompatibilityProtoStreamMarshaller (provided by Infinispan), but in the long 
> run we would want users to be able to specify a custom marshaller class that 
> comes from a user supplied module or even a deployment - the general case.
> 
> With the introduction of encoders and deprecation of compat mode we still 
> have the same class loading issue in the general case. So I propose to refine 
> a bit our approach and instead of specifying just a class name we should use 
> a naming schema like "moduleId:slot:className", giving users the ability to 
> specify a class that comes from a different module or deployment. I'm 
> currently experimenting with this. I'll come back with results soon.
> 
> There are also other code bits that need to be deployed in the server ASAP: 
> protostream entity marshallers, lucene analyzers. I'm thinking these could 
> all benefit from the same solution.

I was able to easily get around the issue of deploying protostream entity 
marshallers by simply adding a server task that did that:

https://github.com/infinispan-demos/swiss-transport-datagrid/blob/master/analytics/analytics-domain/src/main/java/delays/java/stream/proto/AddProtobufTask.java

In fact, that server task serves as a way to add domain POJOs to the 
system... So when the server starts receiving data, it can deserialize it 
without problems.

However, there's a potential problem here you might want to consider in your 
work: If I deploy the add protobuf task, write data, then redeploy the add 
protobuf task, then retrieve some data, the system blows up because the 
classloader of the domain POJOs has changed. So you'd start seeing 
ClassCastException errors...

That's why I think that even though in the past we'd store objects in 
deserialized form, this could be problematic because you're committing to 
domain objects with a given classloader...

The more I think about it, the more I think we should keep data only in binary 
format in the server. IOW, we should never try to keep it in deserialized 
format. That way, no matter how many times the domain objects are redeployed, 
assuming no compile-binary changes, the lazy transcoding would work without 
problems.

> Btw, what is the relation between ISPN-7814 and ISPN-7710 ?

The relationship between them is explained here:

https://github.com/infinispan-demos/swiss-transport-datagrid#infinispan-server-docker-image

I would strongly recommend that you give that demo repository a try, you might 
get new ideas on the work you're doing.

Cheers,

> 
> Adrian
> 
> On 06/14/2017 06:35 PM, Galder Zamarreño wrote:
>> Hi all,
>> 
>> I'm seeing more and more people trying to do stuff like I did in [1] WRT 
>> running server tasks in the server.
>> 
>> One of the blockers is [2]. I know we have transcoding coming up but I 
>> wondered if we could implement the quick hack of referencing 
>> remote-query.server module from root org.infinispan module.
>> 
>> So, in essence, adding the following to org/infinispan/main/module.xml:
>> 
>>   
>> 
>> Once ISPN-7710 is in place, along with ISPN-7814, users can run the demos in 
>> [1] without a custom server build.
>> 
>> Cheers,
>> 
>> [1] https://github.com/infinispan-demos/swiss-transport-datagrid
>> [2] https://issues.jboss.org/browse/ISPN-7710
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Quick fix for ISPN-7710

2017-06-14 Thread Galder Zamarreño
Hi all,

I'm seeing more and more people trying to do stuff like I did in [1] WRT 
running server tasks in the server.

One of the blockers is [2]. I know we have transcoding coming up but I wondered 
if we could implement the quick hack of referencing remote-query.server module 
from root org.infinispan module.

So, in essence, adding the following to org/infinispan/main/module.xml:

  

Once ISPN-7710 is in place, along with ISPN-7814, users can run the demos in 
[1] without a custom server build.

Cheers,

[1] https://github.com/infinispan-demos/swiss-transport-datagrid
[2] https://issues.jboss.org/browse/ISPN-7710
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-14 Thread Galder Zamarreño
I don't yet have an opinion on this dependency that depends on everything, but 
I've created these two JIRAs which most of us seem to agree on:

* Remove unnecessary provided dependencies which make depending on Infinispan 
harder. [1]

* Uber jars should not be pushed to Maven. Instead they should be treated just 
like other zip distros that are not pushed. [2]

I think we should fix [1] in 9.0.x and [2] can wait to 9.1.x.

Cheers,

[1] https://issues.jboss.org/browse/ISPN-7930
[2] https://issues.jboss.org/browse/ISPN-7931
--
Galder Zamarreño
Infinispan, Red Hat

> On 11 Jun 2017, at 20:35, Sanne Grinovero  wrote:
> 
> On 9 June 2017 at 16:13, Alan Field  wrote:
>> I really like this idea. It is similar to one of the solutions hinted at in 
>> the Netty issue that Gustavo pointed at. [1] The suggestion there was to 
>> replace their current uber jar that contains all of the shaded dependencies 
>> with a JAR that just depended on everything. Then you would get the benefit 
>> of being able to use a single Maven dependency to pull in all of the JAR 
>> files without the issues that shading brings.
> 
> The proposal is to bundle APIs only, but we'll always have tons of
> optional (and provided) extension points, e.g. we can't include all
> CacheStore(s), not least as many live in separate repositories.
> 
> Similarly, many "experimental modules" might provide additional APIs
> which will need to be accessed like the current Query API: *decoupled*
> - at least until there's an agreement for inclusion in such an
> umbrella API.
> This is unavoidable, as someone might start working on such a module
> without telling us... Incidentally, to foster a community of extensions
> I'd rather have our own API showcase how it works.
> 
> Also some Java EE APIs - e.g. the JTA API - should never be included;
> that's "best practice" in community standards - ugly and we could
> complain that Maven could deal better with it, containers could also
> deal better with it, but that's how it is.
> 
> So it's an improvement in the API but it's not sorting out the
> problems for people not familiar with dependency management. And I
> believe that's ok! Just making sure we're agreeing on the goals and
> non-goals.
> 
> Considering it's a very minor improvement in usability I'm not sold
> on this direction: you'll have ONE MORE JAR and one more layer of
> silly indirection to maintain. You might state it's trivial but I
> don't believe that, as the definition of which components to include
> is necessarily going to be fluid, making this "API" more prone to wild
> changes.
> 
> The one liner to get a reference to the SearchFactory is no big deal
> and could be solved better by having CDI and Spring extensions - while
> maintaining good decoupling - but if it's the path to convince you all
> to take out the uber jars then by all means do it right now ;)
> 
> Thanks,
> Sanne
> 
>> 
>> Thanks,
>> Alan
>> 
>> [1] https://github.com/netty/netty/issues/4671
>> 
>> - Original Message -
>>> From: "Radim Vansa" 
>>> To: infinispan-dev@lists.jboss.org
>>> Sent: Friday, June 9, 2017 3:29:44 AM
>>> Subject: Re: [infinispan-dev] Why JCache embedded has core as provided 
>>> dependency
>>> 
>>> Katia has recently pointed out some usability flaws, and we have
>>> discussed a central point class that would allow you to explore the API:
>>> instead of *knowing* about org.infinispan.query.Search or
>>> org.infinispan.counters.EmbeddedCounterManagerFactory you'd just call
>>> 
>>> Infinispan ispn = Infinispan.newInstance();
>>> ispn.search().someQueryMethod(...);
>>> ispn.counters().someCounterMethod(...);
>>> ispn.cacheManager().getCache(...);
>>> 
>>> An umbrella module that would contain this 'discovery API' would need
>>> all the dependencies, so that would be a perfect replacement for the
>>> embedded maven artifact. Shouldn't be that much work to hack this
>>> together - what do you think it should be called? infinispan-api (but
>>> it would be nicer to reserve this if we ever manage to create the
>>> 'public API' module, with interfaces only), infinispan-facade,
>>> infinispan-surface? We could even use infinispan-embedded, but that
>>> would cause some confusion if we distributed an infinispan-embedded uberjar
>>> and an infinispan-embedded umbrella artifact.
>>> 
>>> Radim
>>> 
>>> On 06/08/2017 08:04 PM, Alan Field
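
For what it's worth, the 'discovery API' Radim sketches above could look 
roughly like this (a hypothetical shape, not an agreed design; only 
EmbeddedCacheManager is a real type here):

  import org.infinispan.manager.EmbeddedCacheManager;

  // Hypothetical umbrella entry point along the lines sketched above;
  // the names and return types are placeholders.
  public interface Infinispan {
      static Infinispan newInstance() {
          throw new UnsupportedOperationException("sketch only");
      }
      EmbeddedCacheManager cacheManager(); // from infinispan-core
      Object search();                     // would expose the query module's API
      Object counters();                   // would expose the counters module's API
  }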

Re: [infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-14 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 9 Jun 2017, at 08:37, Sebastian Laskawiec  wrote:
> 
> I agree with Alan here. Maven Central is a free "download area", so I 
> wouldn't give it up for free. BTW, what is the point of creating and not 
> shipping them?

I never said we wouldn't ship them. 

I'm happy with uber jars being produced but they should just not be available 
as a Maven dependency IMO.

So, you'd ship them just like the other non-Maven stuff we produce, direct 
downloads.

> 
> I would lean towards removing them completely or limiting the number of 
> use cases to the minimum, e.g. we shouldn't support using infinispan-embedded 
> and jcache; if jcache is essential it should be inside infinispan-embedded; 
> the same for Spring integration modules - either we should put them in uber 
> jars or say that you can use Spring integration with small jars.

How do you plan to stop that happening?

> 
> On Fri, Jun 9, 2017 at 5:05 AM Alan Field  wrote:
> Wasn't the ability to add a single dependency to a project to start using 
> Infinispan the whole purpose for the uber jars? I'm not trying to make an 
> argument for keeping them, because I know they have caused many issues. I 
> just think that if we are going to remove them from Maven, then there should 
> be a way to achieve the same easy developer on boarding that uber jars were 
> supposed to provide. Whether this is Maven project templates, or something 
> else doesn't matter.
> 
> Thanks,
> Alan
> 
> - Original Message -
> > From: "Tristan Tarrant" 
> > To: infinispan-dev@lists.jboss.org
> > Sent: Thursday, June 8, 2017 4:05:08 AM
> > Subject: Re: [infinispan-dev] Why JCache embedded has core as provided 
> > dependency
> >
> > I think we should turn off maven deployment for uber jars.
> >
> > Tristan
> >
> > On 6/7/17 5:10 PM, Gustavo Fernandes wrote:
> > > On Wed, Jun 7, 2017 at 11:02 AM, Galder Zamarreño  wrote:
> > >
> > > As far as I see it:
> > >
> > > * infinispan-embedded should never be a dependency in a Maven project.
> > >
> > > * No uber jars should really be used as Maven dependencies because
> > > all the exclusion that fine grained dependencies allow you to do
> > > goes out of the window when all classes are inside a jar. This is
> > > not just theory, I've personally had such issues.
> > >
> > > * Uber jars are designed for Ant or other build tool users that
> > > don't have a dependency resolution engine in place.
> > >
> > >     Cheers,
> > >
> > > p.s. I thought we had already discussed this before?
> > >
> > >
> > >
> > > I totally agree. In addition, uber jars should not be an OSGi bundle or a
> > > JBoss module, for similar reasons.
> > >
> > > P.S: Even Ant has a dependency mgmt available, which is Ivy.
> > >
> > > Cheers,
> > > Gustavo
> > >
> > > --
> > > Galder Zamarreño
> > > Infinispan, Red Hat
> > >
> > >  > On 7 Jun 2017, at 11:50, Sebastian Laskawiec  wrote:
> > >  >
> > >  > Hey,
> > >  >
> > >  > The change was introduced by this commit [1] and relates to these
> > >  > JIRAs [2][3]. The root cause is in [3].
> > >  >
> > >  > Imagine a scenario where you add the JCache module together with
> > >  > infinispan-embedded. If your classpath was constructed in such a way
> > >  > that infinispan-embedded was before infinispan-core (classpath is
> > >  > scanned from left to right in standalone apps), we could get a
> > >  > relocated (uber jars move some classes into other packages) logger.
> > >  > That caused class mismatch errors. It is worth mentioning that this
> > >  > will happen to all relocated classes; the logger was just an example.
> > >  > And we need to relocate them, since a user might want to use their
> > >  > own, newer version of DMR or any other library. So there's no
> > >  > perfect solution here.
> > >  >
> > >  > Now a lot of time has passed since then and we have changed quite a
> > >  > few things. So this topic probably needs to be revisited.
> > >  >
> > >  > So the first question we should ask is: shall we allow putting
> > >  > jcache and infinispan-embedded together on the classpath?

Re: [infinispan-dev] Moving functional API to core

2017-06-12 Thread Galder Zamarreño
Sounds good to me. 

Remember that the functional API is marked as experimental, so it's fine to do 
things like this.
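
(For context, the overloads Radim mentions below hinge on an interface of 
roughly this shape - a sketch, not necessarily the exact Infinispan signature:

  import java.io.Serializable;
  import java.util.function.Function;

  // Sketch: a lambda-friendly Function that is also Serializable, so an
  // overload accepting it can marshal the captured lambda to other nodes.
  public interface SerializableFunction<T, R> extends Function<T, R>, Serializable {
  }

An overload like eval(key, SerializableFunction<...>) then lets embedded 
callers pass plain lambdas while keeping them marshallable.)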

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 9 Jun 2017, at 23:02, Radim Vansa  wrote:
> 
> Hi guys,
> 
> when the functional API was outlined, the interfaces were put into 
> infinispan-commons to make it possible to share these between remote 
> clients and the embedded use case. However, it seems that reusing this as-is 
> is impossible, or at least impractical, as we cannot send the lambdas in a 
> language-neutral way. In the future, we may implement a way to share 
> functions between a client and a server but that will most likely result 
> in an interface accepting something else than Function<T, R>. Also, it's 
> rather weird to have two EntryVersion interfaces.
> 
> Therefore I suggest moving org.infinispan.commons.api.functional to 
> infinispan-core, package org.infinispan.api.functional
> 
> You might say that the server-side code would use the interfaces, but 
> once it's running on the server, it should depend on core (or core-api) - 
> commons is what is shared with the client, and if the client will in 
> future register a new function on the server, the user code should 
> depend on core-api as well (client-hotrod itself does not have to).
> 
> In case you wonder what led me to this: I've tried to add 
> SerializableFunction overloads to the FunctionalMap and found out that 
> SerializableFunction et al. are only in infinispan-core (for good).
> 
> Please let me know if you have objections/if there is something I have missed.
> 
> Radim
> 
> -- 
> Radim Vansa 
> JBoss Performance Team
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-07 Thread Galder Zamarreño
What has changed is that Stéphane has made a very good point which I had not 
realised:

Making core a provided dependency means a user of the JCache dependency needs 
to add the core dependency on top of that, which reduces usability. The core 
jar is really a normal dependency of JCache, not a provided one. I don't think 
provided dependencies should be used to get around uber jar dependency issues.

IMO, the bigger usability issue is the fact that uber jars are available as 
Maven dependencies. Uber jars should just not be distributed as Maven 
dependencies; they should be put somewhere else, not in Maven. That way we'd 
avoid the problem in the first place.

In the meantime, I think we should:

* Move back to having normal dependencies for core in JCache (and Spring too, 
if it applies)
* Go through our examples and avoid using uber jar dependencies.

Then, explore the idea above of not having uber jars as Maven dependencies.
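
Concretely, with core as a normal dependency, a user's pom would only need the 
one artifact (a sketch; the version is just an example):

  <dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-jcache</artifactId>
    <version>9.0.0.Final</version> <!-- example version -->
  </dependency>

whereas with core marked as provided, that same user must also declare 
infinispan-core explicitly.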

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 7 Jun 2017, at 12:23, Sebastian Laskawiec  wrote:
> 
> We discussed it a number of times, including (but probably not limited to):
>   • 
> http://lists.jboss.org/pipermail/infinispan-dev/2016-February/016414.html
>   • http://lists.jboss.org/pipermail/infinispan-dev/2016-March/016490.html
> You might also want to look into the internal lists...
> 
> The biggest question - has anything changed? Do we have any other ideas? 
> 
> On Wed, Jun 7, 2017 at 12:05 PM Galder Zamarreño  wrote:
> Moreover:
> 
> * The experience of Maven users should never be penalised by uber jars. Uber 
> jar users should be a minority compared with Maven/Gradle...etc users that 
> have dependency engines in place to select which components they want to 
> depend on.
> 
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
> > On 7 Jun 2017, at 12:02, Galder Zamarreño  wrote:
> >
> > As far as I see it:
> >
> > * infinispan-embedded should never be a dependency in a Maven project.
> >
> > * No uber jars should really be used as Maven dependencies because all the 
> > exclusion that fine grained dependencies allow you to do goes out of the 
> > window when all classes are inside a jar. This is not just theory, I've 
> > personally had such issues.
> >
> > * Uber jars are designed for Ant or other build tool users that don't have 
> > a dependency resolution engine in place.
> >
> > Cheers,
> >
> > p.s. I thought we had already discussed this before?
> > --
> > Galder Zamarreño
> > Infinispan, Red Hat
> >
> >> On 7 Jun 2017, at 11:50, Sebastian Laskawiec  wrote:
> >>
> >> Hey,
> >>
> >> The change was introduced by this commit [1] and relates to these JIRAs 
> >> [2][3]. The root cause is in [3].
> >>
> >> Imagine a scenario where you add the JCache module together with 
> >> infinispan-embedded. If your classpath was constructed in such a way that 
> >> infinispan-embedded was before infinispan-core (classpath is scanned from 
> >> left to right in standalone apps), we could get a relocated (uber jars 
> >> move some classes into other packages) logger. That caused class mismatch 
> >> errors. It is worth mentioning that this will happen to all relocated 
> >> classes; the logger was just an example. And we need to relocate them, 
> >> since a user might want to use their own, newer version of DMR or any 
> >> other library. So there's no perfect solution here.
> >>
> >> Now a lot of time has passed since then and we have changed quite a few 
> >> things. So this topic probably needs to be revisited.
> >>
> >> So the first question we should ask is: shall we allow putting jcache 
> >> and infinispan-embedded together on the classpath? If the answer is yes, I 
> >> believe it should stay as it is (since the user always has a choice 
> >> whether to use jcache with or without the uber jar). The same 
> >> question needs to be asked for the Spring modules as well as all cache 
> >> stores. The behavior needs to be consistent across all those modules.
> >>
> >> If the answer is no (which is also valid because jcache is already present 
> >> in the embedded uber jar), we should migrate all JBoss Logging references 
> >> to Infinispan Common Logging (as Tristan did here [4]) and make 
> >> infinispan-core a compile-time dependency of jcache. Even though migrating 
> >> to the Infinispan logger is not necessary, this way we won't break users' 
> >> apps which used the infinispan-embedded + jcache approach. Of course the 
> >> same applies to the Spring and cache store modules.

Re: [infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-07 Thread Galder Zamarreño
Moreover:

* The experience of Maven users should never be penalised by uber jars. Uber 
jar users should be a minority compared with Maven/Gradle etc. users that have 
dependency engines in place to select which components they want to depend on.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 7 Jun 2017, at 12:02, Galder Zamarreño  wrote:
> 
> As far as I see it:
> 
> * infinispan-embedded should never be a dependency in a Maven project.
> 
> * No uber jars should really be used as Maven dependencies because all the 
> exclusion that fine grained dependencies allow you to do goes out of the 
> window when all classes are inside a jar. This is not just theory, I've 
> personally had such issues.
> 
> * Uber jars are designed for Ant or other build tool users that don't have a 
> dependency resolution engine in place.
> 
> Cheers,
> 
> p.s. I thought we had already discussed this before?
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
>> On 7 Jun 2017, at 11:50, Sebastian Laskawiec  wrote:
>> 
>> Hey,
>> 
>> The change was introduced by this commit [1] and relates to these JIRAs 
>> [2][3]. The root cause is in [3].
>> 
>> Imagine a scenario where you add the JCache module together with 
>> infinispan-embedded. If your classpath was constructed in such a way that 
>> infinispan-embedded was before infinispan-core (classpath is scanned from 
>> left to right in standalone apps), we could get a relocated (uber jars move 
>> some classes into other packages) logger. That caused class mismatch errors. 
>> It is worth mentioning that this will happen to all relocated classes; the 
>> logger was just an example. And we need to relocate them, since a user might 
>> want to use their own, newer version of DMR or any other library. So there's 
>> no perfect solution here.
>> 
>> Now a lot of time has passed since then and we have changed quite a few 
>> things. So this topic probably needs to be revisited. 
>> 
>> So the first question we should ask is: shall we allow putting jcache and 
>> infinispan-embedded together on the classpath? If the answer is yes, I 
>> believe it should stay as it is (since the user always has a choice whether 
>> to use jcache with or without the uber jar). The same question needs to 
>> be asked for the Spring modules as well as all cache stores. The behavior 
>> needs to be consistent across all those modules.
>> 
>> If the answer is no (which is also valid because jcache is already present 
>> in the embedded uber jar), we should migrate all JBoss Logging references to 
>> Infinispan Common Logging (as Tristan did here [4]) and make infinispan-core 
>> a compile-time dependency of jcache. Even though migrating to the Infinispan 
>> logger is not necessary, this way we won't break users' apps which used the 
>> infinispan-embedded + jcache approach. Of course the same applies to the 
>> Spring and cache store modules.
>> 
>> I think the latter approach deserves some exploration. I would vote for 
>> moving that way.
>> 
>> Thanks,
>> Sebastian
>> 
>> [1] 
>> https://github.com/infinispan/infinispan/commit/720f158cce38d86b292e1ce77b75509342007739
>> [2] https://issues.jboss.org/browse/ISPN-6295
>> [3] https://issues.jboss.org/browse/ISPN-6132
>> [4] https://github.com/infinispan/infinispan/pull/4140/files
>> 
>> 
>> On Wed, Jun 7, 2017 at 11:19 AM Galder Zamarreño  wrote:
>> Hi all,
>> 
>> Re: 
>> https://github.com/spring-projects/spring-boot/pull/9417#discussion_r120375579
>> 
> Stéphane makes a good point there: why did we make core a provided 
> dependency? It does feel a bit of a pain that anyone who depends on jcache 
> embedded also needs to depend on core.
>> 
>> Any more details behind this decision?
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>> -- 
>> SEBASTIAN ŁASKAWIEC
>> INFINISPAN DEVELOPER
>> Red Hat EMEA
>> 
> 


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-07 Thread Galder Zamarreño
As far as I see it:

* infinispan-embedded should never be a dependency in a Maven project.

* No uber jars should really be used as Maven dependencies, because all the 
exclusions that fine-grained dependencies allow you to do go out of the window 
when all classes are inside a single jar (see the sketch below). This is not 
just theory; I've personally had such issues.

* Uber jars are designed for Ant or other build tool users that don't have a 
dependency resolution engine in place.
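
To make the exclusion point concrete, a sketch (the version and the excluded 
artifact are only examples):

  <dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>9.0.0.Final</version> <!-- example version -->
    <exclusions>
      <exclusion>
        <!-- illustration: substitute your own version of a transitive dependency -->
        <groupId>org.jgroups</groupId>
        <artifactId>jgroups</artifactId>
      </exclusion>
    </exclusions>
  </dependency>

With infinispan-embedded, those same classes are baked into a single jar, so 
there is nothing left to exclude.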

Cheers,

p.s. I thought we had already discussed this before?
--
Galder Zamarreño
Infinispan, Red Hat

> On 7 Jun 2017, at 11:50, Sebastian Laskawiec  wrote:
> 
> Hey,
> 
> The change was introduced by this commit [1] and relates to these JIRAs 
> [2][3]. The root cause is in [3].
> 
> Imagine a scenario where you add the JCache module together with 
> infinispan-embedded. If your classpath was constructed in such a way that 
> infinispan-embedded was before infinispan-core (classpath is scanned from 
> left to right in standalone apps), we could get a relocated (uber jars move 
> some classes into other packages) logger. That caused class mismatch errors. 
> It is worth mentioning that this will happen to all relocated classes; the 
> logger was just an example. And we need to relocate them, since a user might 
> want to use their own, newer version of DMR or any other library. So there's 
> no perfect solution here.
> 
> Now a lot of time has passed since then and we have changed quite a few 
> things. So this topic probably needs to be revisited. 
> 
> So the first question we should ask is: shall we allow putting jcache and 
> infinispan-embedded together on the classpath? If the answer is yes, I 
> believe it should stay as it is (since the user always has a choice whether 
> to use jcache with or without the uber jar). The same question needs to 
> be asked for the Spring modules as well as all cache stores. The behavior 
> needs to be consistent across all those modules.
> 
> If the answer is no (which is also valid because jcache is already present in 
> the embedded uber jar), we should migrate all JBoss Logging references to 
> Infinispan Common Logging (as Tristan did here [4]) and make infinispan-core 
> a compile-time dependency of jcache. Even though migrating to the Infinispan 
> logger is not necessary, this way we won't break users' apps which used the 
> infinispan-embedded + jcache approach. Of course the same applies to the 
> Spring and cache store modules.
> 
> I think the latter approach deserves some exploration. I would vote for 
> moving that way.
> 
> Thanks,
> Sebastian
> 
> [1] 
> https://github.com/infinispan/infinispan/commit/720f158cce38d86b292e1ce77b75509342007739
> [2] https://issues.jboss.org/browse/ISPN-6295
> [3] https://issues.jboss.org/browse/ISPN-6132
> [4] https://github.com/infinispan/infinispan/pull/4140/files
> 
> 
> On Wed, Jun 7, 2017 at 11:19 AM Galder Zamarreño  wrote:
> Hi all,
> 
> Re: 
> https://github.com/spring-projects/spring-boot/pull/9417#discussion_r120375579
> 
> Stéphane makes a good point there: why did we make core a provided 
> dependency? It does feel a bit of a pain that anyone who depends on jcache 
> embedded also needs to depend on core.
> 
> Any more details behind this decision?
> 
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
> -- 
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA
> 


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-07 Thread Galder Zamarreño
Hi all,

Re: 
https://github.com/spring-projects/spring-boot/pull/9417#discussion_r120375579

Stéphane makes a good point there: why did we make core a provided dependency? 
It does feel a bit of a pain that anyone who depends on jcache embedded also 
needs to depend on core.

Any more details behind this decision?

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Proposal for moving Hibernate 2l provider to Infinispan

2017-06-02 Thread Galder Zamarreño
I think it's going through; we've approved you in the past.

Replies below:

> On 31 May 2017, at 17:02, Steve Ebersole  wrote:
> 
> Just a heads up - FWIW I doubt my reply goes through to the entire 
> infinispan-dev list.
> 
> Replies inline...
> 
> 
> 3. What should be the artifact name? Should it be 'hibernate-infinispan' like 
> it is today? The difference with the existing cache provider would be the 
> groupId. Or some other artifact id?
> 
> Since you use Maven (IIUC) you could just publish a relocation...

Oh, didn't know about that. Yeah, I think we'd do that:
https://maven.apache.org/guides/mini/guide-relocation.html
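
For illustration, per that guide, a relocation is just a stub pom published 
under the old coordinates (the new coordinates and the version below are made 
up, since the naming is exactly what's being discussed in this thread):

  <project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <!-- old coordinates stay publishable -->
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-infinispan</artifactId>
    <version>6.0.0.Final</version> <!-- example version -->
    <distributionManagement>
      <relocation>
        <!-- hypothetical new coordinates -->
        <groupId>org.infinispan</groupId>
        <artifactId>infinispan-hibernate-cache</artifactId>
        <message>Moved to the Infinispan repository.</message>
      </relocation>
    </distributionManagement>
  </project>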

>  
> 
> 4. Should the main artifact contain the hibernate major version it belongs 
> to? E.g. assuming we take 'hibernate-infinispan', should it be like that, or 
> should it instead be 'hibernate5-infinispan'? This is where it'd be 
> interesting to hear about our past Lucene directory or Query integration 
> experience.
> 
> Probably, but (no promises) one thing I wanted to look at, since Hibernate 
> baselines on Java 8, is maintaining the existing SPI using default methods as 
> a bridge. But failing that, I think your suggestion is the best option.
> 
>  
> 5. A thing to consider also is whether to maintain same package naming. We're 
> currently using 'org.hibernate.cache.infinispan.*'. From a compatibility 
> sense, it'd help to keep same package since users reference region factory 
> fully qualified class names. We'd also continue to be sole owners of 
> 'org.hibernate.cache.infinispan.*'. However, I dunno whether having 
> 'org.hibernate...' package name within Infinispan repo would create other 
> issues?
> 
> FWIW Hibernate offers "short naming" or "friendly naming" for many 
> configurable settings, cache providers being one.  For hibernate-infinispan 
> we register 2: "infinispan" and "infinispan-jndi".  You can see this in 
> org.hibernate.cache.infinispan.StrategyRegistrationProviderImpl.  That 
> approach will continue to work when you move it.  The point being that users 
> do not specify the class name in config, they'd just specify "infinispan", 
> "infinispan-jndi", etc.

Ah good to know, I wasn't aware of it. I'll look into that.

> 6. Testing wise, the cache provider is currently tested one test at the time, 
> using JUnit. The testsuite already runs fast enough and I'd prefer not to 
> change anything in this area right now. Is that Ok? Or is there any desire to 
> move it to TestNG?
> 
> Hmmm, that is actually surprising... I thought the hibernate-infinispan  
> provider tests were still disabled as they had routinely caused intermittent 
> failures of the build.  I guess this was rectified?

They seem pretty stable to me when I run them locally. 





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

2017-05-31 Thread Galder Zamarreño
Cool down peoples! 

http://www.quickmeme.com/meme/35ovcy

Sebastian, don't think Sanne was being rude, he's just blunt and we need his 
bluntness :)

Sanne, be nice to Sebastian and get him a beer next time around ;)

Peace out! :)
--
Galder Zamarreño
Infinispan, Red Hat

> On 31 May 2017, at 09:38, Sebastian Laskawiec  wrote:
> 
> Hey Sanne,
> 
> Comments inlined.
> 
> Thanks,
> Sebastian
> 
> On Tue, May 30, 2017 at 5:58 PM Sanne Grinovero  wrote:
> Hi Sebastian,
> 
> the "intelligent routing" of Hot Rod being one of - if not the main - reason 
> to use Hot Rod, I wonder if we shouldn't rather suggest people to stick with 
> HTTP (REST) in such architectures.
> 
> Several people have suggested in the past the need to have an HTTP smart load 
> balancer which would be able to route the external REST requests to the right 
> node. Essentially have people use REST over the wider network, up to reaching 
> the Infinispan cluster where the service endpoint (the load balancer) can 
> convert them to optimised Hot Rod calls, or just leave them in the same 
> format but routing them with the same intelligence to the right nodes.
> 
> I realise my proposal requires some work on several fronts, at very least we 
> would need:
>  - feature parity Hot Rod / REST so that people can actually use it
>  - a REST load balancer
> 
> But I think the output of such a direction would be far more reusable, as 
> both these points are high on the wish list anyway.
> 
> Unfortunately I'm not convinced into this idea. Let me elaborate...
> 
> It goes without saying that HTTP payload is simply larger and requires much 
> more processing. That alone makes it slower than Hot Rod (I believe Martin 
> could provide you some numbers on that). The second argument is that 
> switching/routing inside Kubernetes is bloody fast (since it's based on 
> iptables) and some cloud vendors optimize it even further (e.g. Google 
> Andromeda [1][2], I would be surprised if AWS didn't have anything similar). 
> During the work on this prototype I wrote a simple async binary proxy [3] and 
> measured GCP load balancer vs my proxy performance. They were twice as fast 
> [4][5]. You may argue whether I could write a better proxy. Probably I could, 
> but the bottom line is that another performance hit is inevitable. They are 
> really fast and they operate on their own infrastructure (load balancers are 
> something provided by the cloud vendor to Kubernetes, not the other 
> way around).
> 
> So with all that in mind, are we going to get better results compared to my 
> proposal for Hot Rod? I dare to doubt it, even with HTTP/2 support (which 
> comes really soon I hope). The second question is whether this new "REST load 
> balancer" will work better than a standard load balancer using a round-robin 
> strategy. Again I dare to doubt it: even if you're faster at routing requests 
> to the proper node, you introduce another layer of latency.
> 
> Of course the priority of this is up to Tristan but I definitely wouldn't 
> place it high on the todo list. And before even looking at it I would 
> recommend taking a Netty HTTP proxy, putting it in the middle between a real 
> load balancer and the Infinispan app, and measuring performance with and 
> without it. Another test could be with 1 and 10 replicas to check the 
> performance penalty of hitting 100% vs 10% of requests on the proper node.
> 
> [1] 
> https://cloudplatform.googleblog.com/2014/08/containers-vms-kubernetes-and-vmware.html
> [2] 
> https://cloudplatform.googleblog.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html
> [3] 
> https://github.com/slaskawi/external-ip-proxy/blob/Benchmark_with_proxy/Proxy/Proxy.go
> [4] 
> https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20proxy.txt
> [5] 
> https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20loadbalancer.txt
>  
> Not least having a "REST load balancer" would allow to deploy Infinispan as 
> an HTTP cache; just honouring the HTTP caching protocols and existing 
> standards would allow people to use any client to their liking,
> 
> Could you please give me an example of how this could work? The only way 
> that I know is to plug a cache into a reverse proxy. NGINX supports pluggable 
> Redis for example [6].
> 
> [6] https://www.nginx.com/resources/wiki/modules/redis/
>  
> without us having to maintain Hot Rod clients and support them on many 
> exotic platforms - we would still have Hot Rod clients but we'd be able to 
> pick a smaller set of strategic platforms (e.g. Windows doesn't have to be in 
> that list).
> 
> As I me

[infinispan-dev] Proposal for moving Hibernate 2l provider to Infinispan

2017-05-31 Thread Galder Zamarreño
Hi all,

Given all the previous discussions we've had on this list [1] [2], seems like 
there's a majority of opinions towards moving Infinispan Hibernate 2LC cache 
provider to the Infinispan repo.

Although we could put it in a completely separate repo, given its importance, I 
think we should keep it in the main Infinispan repo.

With this in mind, I wanted to propose the following:

1. Move the code from the Hibernate repository and bring it to the Infinispan 
master and 9.0.x branches. We'd need to introduce the module in the 9.0.x 
branch so that 9.0.x users are not left out.

2. Create a root directory called `hibernate-orm` within Infinispan main repo. 
Within it, we'd keep 1 or more cache provider modules based on major Hibernate 
versions.

3. What should be the artifact name? Should it be 'hibernate-infinispan' like 
it is today? The difference with the existing cache provider would be the 
groupId. Or some other artifact id?

4. Should the main artifact contain the hibernate major version it belongs to? 
E.g. assuming we take 'hibernate-infinispan', should it be like that, or should 
it instead be 'hibernate5-infinispan'? This is where it'd be interesting to 
hear about our past Lucene directory or Query integration experience.

5. A thing to consider also is whether to maintain the same package naming. 
We're currently using 'org.hibernate.cache.infinispan.*'. From a compatibility 
standpoint, it'd help to keep the same package since users reference region 
factory fully qualified class names. We'd also continue to be sole owners of 
'org.hibernate.cache.infinispan.*'. However, I dunno whether having 
'org.hibernate...' package name within Infinispan repo would create other 
issues?

6. Testing-wise, the cache provider is currently tested one test at a time, 
using JUnit. The testsuite already runs fast enough and I'd prefer not to 
change anything in this area right now. Is that Ok? Or is there any desire to 
move it to TestNG?

Thoughts? Am I forgetting something?

Cheers,

[1] http://lists.jboss.org/pipermail/infinispan-dev/2017-February/017173.html
[2] http://lists.jboss.org/pipermail/infinispan-dev/2017-May/017546.html
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] IRC chat: HB + I9

2017-05-30 Thread Galder Zamarreño
Thx Steve for your input.

Seems like everyone agrees that moving to Infinispan might be the best option, 
so I'll be sending a proposal to the list in the next few days.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 25 May 2017, at 15:31, Steve Ebersole  wrote:
> 
> A lot to read through here so I apologize up front if I missed something...
> 
> So to be fair I am biased as I would really like to not have to deal with 
> these integrations :)  That said, I do really believe that the best option is 
> to move this code out of the hibernate/hibernate-orm repo.  To me that could 
> mean a separate repo altogether (infinispan/infinispan-hibernate-l2c, or sim) 
> or into infinispan proper if Infinispan already has Hibernate dependency as 
> Sanne mentioned somewhere.
> 
> As far as Hibernate..  master is in fact 5.2, 6.0 exists just in my fork atm 
> - we are still discussing the exact event that should trigger moving that 6.0 
> branch upstream.  6.0 timeline is still basically unknown especially as far 
> as a Final goes. 
> 
> 
> On Wed, May 24, 2017, 11:04 AM Galder Zamarreño  wrote:
> Adding Steve,
> 
> Scott Marlow just reminded me that you've advocated for Infinispan 2LC 
> provider to be moved to Infinispan source tree [2].
> 
> So, you might want to add your thoughts to this thread?
> 
> Cheers,
> 
> [2] 
> http://transcripts.jboss.org/channel/irc.freenode.org/%23hibernate-dev/2015/%23hibernate-dev.2015-08-06.log.html
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
> > On 24 May 2017, at 17:56, Paul Ferraro  wrote:
> >
> > Option #4 would be my preference as well.  The integration into WF has
> > become increasingly cumbersome as the pace of Infinispan releases (and
> > associated API changes) has increased.  I would really rather avoid
> > having to create and maintain forks of hibernate-infinispan to support
> > the combination of Hibernate and Infinispan that don't exist in the
> > upstream codebase.
> >
> > On Wed, May 24, 2017 at 11:18 AM, Sanne Grinovero  
> > wrote:
> >> I would suggest option 4# : move the 2LC implementation to Infinispan.
> >>
> >> I already suggested this in the past, but to remind the main arguments I 
> >> have:
> >>
> >> - neither repository is ideal, but having it here vs there is not
> >> just moving the problem as the two projects are different, have
> >> different timelines and different backwards compatibility policies.
> >>
> >> - Infinispan already depends on several Hibernate projects - even
> >> directly to Hibernate ORM itself via the JPA cachestore and indirectly
> >> via Hibernate Search and WildFly - so moving the Infinispan dependency
> >> out of the Hibernate repository helps to linearize the build for one
> >> consistent stack.
> >> For example right now WildFly master contains a combination of
> >> Hibernate ORM and Infinispan 2LC, which is not the same combination as
> >> tested by running the 2LC testsuite; this happens all the time and
> >> brings its own set of issues & delays.
> >>
> >> - Infinispan changes way more often - and as Radim already suggested
> >> in his previous email - there's more benefit in having such advanced
> >> code more closely tied to Infinispan so that it can benefit from new
> >> capabilities even though these might not be ready to be blessed as
> >> long term API. The 2LC SPI in Hibernate on the other hand is stable,
> >> and has to stay stable anyway, for other reasons not least integration
> >> with other providers, so there's no symmetric benefit in having this
> >> code in Hibernate.
> >>
> >> - Infinispan releases breaking changes with a more aggressive pace.
> >> It's more useful for Infinispan 9 to be able to support older versions
> >> of Hibernate ORM, than the drawback of a new ORM release not having
> >> yet an Infinispan release compatible. This last point is the only
> >> drawback I can see, and franckly it's both a temporary situation as
> >> Infinispan can catch up quickly and a very inlikely situation as
> >> Hibernate ORM is unlikely to change these SPIs in e.g. the next major
> >> release 6.0.
> >>
> >> - Infinispan occasionally breaks expectations of the 2LC code, as
> >> Galder just had to figure out with a painful upgrade. We can all agree
> >> that these changes are necessary, but I strongly believe it's useful
> >> to *know* about such breakages ASAP from the testsuite, not half a
> >> year later when a major dependency upgrade propagates to other projects.

[infinispan-dev] Weekly IRC Meeting Logs 2017-05-29

2017-05-29 Thread Galder Zamarreño
Hi all,

The logs for this week's meeting:
http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-05-29-14.02.log.html

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] IRC chat: HB + I9

2017-05-24 Thread Galder Zamarreño
Adding Steve,

Scott Marlow just reminded me that you've advocated for Infinispan 2LC provider 
to be moved to Infinispan source tree [2].

So, you might want to add your thoughts to this thread?

Cheers,

[2] 
http://transcripts.jboss.org/channel/irc.freenode.org/%23hibernate-dev/2015/%23hibernate-dev.2015-08-06.log.html
--
Galder Zamarreño
Infinispan, Red Hat

> On 24 May 2017, at 17:56, Paul Ferraro  wrote:
> 
> Option #4 would be my preference as well.  The integration into WF has
> become increasingly cumbersome as the pace of Infinispan releases (and
> associated API changes) has increased.  I would really rather avoid
> having to create and maintain forks of hibernate-infinispan to support
> the combination of Hibernate and Infinispan that don't exist in the
> upstream codebase.
> 
> On Wed, May 24, 2017 at 11:18 AM, Sanne Grinovero  
> wrote:
>> I would suggest option 4# : move the 2LC implementation to Infinispan.
>> 
>> I already suggested this in the past, but to remind the main arguments I 
>> have:
>> 
>> - neither repository is ideal, but having it here vs there is not
>> just moving the problem as the two projects are different, have
>> different timelines and different backwards compatibility policies.
>> 
>> - Infinispan already depends on several Hibernate projects - even
>> directly to Hibernate ORM itself via the JPA cachestore and indirectly
>> via Hibernate Search and WildFly - so moving the Infinispan dependency
>> out of the Hibernate repository helps to linearize the build for one
>> consistent stack.
>> For example right now WildFly master contains a combination of
>> Hibernate ORM and Infinispan 2LC, which is not the same combination as
>> tested by running the 2LC testsuite; this happens all the time and
>> brings its own set of issues & delays.
>> 
>> - Infinispan changes way more often - and as Radim already suggested
>> in his previous email - there's more benefit in having such advanced
>> code more closely tied to Infinispan so that it can benefit from new
>> capabilities even though these might not be ready to be blessed as
>> long term API. The 2LC SPI in Hibernate on the other hand is stable,
>> and has to stay stable anyway, for other reasons not least integration
>> with other providers, so there's no symmetric benefit in having this
>> code in Hibernate.
>> 
>> - Infinispan releases breaking changes with a more aggressive pace.
>> It's more useful for Infinispan 9 to be able to support older versions
>> of Hibernate ORM, than the drawback of a new ORM release not having
>> yet an Infinispan release compatible. This last point is the only
>> drawback I can see, and frankly it's both a temporary situation as
>> Infinispan can catch up quickly and a very unlikely situation as
>> Hibernate ORM is unlikely to change these SPIs in e.g. the next major
>> release 6.0.
>> 
>> - Infinispan occasionally breaks expectations of the 2LC code, as
>> Galder just had to figure out with a painful upgrade. We can all agree
>> that these changes are necessary, but I strongly believe it's useful
>> to *know* about such breakages ASAP from the testsuite, not half a
>> year later when a major dependency upgrade propagates to other
>> projects.
>> 
>> - The Hibernate ORM would appreciate getting rid of debugging
>> clustering and networking issues when there's the occasional failure,
>> which are stressful as they are out of their area of expertise.
>> 
>> I hope that makes sense?
>> 
>> Thanks,
>> Sanne
>> 
>> 
>> 
>> On 24 May 2017 at 08:49, Radim Vansa  wrote:
>>> Hi Galder,
>>> 
>>> I think that (3) is simply not possible (from a non-technical perspective)
>>> and I don't think we have the manpower to maintain 2 different modules
>>> (2). The current version does not seem ready (generic enough) to get
>>> into Infinispan, so either (1), or a lot of more work towards (4) (which
>>> would be my preference).
>>> 
>>> I haven't thought about all the steps for (4), but it seems that
>>> UnorderedDistributionInterceptor and LockingInterceptor should get into
>>> Infinispan as a flavour of repl/dist cache mode that applies updates in
>>> parallel on all owners without any ordering; it's up to the user to
>>> guarantee that changes to an entry are commutative.
>>> 
>>> The 2LC code itself shouldn't use the
>>> TombstoneCallInterceptor/VersionedCallInterceptor now that there is the
>>> functional API, you should move th

Re: [infinispan-dev] IRC chat: HB + I9

2017-05-23 Thread Galder Zamarreño
One final thing, [1] requires ISPN-7853 fix, which will be part of 9.0.1.

I know the branch currently points to 9.1.0-SNAPSHOT. That was just simply cos 
I tested out the fix in master first.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 23 May 2017, at 15:07, Galder Zamarreño  wrote:
> 
> Hi all,
> 
> I've just finished integrating Infinispan with a HB 6.x branch Steve had, all 
> tests pass now [1].
> 
> Yeah, we didn't commit to the final location for these changes. 
> 
> As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 
> 5.2.x branch in Hibernate main repo. 6.x is just a branch that Steve has.
> 
> These are the options available to us:
> 
> 1. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 6.x 
> branch.
> 
> 2. Integrate 9.x provider as part of a second Infinispan module in Hibernate 
> 5.x branch.
> 
> 3. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 5.x 
> branch. This is problematic since the provider is not backwards 
> compatible.
> 
> 4. Integrate 9.x provider in infinispan and deliver it as part of Infinispan 
> rather than Hibernate.
> 
> I'm not sure which one I prefer the most TBH... 1. is the ideal solution but 
> it doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. 
> all have their downsides... :\
> 
> Thoughts?
> 
> [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
>> On 16 May 2017, at 17:06, Paul Ferraro  wrote:
>> 
>> Thanks Galder.  I read through the infinispan-dev thread on the
>> subject, but I'm not sure what was concluded regarding the eventual
>> home for this code.
>> Once the testsuite passes, is the plan to commit to hibernate master?
>> If so, I will likely fork
>>  these changes into a WF module (and adapt it
>> for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9
>> until Hibernate6 is integrated.
>> 
>> Radim - one thing you mentioned on that infinispan-dev thread puzzled
>> me: you said that invalidation mode offers no benefits over
>> replication.  How is that possible?  Can you elaborate?
>> 
>> Paul
>> 
>> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarreño  wrote:
>>> I'm on the move, not sure if Paul/Radim saw my replies:
>>> 
>>>  galderz, rvansa: Hey guys - is there a plan for Hibernate &
>>>   ISPN 9?
>>>  pferraro: Galder has been working on that
>>>  pferraro: though I haven't seen any results but a list of
>>>   stuff that needs to be changed
>>>  galderz: which Hibernate branch are you targeting?
>>>  pferraro: 5.2, but there are minute differences between 5.x
>>>   in terms of the parts that need love to get Infinispan 9 support
>>> *** Mode change: +v vblagoje on #infinispan by ChanServ
>>>   (ChanServ@services.)
>>>  rvansa: are you suggesting that 5.0 or 5.1 branches will be
>>>   adapted to additionally support infinispan 9?  how is that
>>>   possible?
>>>> pferraro: i'm working on it as we speak...
>>>> pferraro: down to 16 failuresd
>>>> pferraro: i started a couple of months ago, but had talks/demos to
>>>   prepare
>>>> pferraro: i've got back to working on it this week
>>> ...
>>>> pferraro: rvansa
>>>> rvansa: minute differences my ass ;p
>>>> pferraro: did you see my replies?
>>>> i got disconnected while replying...
>>>  hmm - no - I didn't
>>>  galderz: ^
>>>> pferraro: so, working on the HB + I9 integration as we speak
>>>> pferraro: i started a couple of months back but had talks/demos to
>>>   prepare and had to put that aside
>>>> pferraro: i'm down to 16 failures
>>>> pferraro: serious refactoring required of the integration to get it
>>>   to compile and the tests to pass
>>>> pferraro: need to switch to async interceptor stack in 2lc
>>>   integration and get all the subtle changes right
>>>> pferraro: it's a painstaking job basically
>>>> pferraro: i'm working on
>>>   https://github.com/galderz/hibernate-orm/tree/t_i9x_v2
>>>> pferraro: i can't remember where i branched off, but it's a branch
>>>   that steve had since master was focused on 5.x
>>>> pferraro: i've no idea when/where we'll integrate this, but one
>>>   thing is for sure: it's nowhere near backwards compatible
>>>> actually, fixed one this morning, so down to 15 failures
>>>> pferraro: any suggestions/wishes?
>>>> is anyone out there? ;)
>>> 
>>> Cheers,
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>> 
> 


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] IRC chat: HB + I9

2017-05-23 Thread Galder Zamarreño
Hi all,

I've just finished integrating Infinispan with a HB 6.x branch Steve had, all 
tests pass now [1].

Yeah, we didn't commit to the final location for these changes. 

As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 
5.2.x branch in Hibernate main repo. 6.x is just a branch that Steve has.

These are the options available to us:

1. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 6.x 
branch.

2. Integrate 9.x provider as part of a second Infinispan module in Hibernate 
5.x branch.

3. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 5.x 
branch. This is problematic since the provider is not backwards compatible.

4. Integrate 9.x provider in infinispan and deliver it as part of Infinispan 
rather than Hibernate.

I'm not sure which one I prefer the most TBH... 1. is the ideal solution but 
it doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. 
all have their downsides... :\

Thoughts?

[1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2
--
Galder Zamarreño
Infinispan, Red Hat

> On 16 May 2017, at 17:06, Paul Ferraro  wrote:
> 
> Thanks Galder.  I read through the infinispan-dev thread on the
> subject, but I'm not sure what was concluded regarding the eventual
> home for this code.
> Once the testsuite passes, is the plan to commit to hibernate master?
> If so, I will likely fork 
>  these changes into a WF module (and adapt it
> for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9
> until Hibernate6 is integrated.
> 
> Radim - one thing you mentioned on that infinispan-dev thread puzzled
> me: you said that invalidation mode offers no benefits over
> replication.  How is that possible?  Can you elaborate?
> 
> Paul
> 
> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarreño  wrote:
>> I'm on the move, not sure if Paul/Radim saw my replies:
>> 
>>  galderz, rvansa: Hey guys - is there a plan for Hibernate &
>>ISPN 9?
>>  pferraro: Galder has been working on that
>>  pferraro: though I haven't seen any results but a list of
>>stuff that needs to be changed
>>  galderz: which Hibernate branch are you targeting?
>>  pferraro: 5.2, but there are minute differences between 5.x
>>in terms of the parts that need love to get Infinispan 9 support
>> *** Mode change: +v vblagoje on #infinispan by ChanServ
>>(ChanServ@services.)
>>  rvansa: are you suggesting that 5.0 or 5.1 branches will be
>>adapted to additionally support infinispan 9?  how is that
>>possible?
>>> pferraro: i'm working on it as we speak...
>>> pferraro: down to 16 failuresd
>>> pferraro: i started a couple of months ago, but had talks/demos to
>>prepare
>>> pferraro: i've got back to working on it this week
>> ...
>>> pferraro: rvansa
>>> rvansa: minute differences my ass ;p
>>> pferraro: did you see my replies?
>>> i got disconnected while replying...
>>  hmm - no - I didn't
>>  galderz: ^
>>> pferraro: so, working on the HB + I9 integration as we speak
>>> pferraro: i started a couple of months back but had talks/demos to
>>prepare and had to put that aside
>>> pferraro: i'm down to 16 failures
>>> pferraro: serious refactoring required of the integration to get it
>>to compile and the tests to pass
>>> pferraro: need to switch to async interceptor stack in 2lc
>>integration and get all the subtle changes right
>>> pferraro: it's a painstaking job basically
>>> pferraro: i'm working on
>>https://github.com/galderz/hibernate-orm/tree/t_i9x_v2
>>> pferraro: i can't remember where i branched off, but it's a branch
>>that steve had since master was focused on 5.x
>>> pferraro: i've no idea when/where we'll integrate this, but one
>>thing is for sure: it's nowhere near backwards compatible
>>> actually, fixed one this morning, so down to 15 failures
>>> pferraro: any suggestions/wishes?
>>> is anyone out there? ;)
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] To Optional or not to Optional?

2017-05-22 Thread Galder Zamarreño
I think Sanne's right here; any differences in such a large-scale test are hard 
to decipher.

Also, as mentioned in a previous email, my view on its usage is the same as 
Sanne's:

* Definitely use it in APIs/SPIs.
* Be gentle with it in internals.
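
(If anyone does want to measure this properly, Sanne's JMH suggestion below 
would look roughly like this - a sketch assuming the JMH annotations are on the 
classpath:

  import java.util.Optional;
  import org.openjdk.jmh.annotations.Benchmark;
  import org.openjdk.jmh.annotations.Scope;
  import org.openjdk.jmh.annotations.State;

  // Sketch: compare a plain null check against the Optional-wrapped
  // equivalent; run with the JMH harness to see what C2 makes of it.
  @State(Scope.Benchmark)
  public class OptionalBenchmark {

      private String value = "hello";

      @Benchmark
      public String nullCheck() {
          return value != null ? value : "fallback";
      }

      @Benchmark
      public String optional() {
          return Optional.ofNullable(value).orElse("fallback");
      }
  }

)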

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 18 May 2017, at 14:35, Sanne Grinovero  wrote:
> 
> Hi Sebastian,
> 
> sorry but I think you've been wasting time, I hope it was fun :) This is not 
> the right methodology to "settle" the matter (unless you want Radim's eyes to 
> get bloody..).
> 
> Any change in such a complex system will only affect the performance metrics 
> if you're actually addressing the dominant bottleneck. In some cases it might 
> be CPU, like if your system is at 90%+ CPU then it's likely that reviewing 
> the code to use less CPU would be beneficial; but even that can be 
> counter-productive, for example if you're having contention caused by 
> optimistic locking and you fail to address that while making something else 
> "faster" the performance loss on the optimistic lock might become asymptotic.
> 
> A good reason to avoid excessive usage of Optional (and *excessive* doesn't 
> mean a couple dozen in a millions lines of code..) is to not run out of eden 
> space, especially for all the code running in interpreted mode.
> 
> In your case you've been benchmarking a hugely complex beast, not least over 
> REST! When running the REST Server I doubt that allocation in eden is your 
> main problem. You just happened to have a couple Optionals on your path; sure 
> performance changed but there's no enough data in this way to figure out what 
> exactly happened:
>  - did it change at all or was it just because of a lucky optimisation? (The 
> JIT will always optimise stuff differently even when re-running the same code)
>  - did the overall picture improve because this code became much *less* 
> slower?
> 
> The real complexity in benchmarking is to accurately understand why it 
> changed; this should also tell you why it didn't change more, or less..
> 
> To be fair I actually agree that it's very likely that C2 can make any 
> performance penalty disappear.. that's totally possible, although it's 
> unlikely to be faster than just reading the field (assuming we don't need to 
> do branching because of null-checks but C2 can optimise that as well).
> Still this requires the code to be optimised by JIT first, so it won't 
> prevent us from creating a gazillion of instances if we abuse its usage 
> irresponsibly. Fighting internal NPEs is a matter of writing better code; I'm 
> not against some "Optional" being strategically placed but I believe it's 
> much nicer for most internal code to just avoid null, use "final", and 
> initialize things aggressively.
> 
> Sure use Optional where it makes sense, probably most on APIs and SPIs, but 
> please don't go overboard with it in internals. That's all I said in the 
> original debate.
> 
> In case you want to benchmark the impact of Optional make a JMH based 
> microbenchmark - that's interesting to see what C2 is capable of - but even 
> so that's not going to tell you much on the impact it would have to patch 
> thousands of code all around Infinispan. And it will need some peer review 
> before it can tell you anything at all ;)
> 
> It's actually a very challenging topic, as we produce libraries meant for 
> "anyone to use" and don't get to set the hardware specification requirements 
> it's hard to predict if we should optimise the system for this/that resource 
> consumption. Some people will have plenty of CPU and have problems with us 
> needing too much memory, some others will have the opposite.. the real 
> challenge is in making internals "elastic" to such factors and adaptable 
> without making it too hard to tune.
> 
> Thanks,
> Sanne
> 
> 
> 
> On 18 May 2017 at 12:30, Sebastian Laskawiec  wrote:
> Hey!
> 
> In our past we had a couple of discussions about whether we should or should 
> not use Optionals [1][2]. The main argument against it was performance. 
> 
> On one hand we risk additional object allocation (the Optional itself) and 
> wrong inlining decisions taken by C2 compiler [3]. On the other hand we all 
> probably "feel" that both of those things shouldn't be a problem and should 
> be optimized by C2. Another argument was the Optional's doesn't give us 
> anything but as I checked, we introduced nearly 80 NullPointerException bugs 
> in two years [4]. So we might consider Optional as a way of fighting those 
> things. T

Re: [infinispan-dev] In Memory Data Grid Patterns Demos from Devoxx France!

2017-05-22 Thread Galder Zamarreño
Another thing: isn't the package.json file missing dependencies?

https://github.com/vjuranek/tf-ispn-demo/blob/master/nodejs-consumer/package.json

It should have the infinispan dependency, version 0.4.0 or higher.
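
i.e., something along these lines:

  {
    "dependencies": {
      "infinispan": "^0.4.0"
    }
  }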

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 22 May 2017, at 15:50, Galder Zamarreño  wrote:
> 
> Hey Vojtech,
> 
> Really cool demo!!
> 
> As you know, we've created an organization to keep infinispan related demos 
> called `infinispan-demos`
> 
> Can you transfer that demo to the infinispan-demos organization?
> 
> https://help.github.com/articles/about-repository-transfers/
> 
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
>> On 12 Apr 2017, at 09:13, Vojtech Juranek  wrote:
>> 
>> Thanks for sharing, nice demos!
>> 
> On a similar data processing note, here [1] is my demo from DevConf on how to 
> use ISPN in a machine learning pipeline (here the data is not processed 
> directly in ISPN but in TensorFlow)
>> 
>> [1] https://github.com/vjuranek/tf-ispn-demo
>> 
>> On pátek 7. dubna 2017 10:48:23 CEST Galder Zamarreño wrote:
>>> Hi all,
>>> 
>>> I've just got back from Devoxx France where Emmanuel and I presented about
>>> in-memory data grid use cases, and during this talk we presented a couple
>>> of demos on using Infinispan for offline analytics and real-time data
>>> processing.
>>> 
>>> I've just created a new blog post with some very quick instructions for you
>>> to run these demos:
>>> http://blog.infinispan.org/2017/04/in-memory-data-grid-patterns-demos-from.
>>> html
>>> 
>>> Give them a try and let us know what you think!
>>> 
>>> Cheers,
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>> 
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] In Memory Data Grid Patterns Demos from Devoxx France!

2017-05-22 Thread Galder Zamarreño
Hey Vojtech,

Really cool demo!!

As you know, we've created an organization to keep infinispan related demos 
called `infinispan-demos`

Can you transfer that demo to the infinispan-demos organization?

https://help.github.com/articles/about-repository-transfers/

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 12 Apr 2017, at 09:13, Vojtech Juranek  wrote:
> 
> Thanks for sharing, nice demos!
> 
> On a similar data processing note, here [1] is my demo from DevConf how to 
> use 
> ISPN in a machine learning pipeline (here the data is not processed directly in 
> ISPN but in TensorFlow)
> 
> [1] https://github.com/vjuranek/tf-ispn-demo
> 
> On Friday 7 April 2017 10:48:23 CEST Galder Zamarreño wrote:
>> Hi all,
>> 
>> I've just got back from Devoxx France where Emmanuel and I presented about
>> in-memory data grid use cases, and during this talk we presented a couple
>> of demos on using Infinispan for offline analytics and real-time data
>> processing.
>> 
>> I've just created a new blog post with some very quick instructions for you
>> to run these demos:
>> http://blog.infinispan.org/2017/04/in-memory-data-grid-patterns-demos-from.
>> html
>> 
>> Give them a try and let us know what you think!
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] REST Refactoring - breaking changes

2017-05-22 Thread Galder Zamarreño
All look good to me :)

Thanks Sebastian!
--
Galder Zamarreño
Infinispan, Red Hat

> On 16 May 2017, at 11:05, Sebastian Laskawiec  wrote:
> 
> Hey guys!
> 
> I'm working on the REST Server refactoring and I changed some of the previous 
> behavior. Bearing in mind that we are implementing this in a minor release, I 
> tried to make those changes really cosmetic:
>   • RESTEasy as well as the Servlet API have been removed from modules and 
> BOM. If your app relied on them, you'll need to specify them separately in your 
> pom.
>   • The previous implementation picked application/text as the default content 
> type. I replaced it with text/plain with a charset, which is more precise and 
> seems to be more widely adopted.
>   • Putting an entry without any TTL or Idle Time made it live forever 
> (which was BTW aligned with the docs). I switched to server-configured 
> defaults in this case. If you want to have an entry that lives forever, just 
> specify 0 or -1 there.
>   • Requesting an entry with a wrong MIME type (imagine it was stored using 
> application/octet-stream and now you're requesting text/plain) caused a Bad 
> Request. Now I switched it to Not Acceptable, which was designed specifically 
> to cover this type of use case.
>   • In compatibility mode the server often tried to "guess" the MIME type 
> (the decision was often between text/plain and application/octet-stream). I 
> honestly think it was a wrong move that made the server-side code very hard to 
> read and its results hard to predict. Now the server always returns 
> text/plain by default. If you want to get a byte stream back, just add 
> `Accept: application/octet-stream`.
>   • The server can be started with port 0. This way you are 100% sure 
> that it will start using a unique port without colliding with any other 
> service.
>   • The REST server hosts an HTML page if queried using GET on the default 
> context. I think it was a bug that it didn't work correctly before.
>   • UTF-8 charset is now the default. You may always ask the server to 
> return a different encoding using the Accept header. The charset is not 
> returned with binary MIME types.
>   • If a HEAD request results in an error, a message will be returned to 
> the client. Even though this behavior breaks the Commons HTTP Client (HEAD 
> requests are handled slightly differently, and a returned payload causes the 
> client to hang), I think it's beneficial to tell the user what went 
> wrong. It's worth mentioning that the Jetty/Netty HTTP clients work correctly.
>   • RestServer doesn't implement Lifecycle now. The protocol server 
> doesn't support a start() method without any arguments. You always need to 
> specify a configuration + an Embedded Cache Manager.
> Even though it's a long list, I think all those changes were worth it. Please 
> let me know if you don't agree.
> 
> Thanks,
> Sebastian
> 
> -- 
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
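
By way of example, fetching an entry back as raw bytes under the new defaults 
could look like this (a sketch; host, port, cache and key names are 
assumptions):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestGetBytes {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:8080/rest/default/someKey");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Without this header the server now answers with text/plain (UTF-8);
        // asking for a type the entry can't serve yields 406 Not Acceptable.
        conn.setRequestProperty("Accept", "application/octet-stream");
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) {
                bytes.write(buf, 0, n);
            }
        }
        System.out.println(conn.getResponseCode() + ": " + bytes.size() + " bytes");
    }
}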


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] IRC chat: HB + I9

2017-05-16 Thread Galder Zamarreño
I'm on the move, not sure if Paul/Radim saw my replies:

 galderz, rvansa: Hey guys - is there a plan for Hibernate &
ISPN 9?
 pferraro: Galder has been working on that
 pferraro: though I haven't seen any results but a list of
stuff that needs to be changed
 galderz: which Hibernate branch are you targeting?
 pferraro: 5.2, but there are minute differences between 5.x
in terms of the parts that need love to get Infinispan 9 support
 rvansa: are you suggesting that 5.0 or 5.1 branches will be
adapted to additionally support infinispan 9?  how is that
possible?
> pferraro: i'm working on it as we speak...
> pferraro: down to 16 failures
> pferraro: i started a couple of months ago, but had talks/demos to
prepare
> pferraro: i've got back to working on it this week
...
> pferraro: rvansa 
> rvansa: minute differences my ass ;p
> pferraro: did you see my replies?
> i got disconnected while replying...
 hmm - no - I didn't
 galderz: ^
> pferraro: so, working on the HB + I9 integration as we speak
> pferraro: i started a couple of months back but had talks/demos to
prepare and had to put that aside
> pferraro: i'm down to 16 failures
> pferraro: serious refactoring required of the integration to get it
to compile and the tests to pass
> pferraro: need to switch to async interceptor stack in 2lc
integration and get all the subtle changes right
> pferraro: it's a painstaking job basically
> pferraro: i'm working on
https://github.com/galderz/hibernate-orm/tree/t_i9x_v2
> pferraro: i can't remember where i branched off, but it's a branch
that steve had since master was focused on 5.x
> pferraro: i've no idea when/where we'll integrate this, but one
thing is for sure: it's nowhere near backwards compatible
> actually, fixed one this morning, so down to 15 failures
> pferraro: any suggestions/wishes?
> is anyone out there? ;)

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] to be a command, or not to be a command, that is the question

2017-05-15 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 9 May 2017, at 20:39, Radim Vansa  wrote:
> 
> On 05/08/2017 09:58 AM, Galder Zamarreño wrote:
>> Hey Katia,
>> 
>> Sorry for the delay in replying! I'm surprised there has not been more 
>> feedback. My position on this is well known around the team, so let me 
>> summarise it:
>> 
>> My feeling has always been that we have too many commands and we should 
>> reduce the number of commands. Part of the functional map experiment was to show 
>> that, with a subset of commands, all sorts of front-end operations could be 
>> exposed. So, I'm on Radim's side on this. By passing functions/lambdas, we 
>> get a lot of flexibility with very little cost. IOW, we can add more 
>> operations by just passing in different lambdas to existing commands.
>> 
>> However, it is true that having different front API methods that only differ 
>> in the lambda makes it initially hard to potentially do different things for 
>> each, but couldn't that be solved with some kind of enum?
>> 
>> Although enums are useful, they're a bit limited, e.g. they don't take params, 
>> so, since you've done Scala before, maybe this could be solved with some 
>> Scala-like sealed trait for each front end operation type? I used something 
>> like a sealed trait for implementing a more flexible flag system for 
>> functional map API called org.infinispan.commons.api.functional.Param
> 
> Do I understand correctly that you're suggesting to add a enum to 
> ReadWriteKeyValueCommand that will say "behave like eval 
> (current)/compute*/merge"? How is that different from just wrapping the 
> 'user function' into adapting function (with registered externalizer == 
> marshalling to just 1-2 bytes)?
> 
> Handling such an enum in interceptors is not better than having an additional 
> visitX method. And not handling it does not allow you to apply the 
> optimizations which Katia named as reason #1 to have the separate 
> commands.

TBH, ideally I wouldn't like to have any enums at all since that defeats the 
purpose of having commands that carry transparent lambdas. The commands 
themselves, whether Read-Only, Read-Write or Write-Only, should be enough 
distinction to do what you need to do...

However, in real life, I'm not 100% sure if that'd be enough to do what we 
do... Maybe better than enums, there could be special lambda-bearing commands.

> 
>> The problem I have with adding more commands is the explosion that it 
>> provokes in terms of code, with all the required visit* method impls all 
>> over the place...etc.
>> 
>> I personally think that the lack of a more flexible command architecture is 
>> what has stopped us from adding front-end operations more quickly (e.g. 
>> counters, multi-maps...etc). IMO, working with generic commands that take 
>> lambdas is a way to strike a balance between adding front-end operations 
>> quickly and not resulting in a huge explosion of commands.
> 
> So your final verdict is -1 to separate commands?

Yeah.

However, I'd say that this is all a semi-internal implementation detail that we 
can change relatively easily. So even if work has already been done using 
separate commands, we should be able to change that down the line.

I call it semi-internal because, since our interceptor stack is configurable by 
the user, an advanced user might some day add an interceptor that visits a 
certain command...

Cheers,

> 
> R.
> 
> PS: besides DRY, my vote for the use of functional commands is that it 
> would encourage us to fix the rest of the parts that might not be 
> working properly - e.g. QueryInterceptor was not updated with the 
> functional stuff (but QI is broken in more ways [1])
> 
> [1] https://issues.jboss.org/browse/ISPN-7806
> 
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>>> On 20 Apr 2017, at 16:06, Katia Aresti  wrote:
>>> 
>>> Hi all
>>> 
>>> Well, nobody spoke, so I consider that everybody agrees that I can take a 
>>> decision like a big girl by myself ! :)
>>> 
>>> I'm going to add 3 new commands, for merge, compute&computeIfPresent and 
>>> computeIfAbsent. So I won't use the existing commands (ReadWriteKeyCommand 
>>> and ReadWriteKeyValueCommand) for the implementation: even if I'm a DRY 
>>> person and I love reusing code, I'm a KISS person too.
>>> 
>>> I tested the implementation using these functional commands and IMHO :
>>> - merge and compute methods are worth their own commands, they are very useful 
&

Re: [infinispan-dev] Running an Infinispan cluster on Kubernetes / Google Container Engine

2017-05-12 Thread Galder Zamarreño
Awesome!!! Can't wait to try it out :)

--
Galder Zamarreño
Infinispan, Red Hat

> On 8 May 2017, at 17:14, Bela Ban  wrote:
> 
> FYI: http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html
> -- 
> Bela Ban | http://www.jgroups.org
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Hot Rod secured by default

2017-05-09 Thread Galder Zamarreño
Hi all,

Tristan and I had a chat yesterday and I've distilled the contents of the 
discussion and the feedback here into a JIRA [1]. The JIRA contains several 
subtasks to handle these aspects:

1. Remove auth check in server's CacheDecodeContext.
2. Default server configuration should require authentication in all entry 
points.
3. Provide an unauthenticated configuration that users can easily switch to.
4. Remove default username+passwords in docker image and instead show an 
info/warn message when these are not provided.
5. Add capability to pass in app user role groups to the docker image easily, so 
that it's easy to add authorization on top of the server.
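
For reference, once the default configuration requires authentication, a Hot 
Rod client set up along these lines should be able to connect (a sketch: the 
mechanism, credentials and realm are illustrative, and the authentication 
convenience methods are assumed from recent client versions):

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class SecuredHotRodClient {
    public static RemoteCacheManager connect() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222)
               .security().authentication()
                  .enable()
                  .saslMechanism("DIGEST-MD5")
                  .username("user")                    // created via add-user
                  .password("changeme".toCharArray())
                  .realm("ApplicationRealm");
        return new RemoteCacheManager(builder.build());
    }
}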

Cheers,

[1] https://issues.jboss.org/browse/ISPN-7811
--
Galder Zamarreño
Infinispan, Red Hat

> On 19 Apr 2017, at 12:04, Tristan Tarrant  wrote:
> 
> That is caused by not wrapping the calls in PrivilegedActions in all the 
> correct places and is a bug.
> 
> Tristan
> 
> On 19/04/2017 11:34, Sebastian Laskawiec wrote:
>> The proposal look ok to me.
>> 
>> But I would also like to highlight one thing - it seems you can't access 
>> secured cache properties using CLI. This seems wrong to me (if you can 
>> invoke the cli, in 99.99% of the cases you have access to the machine, 
>> so you can do whatever you want). It also breaks healthchecks in the Docker 
>> image.
>> 
>> I would like to make sure we will address those concerns.
>> 
>> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant > <mailto:ttarr...@redhat.com>> wrote:
>> 
>>Currently the "protected cache access" security is implemented as
>>follows:
>> 
>>- if authorization is enabled || client is on loopback
>>allow
>> 
>>The first check also implies that authentication needs to be in place,
>>as the authorization checks need a valid Subject.
>> 
>>Unfortunately authorization is very heavy-weight and actually overkill
>>even for "normal" secure usage.
>> 
>>My proposal is as follows:
>>- the "default" configuration files are "secure" by default
>>- provide clearly marked "unsecured" configuration files, which the user
>>can use
>>- drop the "protected cache" check completely
>> 
>>And definitely NO to a dev switch.
>> 
>>Tristan
>> 
>>On 19/04/2017 10:05, Galder Zamarreño wrote:
>>> Agree with Wolf. Let's keep it simple by just providing extra
>>configuration files for dev/unsecure envs.
>>> 
>>> Cheers,
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>> 
>>>> On 15 Apr 2017, at 12:57, Wolf Fink ><mailto:wf...@redhat.com>> wrote:
>>>> 
>>>> I would think a "switch" can have other impacts as you need to
>>check it in the code - and might have security leaks here
>>>> 
>>>> So what is wrong with some configurations which are the default
>>and secured.
>>>> and a "*-dev or *-unsecure" configuration to start easy.
>>>> Also this can be used in production if there is no need for security
>>>> 
>>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec
>>    mailto:slask...@redhat.com>> wrote:
>>>> I still think it would be better to create an extra switch to
>>run infinispan in "development mode". This means no authentication,
>>no encryption, possibly with JGroups stack tuned for fast discovery
>>(especially in Kubernetes) and a big warning saying "You are in
>>development mode, do not use this in production".
>>>> 
>>>> Just something very easy to get you going.
>>>> 
>>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarreño
>>mailto:gal...@redhat.com>> wrote:
>>>> 
>>>> --
>>>> Galder Zamarreño
>>>> Infinispan, Red Hat
>>>> 
>>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes
>>mailto:gust...@infinispan.org>> wrote:
>>>>> 
>>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarreño
>>mailto:gal...@redhat.com>> wrote:
>>>>> Hi all,
>>>>> 
>>>>> As per some discussions we had yesterday on IRC w/ Tristan,
>>Gustavo and Sebastian, I've created a docker image snapshot that
>>reverts the change that stopped protected caches from requiring security
>>to be enabled [1].
>>>>> 
>>>>> In other words, I've removed

Re: [infinispan-dev] to be a command, or not to be a command, that is the question

2017-05-08 Thread Galder Zamarreño
Hey Katia,

Sorry for the delay in replying! I'm surprised there has not been more feedback. 
My position on this is well known around the team, so let me summarise it:

My feeling has always been that we have too many commands and we should reduce 
the number of commands. Part of the functional map experiment was to show that, 
with a subset of commands, all sorts of front-end operations could be exposed. So, I'm 
on Radim's side on this. By passing functions/lambdas, we get a lot of 
flexibility with very little cost. IOW, we can add more operations by just 
passing in different lambdas to existing commands.
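
To make the idea concrete, here is a sketch (types from the experimental 
functional API referenced below; exact signatures may differ) of compute-like 
behaviour expressed as a lambda rather than a dedicated command:

import org.infinispan.commons.api.functional.EntryView.ReadWriteEntryView;

public final class ComputeAsLambda {
    // The function a generic read-write command would carry for "compute":
    static <K> String compute(ReadWriteEntryView<K, String> view) {
        String newValue = view.find().orElse("") + "-computed";
        view.set(newValue);  // write the computed value back into the entry
        return newValue;
    }
}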

However, it is true that having different front API methods that only differ in 
the lambda makes it initially hard to potentially do different things for each, 
but couldn't that be solved with some kind of enum?

Although enums are useful, they're a bit limited, e.g. they don't take params, 
so, since you've done Scala before, maybe this could be solved with some Scala-like 
sealed trait for each front end operation type? I used something like a sealed 
trait for implementing a more flexible flag system for functional map API 
called org.infinispan.commons.api.functional.Param
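
By way of illustration, the "sealed trait" style in Java boils down to a 
closed interface with a fixed set of implementations, each able to carry its 
own parameters (hypothetical names):

public interface FrontEndOp {
    // No parameters needed for this operation type.
    final class Compute implements FrontEndOp { }

    // Unlike a plain enum constant, this type carries a per-operation param.
    final class Merge implements FrontEndOp {
        public final int maxRetries;
        public Merge(int maxRetries) { this.maxRetries = maxRetries; }
    }
}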

The problem I have with adding more commands is the explosion that it provokes 
in terms of code, with all the required visit* method impls all over the 
place...etc.

I personally think that the lack of a more flexible command architecture is 
what has stopped us from adding front-end operations more quickly (e.g. 
counters, multi-maps...etc). IMO, working with generic commands that take 
lambdas is a way to strike a balance between adding front-end operations 
quickly and not resulting in a huge explosion of commands.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 20 Apr 2017, at 16:06, Katia Aresti  wrote:
> 
> Hi all
> 
> Well, nobody spoke, so I consider that everybody agrees that I can take a 
> decision like a big girl by myself ! :) 
> 
> I'm going to add 3 new commands, for merge, compute&computeIfPresent and 
> computeIfAbsent. So I won't use the actual existing commands for the 
> implementation : ReadWriteKeyCommand and ReadWriteKeyValueCommand even if I'm 
> a DRY person and I love reusing code, I'm a KISS person too.
> 
> I tested the implementation using these functional commands and IMHO :
> - merge and compute methods are worth their own commands, they are very useful 
> and we might want to adjust/optimize them individually 
> - there are some technical issues related to the 
> TypeConverterDelegatingAdvancedCache that makes me modify these existing 
> functional commands with some hacky code that, for me, should be kept in 
> commands like merge or compute with the correct documentation. They don't 
> belong to a generic command.
> - Functional API is experimental right now. It might be non experimental in 
> the near future, but we might decide to move to another thing. The 3 commands 
> are already "coded" in my branches (not everything reviewed yet but soon). If 
> one day we decide to change/simplify or we find a nice way to get rid of 
> commands with a more generic one, removing and simplifying should be less 
> painful than adding commands for these methods.
> 
> That's all !
> 
> Cheers
> 
> Katia
> 
> 
> 
> On Wed, Apr 12, 2017 at 12:11 PM, Katia Aresti  wrote:
> Hi all,
> 
> As you might know I'm working since my arrival, among other things, on 
> ISPN-5728 Jira [1], where the idea is to override the default ConcurrentMap 
> methods that are missing in CacheImpl (merge, replaceAll, compute ... )
> 
> I've created a pull-request [2] for compute, computeIfAbsent and 
> computeIfPresent methods, creating two new commands. By the way, I did the 
> same thing for the merge method in a branch that I haven't pull requested yet.
> 
> There are opposing views between Radim and Will concerning the 
> implementation of these methods. To make it short:
> In one side Will considers compute/merge best implementation should be as a 
> new Command (so what is already done)
> In the other side, Radim considers adding another command is not necessary as 
> we could simple implement these methods using ReadWriteKeyCommand
> 
> The detailed discussion and arguments of both sides is on GitHub [2]
> 
> Before moving forward and making any choice by myself, I would like to hear 
> your opinions. For the record, it doesn't bother me redoing everything if 
> most people think like Radim because working on commands has helped me to 
> learn and understand more about infinispan internals, so this hasn't been a 
> waste of time for me.
> 
> Katia
> 
> [1] https://issues.jboss.org/browse/ISPN-5728
> [2] https://github.com/infinispan/infinis

Re: [infinispan-dev] HotRod client TCK

2017-05-08 Thread Galder Zamarreño
Btw, thanks Anna for working on this!

I've had a look at the list and I have some questions:

* HotRodAsyncReplicationTest: I don't think it should be a client TCK test. 
There's nothing the client does differently compared to executing against a 
sync repl cache. If anything, it's a server TCK test since it verifies that a 
put sent by a HR client gets replicated. The same applies to all the 
local vs REPL vs DIST tests.

* LockingTest: same story, this is a client+server integration test, I don't 
think it's a client TCK test. If anything, it's a server TCK test. It verifies 
that if a client sends a put, the entry is locked.

* MixedExpiry*Test: it's dependent on the server configuration, not really a 
client TCK test IMO. I think the only client TCK tests that deal with expiry 
should verify that the entry is expirable if the client decides to make it 
expirable (see the sketch after this list).

* ClientListenerRemoveOnStopTest: Not sure this is a client TCK test. Yeah, it 
verifies that the client removes its listeners on stop, but it's not a Hot Rod 
protocol TCK test. Going back to what Radim said, how are you going to verify 
each client does this? What we can verify easily for all clients is that they 
send the commands to remove the client listeners to the server. Maybe for these 
and the client-specific-logic tests below, as Martin suggests, we go with the 
approach of just verifying that the tests exist.

* Protobuf marshaller tests: client specific and testing client-side 
marshalling logic. Same reasons above.

* Near caching tests: client specific and testing client-side near caching 
logic. Same issues above.

* Topology change tests: I consider these TCK tests because you could think that 
if the server sends a new topology, the client's next command should have the ID 
of this topology in its header.

* Failover/Retry tests: client specific and testing client-side retry logic. 
Same issues above: how do you verify it works across the board for all clients?

* Socket timeout tests: again these are client specific...

I think in general it'd be a good idea to try to verify most of the TCK 
via some server-side logic, as Radim hinted, and where that's not possible, 
revert to just verifying the client has tests to cover certain scenarios.
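
Regarding the expiry point above, the client-side contract worth testing is 
simply that an entry the client makes expirable actually expires; a sketch 
(cache and key names are made up):

import java.util.concurrent.TimeUnit;
import org.infinispan.client.hotrod.RemoteCache;

public class ExpirySketch {
    static void putExpirable(RemoteCache<String, String> cache) {
        // The client, not the server config, decides this entry's lifespan.
        cache.put("key", "value", 1, TimeUnit.SECONDS);
    }
}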

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 11 Apr 2017, at 14:33, Martin Gencur  wrote:
> 
> Hello all,
> we have been working on https://issues.jboss.org/browse/ISPN-7120.
> 
> Anna has finished the first step from the JIRA - collecting information 
> about tests in the Java HotRod client test suite (including server 
> integration tests) and it is now prepared for wider review.
> 
> She created a spreadsheet [1]. The spreadsheet includes for each Java 
> test its name, the suggested target package in the TCK, whether to 
> include it in the TCK or not, and some other notes. The suggested 
> package also poses grouping for the tests (e.g. tck.query, tck.near, 
> tck.xsite, ...)
> 
> Let me add that right now the goal is not to create a true TCK [2]. The 
> goal is to make sure that all implementations of the HotRod protocol 
> have sufficient test coverage and possibly the same server side of the 
> client-server test (including the server version and configuration).
> 
> What are the next steps?
> 
> * Please review the list (at least a quick look) and see if some of the 
> tests which are NOT suggested for the TCK should be added or vice versa.
> * I suppose the next step would then be to check other implementations 
> (C#, C++, NodeJS, ..) and identify tests which are missing there (there 
> will surely be some).
> * Gradually implement the missing tests in the other implementations
>   Note: Here we should ensure that the server is configured in the same 
> way for all implementations. One way to achieve this (thanks Anna for 
> suggestion!) is to have a shell/batch scripts for CLI which would be 
> executed before the tests. This can probably be done for all impls. and 
> both UNIX/WINDOWS. I also realize that my PR for ISPN [3] becomes 
> useless because it uses Creaper (Java) and we need a language-neutral 
> solution for configuring the server.
> 
> Some other notes:
> * there are some duplicated tests in hotrod-client and server 
> integration test suites, in this case it probably makes sense to only 
> include in the TCK the server integration test
> * tests from the hotrod-client module which are supposed to be part of 
> the TCK should be copied to the server integration test suite one day 
> (possibly later)
> 
> Please let us know what you think.
> 
> Thanks,
> Martin
> 
> 
> [1] 
> https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0
> [2] https://en.wikipedia.org/wiki/Technology_Co

Re: [infinispan-dev] HotRod client TCK

2017-05-08 Thread Galder Zamarreño
I think there's some value in Radim's suggestion. The email was not fully clear 
to me initially, but after reading it a few times I understood what he was 
referring to. @Radim, correct me if I'm wrong...

Right now clients verify that they behave as expected, e.g. JS client uses its 
asserts, Java client uses other asserts. What Radim is trying to say is that 
there needs to be a way to verify they work adequately independent of their 
implementations.

So, the only way to do that is to verify it at the server level. Not sure what 
exactly he means by the fake server, but more than a fake server, I'd be more 
inclined to modify the server so that it can somehow act as a TCK verifier. This 
is to avoid having to reimplement transport logic, the protocol decoder, etc. in 
a new fake server.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 11 Apr 2017, at 15:57, Radim Vansa  wrote:
> 
> Since these tests use real server(s), many of them test not only the 
> client behaviour (generating correct commands according to the 
> protocol), but server, too. While this is practical (we need to test 
> server somehow, too), there's nothing all the tests across languages 
> will have physically in common and all comparison is prone to human error.
> 
> If we want to test various implementations of the client, maybe it would 
> make sense to give the clients a fake server that will have just a 
> scenario of expected commands to receive and pre-defined responses. We 
> could use audit log to generate such scenario based on the actual Java 
> tests.
> 
> But then we'd have to test the actual behaviour on server, and we'd need 
> a way to issue the commands.
> 
> Just my 2c
> 
> Radim
> 
> On 04/11/2017 02:33 PM, Martin Gencur wrote:
>> Hello all,
>> we have been working on https://issues.jboss.org/browse/ISPN-7120.
>> 
>> Anna has finished the first step from the JIRA - collecting information
>> about tests in the Java HotRod client test suite (including server
>> integration tests) and it is now prepared for wider review.
>> 
>> She created a spreadsheet [1]. The spreadsheet includes for each Java
>> test its name, the suggested target package in the TCK, whether to
>> include it in the TCK or not, and some other notes. The suggested
>> package also poses grouping for the tests (e.g. tck.query, tck.near,
>> tck.xsite, ...)
>> 
>> Let me add that right now the goal is not to create a true TCK [2]. The
>> goal is to make sure that all implementations of the HotRod protocol
>> have sufficient test coverage and possibly the same server side of the
>> client-server test (including the server version and configuration).
>> 
>> What are the next steps?
>> 
>> * Please review the list (at least a quick look) and see if some of the
>> tests which are NOT suggested for the TCK should be added or vice versa.
>> * I suppose the next step would then be to check other implementations
>> (C#, C++, NodeJS, ..) and identify tests which are missing there (there
>> will surely be some).
>> * Gradually implement the missing tests in the other implementations
>>Note: Here we should ensure that the server is configured in the same
>> way for all implementations. One way to achieve this (thanks Anna for
>> suggestion!) is to have a shell/batch scripts for CLI which would be
>> executed before the tests. This can probably be done for all impls. and
>> both UNIX/WINDOWS. I also realize that my PR for ISPN [3] becomes
>> useless because it uses Creaper (Java) and we need a language-neutral
>> solution for configuring the server.
>> 
>> Some other notes:
>> * there are some duplicated tests in hotrod-client and server
>> integration test suites, in this case it probably makes sense to only
>> include in the TCK the server integration test
>> * tests from the hotrod-client module which are supposed to be part of
>> the TCK should be copied to the server integration test suite one day
>> (possibly later)
>> 
>> Please let us know what you think.
>> 
>> Thanks,
>> Martin
>> 
>> 
>> [1]
>> https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0
>> [2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit
>> [3] https://github.com/infinispan/infinispan/pull/5012
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> -- 
> Radim Vansa 
> JBoss Performance Team
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] All jars must go?

2017-05-04 Thread Galder Zamarreño
Hi all,

As you might already know, there have been big debates about the upcoming Java 9 
module system.

Recently Stephen Colebourne, creator of Joda-Time, posted his thoughts [1].

Stephen mentions some potential problems with "all" jars, since no two modules 
should contain the same package. We know from past experience that using these 
jars as dependencies in Maven creates all sorts of problems, but with the new 
JPMS they might not even work?

Have we tried the all jars in Java 9? I'm wondering whether Stephen's problems 
with all jars are truly founded, since Java itself offers no publishing 
mechanism. For the problem Stephen mentions to appear, you'd have to have, at 
runtime, an all jar alongside the individual jars, in which case it would fail. 
But as long as Maven does not enforce this in its repos, I think it's fine. If 
Maven starts enforcing this on the jars stored in Maven repos, then yeah, we 
have a big problem.

Thoughts?

Cheers,

[1] http://blog.joda.org/2017/04/java-se-9-jpms-module-naming.html
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Hot Rod secured by default

2017-04-19 Thread Galder Zamarreño
+100

--
Galder Zamarreño
Infinispan, Red Hat

> On 19 Apr 2017, at 10:57, Tristan Tarrant  wrote:
> 
> Currently the "protected cache access" security is implemented as follows:
> 
> - if authorization is enabled || client is on loopback
>   allow
> 
> The first check also implies that authentication needs to be in place, 
> as the authorization checks need a valid Subject.
> 
> Unfortunately authorization is very heavy-weight and actually overkill 
> even for "normal" secure usage.
> 
> My proposal is as follows:
> - the "default" configuration files are "secure" by default
> - provide clearly marked "unsecured" configuration files, which the user 
> can use
> - drop the "protected cache" check completely
> 
> And definitely NO to a dev switch.
> 
> Tristan
> 
> On 19/04/2017 10:05, Galder Zamarreño wrote:
>> Agree with Wolf. Let's keep it simple by just providing extra configuration 
>> files for dev/unsecure envs.
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>>> On 15 Apr 2017, at 12:57, Wolf Fink  wrote:
>>> 
>>> I would think a "switch" can have other impacts as you need to check it in 
>>> the code - and might have security leaks here
>>> 
>>> So what is wrong with some configurations which are the default and secured.
>>> and a "*-dev or *-unsecure" configuration to start easy.
>>> Also this can be used in production if there is no need for security
>>> 
>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec  
>>> wrote:
>>> I still think it would be better to create an extra switch to run 
>>> infinispan in "development mode". This means no authentication, no 
>>> encryption, possibly with JGroups stack tuned for fast discovery 
>>> (especially in Kubernetes) and a big warning saying "You are in development 
>>> mode, do not use this in production".
>>> 
>>> Just something very easy to get you going.
>>> 
>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarreño  wrote:
>>> 
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>> 
>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes  wrote:
>>>> 
>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarreño  
>>>> wrote:
>>>> Hi all,
>>>> 
>>>> As per some discussions we had yesterday on IRC w/ Tristan, Gustavo and 
>>>> Sebastian, I've created a docker image snapshot that reverts the change that 
>>>> stopped protected caches from requiring security to be enabled [1].
>>>> 
>>>> In other words, I've removed [2]. The reason for temporarily doing that is 
>>>> because with the change as is, the changes required for a default server 
>>>> distro require that the entire cache manager's security is enabled. This 
>>>> in turn creates a lot of problems with health and running checks used 
>>>> by Kubernetes/OpenShift amongst other things.
>>>> 
>>>> Judging from our discussions on IRC, the idea is for such a change to be 
>>>> present in 9.0.1, but I'd like to get final confirmation from Tristan et 
>>>> al.
>>>> 
>>>> 
>>>> +1
>>>> 
>>>> Regarding the "security by default" discussion, I think we should ship 
>>>> configurations cloud.xml, clustered.xml and standalone.xml with security 
>>>> enabled and disabled variants, and let users
>>>> decide which one to pick based on the use case.
>>> 
>>> I think that's a better idea.
>>> 
>>> We could by default have a secured one, but switching to an insecure 
>>> configuration should be doable with minimal effort, e.g. just switching 
>>> config file.
>>> 
>>> As highlighted above, any secured configuration should work out-of-the-box 
>>> with our docker images, e.g. WRT healthy/running checks.
>>> 
>>> Cheers,
>>> 
>>>> 
>>>> Gustavo.
>>>> 
>>>> 
>>>> Cheers,
>>>> 
>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ 
>>>> (9.0.1-SNAPSHOT tag for anyone interested)
>>>> [2] 
>>>> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118
>>>>

Re: [infinispan-dev] Hot Rod secured by default

2017-04-19 Thread Galder Zamarreño
Agree with Wolf. Let's keep it simple by just providing extra configuration 
files for dev/unsecure envs.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 15 Apr 2017, at 12:57, Wolf Fink  wrote:
> 
> I would think a "switch" can have other impacts as you need to check it in 
> the code - and might have security leaks here
> 
> So what is wrong with some configurations which are the default and secured.
> and a "*-dev or *-unsecure" configuration to start easy.
> Also this can be used in production if there is no need for security
> 
> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec  
> wrote:
> I still think it would be better to create an extra switch to run infinispan 
> in "development mode". This means no authentication, no encryption, possibly 
> with JGroups stack tuned for fast discovery (especially in Kubernetes) and a 
> big warning saying "You are in development mode, do not use this in 
> production".
> 
> Just something very easy to get you going.
> 
> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarreño  wrote:
> 
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
> > On 13 Apr 2017, at 09:50, Gustavo Fernandes  wrote:
> >
> > On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarreño  wrote:
> > Hi all,
> >
> > As per some discussions we had yesterday on IRC w/ Tristan, Gustavo and 
> > Sebastian, I've created a docker image snapshot that reverts the change that 
> > stopped protected caches from requiring security to be enabled [1].
> >
> > In other words, I've removed [2]. The reason for temporarily doing that is 
> > because with the change as is, the changes required for a default server 
> > distro require that the entire cache manager's security is enabled. This 
> > in turn creates a lot of problems with health and running checks used by 
> > Kubernetes/OpenShift amongst other things.
> >
> > Judging from our discussions on IRC, the idea is for such a change to be 
> > present in 9.0.1, but I'd like to get final confirmation from Tristan et al.
> >
> >
> > +1
> >
> > Regarding the "security by default" discussion, I think we should ship 
> > configurations cloud.xml, clustered.xml and standalone.xml with security 
> > enabled and disabled variants, and let users
> > decide which one to pick based on the use case.
> 
> I think that's a better idea.
> 
> We could by default have a secured one, but switching to an insecure 
> configuration should be doable with minimal effort, e.g. just switching 
> config file.
> 
> As highlighted above, any secured configuration should work out-of-the-box 
> with our docker images, e.g. WRT healthy/running checks.
> 
> Cheers,
> 
> >
> > Gustavo.
> >
> >
> > Cheers,
> >
> > [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ 
> > (9.0.1-SNAPSHOT tag for anyone interested)
> > [2] 
> > https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118
> > --
> > Galder Zamarreño
> > Infinispan, Red Hat
> >
> > > On 30 Mar 2017, at 14:25, Tristan Tarrant  wrote:
> > >
> > > Dear all,
> > >
> > > after a mini chat on IRC, I wanted to bring this to everybody's attention.
> > >
> > > We should make the Hot Rod endpoint require authentication in the
> > > out-of-the-box configuration.
> > > The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL
> > > mechanism against the ApplicationRealm and require users to run the
> > > add-user script.
> > > This would achieve two goals:
> > > - secure out-of-the-box configuration, which is always a good idea
> > > - access to the "protected" schema and script caches which is prevented
> > > when not on loopback on non-authenticated endpoints.
> > >
> > > Tristan
> > > --
> > > Tristan Tarrant
> > > Infinispan Lead
> > > JBoss, a division of Red Hat
> > > ___
> > > infinispan-dev mailing list
> > > infinispan-dev@lists.jboss.org
> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> >
> > ___
> > infinispan-dev mailing list
> > infinispan-dev@lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> > ___
> > infinispan-dev mailing list
> > infinispan-dev@lists

Re: [infinispan-dev] Native Infinispan Multimap support

2017-04-13 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 5 Apr 2017, at 10:05, Sebastian Laskawiec  wrote:
> 
> I love the idea of starting with a simple interface, so +1000 from me. 
> 
> I'm also assuming that our new MultiMap will be accessible in both Embedded 
> and Client/Server mode, am I correct? I also think CacheMultimap should 
> extend Iterable. I suspect some of our users might want to use for-each loop 
> with it.

Hmmm, that would only really work for a synchronous API version. For async 
you'd need something like the Traversable that we did for the functional map experiment.

> Finally, we also need to think about some integration bits (maybe not for the 
> initial implementation but it might be beneficial to create JIRAs for them). 
> With CDI and Spring support we can make them super easy to use (by injecting 
> newly created instances to the user's code: @Inject CacheMultimap<K, V> 
> myMap). 
> 
> I also put some more comments below. Nice proposal Katia!
> 
> On Tue, Apr 4, 2017 at 7:09 PM William Burns  wrote:
> On Tue, Apr 4, 2017 at 11:45 AM Katia Aresti  wrote:
> Hi all,
> 
> As you probably know, Will and I are working on the vert-x infinispan 
> integration [1], where the primary goal is to make infinispan the default 
> cluster management of vert-x. (yeah!)
> Vert-x needs support for an Async Multimap. Today's implementation is a 
> wrapper on a normal Cache where only Cache Key's are used to implement the 
> multi map [2].
> This is not very efficient, so after trying some other alternative 
> implementations [3] that don't fully work (injection not working), Will and I 
> have come to the conclusion that it might be a good idea to start having our 
> own native CacheMultimap. This first multimap won't support duplicate values 
> per key.
> 
> As a quick start, the smallest multimap we need should implement the 
> following interface :
> 
> I agree that having a very slim API to start should be better since we know 
> how much trouble we get into implementing a very large API like ConcurrentMap 
> :) 
> public interface CacheMultimap<K, V> {
>  V put(K key, V value);
> This should probably return a boolean or Void. I am leaning towards the 
> first, but I am open either way.
> 
> Could you please tell me more about why you are suggesting boolean or void? 
> Returning the previous value would make it more similar to a Map.
> 
>  Collection<V> get(K key);
> 
>  boolean remove(K key, V value);
> We probably want a `boolean remove(K key)` method as well that removes all 
> values mapped to the given key.
> 
> +1
> 
> }
> CacheMultimapImpl will be a wrapper on a normal Cache, similar to [3].
> 
> We could add a new method in EmbeddedCacheManager.java
> 
>  CacheMultimap<K, V> getCacheMultimap(String cacheName, boolean 
> createIfAbsent);
> 
> How about the other way around? Something like:
> static <K, V> CacheMultimap<K, V> CacheMultimap.create(BasicCache<K, V> cache);
> 
> This way we would avoid dependency from DefaultCacheManager to CacheMultimap. 
> If we wanted to support both Embedded/Client Server mode we would probably 
> need to use BasicCache as a parameter. The last argument for this solution is 
> that creating producers in CDI/Spring would be trivial (we would just need to 
> provide a generic producer method and with some luck that would be it).
> 
> 
> I was thinking maybe this would exist in a separate module (outside of core)? 
> or a class that wraps (similar to DistributedExecutor) instead. My worry is 
> about transactions, since the entry point to that is through Cache interface. 
> The other option is we could add a `getCache` method on the `CacheMultiMap`.
> 
> If we want to support both Embedded/Client Server mode, it should go to 
> commons. Otherwise I would vote for core.
> 
> 
> 
> 
> Implementation will create a cache as always and return a new 
> CacheMultimapImpl(cache). 
> 
> What do you think ? Please fell free to suggest any other alternative or idea.
> 
> Cheers
> 
> Katia
> 
> [1] https://github.com/vert-x3/vertx-infinispan
> 
> [2] 
> https://github.com/vert-x3/vertx-infinispan/blob/master/src/main/java/io/vertx/ext/cluster/infinispan/impl/InfinispanAsyncMultiMap.java
> 
> [3] https://gist.github.com/karesti/194bb998856d4a2828d83754130ed79c
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> -- 
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Native Infinispan Multimap support

2017-04-13 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 6 Apr 2017, at 11:04, Radim Vansa  wrote:
> 
> On 04/06/2017 12:15 AM, Katia Aresti wrote:
>> 
>> 
>> On Wed, Apr 5, 2017 at 9:56 AM, Radim Vansa > <mailto:rva...@redhat.com>> wrote:
>> 
>>   On 04/04/2017 06:40 PM, William Burns wrote:
>>> 
>>> 
>>> On Tue, Apr 4, 2017 at 11:45 AM Katia Aresti >   <mailto:kare...@redhat.com>
>>> <mailto:kare...@redhat.com <mailto:kare...@redhat.com>>> wrote:
>>> 
>>>   Hi all,
>>> 
>>>   As you probably know, Will and I are working on the vert-x
>>>   infinispan integration [1], where the primary goal is to make
>>>   infinispan the default cluster management of vert-x. (yeah!)
>>>   Vert-x needs support for an Async Multimap. Today's
>>   implementation
>>>   is a wrapper on a normal Cache where only Cache Key's are
>>   used to
>>>   implement the multi map [2].
>>>   This is not very efficient, so after trying some other
>>   alternative
>>>   implementations [3] that don't fully work (injection not
>>   working),
>>>   Will and I have come to the conclusion that it might be a good
>>>   idea to start having our own native CacheMultimap. This first
>>>   multimap won't support duplicate values on key's.
>>> 
>>>   As a quick start, the smallest multimap we need should implement
>>>   the following interface :
>>> 
>>> I agree that having a very slim API to start should be better
>>   since we
>>> know how much trouble we get into implementing a very large API like
>>> ConcurrentMap :)
>>> 
>>>   public interface CacheMultimap {
>>> 
>> 
>>   I don't see anything async in this interface. If that's async, provide
>>   CompletableFuture return values.
>>   I am also considering if we want any fire & forget variants for these
>>   operations, but since we have to do retries to achieve consistency
>>   (and
>>   therefore we need some messages from owners to originator), I wouldn't
>>   include them.
>> 
>> 
>> Today's vert-x API calls the vertx.executeBlocking(future => cache...)
>> 
>> I considered the option of CompletableFuture, but for simplicity I 
>> suggested the basic method.
>> Today's CacheAPI makes a difference between "put" and "putAsync". 
>> Would you call the interface CacheMultimapAsync or CacheMultimap with 
>> addAsyc method ?
> 
> "In a perfect world, there will be no war or hunger, all APIs will be 
> written asynchronously and bunny rabbits will skip hand-in-hand with 
> baby lambs across sunny green meadows." (quoting Vert.x docs)
> 
> While minimalistic API is a good way to start, it shouldn't contain 
> anything we'd want to get rid of in close future. And especially since 
> the main drive for multimaps is Vert.x which consumes asynchronous APIs 
> (and has support for legacy synchronous APIs, the executeBlocking 
> method), we should have the design adapted to that from the beginning.

Amen!

> CompletableFuture is not a rocket science, and you can use the already 
> asynchronous Infinispan internals.

Indeed! 

CompletableFuture is good. 

In hindsight, I would have maybe chosen java.util.concurrent.CompletionStage 
since it's more flexible (interface vs class), and doesn't bring in 
java.util.concurrent.Future which contains blocking methods.

CompletableFuture/CompletionStage works for single returns. The bigger problem 
is when you want multiple returns asynchronously. Here a can of worms opens up, 
e.g. do you push the results? do you pull the results?

For the functional map API, we experimented with a pull model using 
Traversable. A push model is harder to implement and non-trivial, and there 
you're getting into Rx territory.
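
For what it's worth, a minimal sketch (illustrative only, not the actual 
proposal) of an async-only multimap on top of CompletionStage:

import java.util.Collection;
import java.util.concurrent.CompletionStage;

public interface AsyncCacheMultimap<K, V> {
    CompletionStage<Void> put(K key, V value);
    CompletionStage<Collection<V>> get(K key);
    CompletionStage<Boolean> remove(K key, V value);
}

Callers would then compose on the returned stage, e.g. 
multimap.get(key).thenAccept(values -> ...), rather than blocking on it.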

> I don't think we should have two interfaces, I believe that single 
> interface with async methods only is absolutely sufficient.

^ I'm not so sure actually... Both sets of methods are used for different use 
cases. Conceptually and from a user's perspective, I'd rather have separate 
interfaces since I'd not expect calls to both use cases to be interleaved.

> Though I 
> wouldn't add the *Async suffix at all there. If someone wants to execute 
> the methods synchronously he can call .get() or .join() - just 6/7 
> characters more.

^ We shouldn't promote calling Future.get() for asynchronous APIs since it goes 
against everything that async APIs stand for ;)

> 
>> 
>>>   V put(K key,V val

Re: [infinispan-dev] Hot Rod secured by default

2017-04-13 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 13 Apr 2017, at 09:50, Gustavo Fernandes  wrote:
> 
> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarreño  wrote:
> Hi all,
> 
> As per some discussions we had yesterday on IRC w/ Tristan, Gustavo and 
> Sebastian, I've created a docker image snapshot that reverts the change that 
> stopped protected caches from requiring security to be enabled [1].
> 
> In other words, I've removed [2]. The reason for temporarily doing that is 
> because with the change as is, the changes required for a default server 
> distro require that the entire cache manager's security is enabled. This 
> in turn creates a lot of problems with health and running checks used by 
> Kubernetes/OpenShift amongst other things. 
> 
> Judging from our discussions on IRC, the idea is for such a change to be 
> present in 9.0.1, but I'd like to get final confirmation from Tristan et al.
> 
> 
> +1
> 
> Regarding the "security by default" discussion, I think we should ship 
> configurations cloud.xml, clustered.xml and standalone.xml with security 
> enabled and disabled variants, and let users
> decide which one to pick based on the use case.

I think that's a better idea. 

We could by default have a secured one, but switching to an insecure 
configuration should be doable with minimal effort, e.g. just switching config 
file.

As highlighted above, any secured configuration should work out-of-the-box with 
our docker images, e.g. WRT healthy/running checks.

Cheers,

> 
> Gustavo.
> 
>  
> Cheers,
> 
> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ (9.0.1-SNAPSHOT 
> tag for anyone interested)
> [2] 
> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
> > On 30 Mar 2017, at 14:25, Tristan Tarrant  wrote:
> >
> > Dear all,
> >
> > after a mini chat on IRC, I wanted to bring this to everybody's attention.
> >
> > We should make the Hot Rod endpoint require authentication in the
> > out-of-the-box configuration.
> > The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL
> > mechanism against the ApplicationRealm and require users to run the
> > add-user script.
> > This would achieve two goals:
> > - secure out-of-the-box configuration, which is always a good idea
> > - access to the "protected" schema and script caches which is prevented
> > when not on loopback on non-authenticated endpoints.
> >
> > Tristan
> > --
> > Tristan Tarrant
> > Infinispan Lead
> > JBoss, a division of Red Hat
> > ___
> > infinispan-dev mailing list
> > infinispan-dev@lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Hot Rod secured by default

2017-04-13 Thread Galder Zamarreño
Hi all,

As per some discussions we had yesterday on IRC w/ Tristan, Gustavo and 
Sebastian, I've created a docker image snapshot that reverts the change that 
stopped protected caches from requiring security to be enabled [1]. 

In other words, I've removed [2]. The reason for temporarily doing that is 
because with the change as is, the changes required for a default server distro 
require that the entire cache manager's security is enabled. This in turn 
creates a lot of problems with health and running checks used by 
Kubernetes/OpenShift amongst other things.

Judging from our discussions on IRC, the idea is for such a change to be present 
in 9.0.1, but I'd like to get final confirmation from Tristan et al.

Cheers,

[1] https://hub.docker.com/r/galderz/infinispan-server/tags/ (9.0.1-SNAPSHOT 
tag for anyone interested)
[2] 
https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118
--
Galder Zamarreño
Infinispan, Red Hat

> On 30 Mar 2017, at 14:25, Tristan Tarrant  wrote:
> 
> Dear all,
> 
> after a mini chat on IRC, I wanted to bring this to everybody's attention.
> 
> We should make the Hot Rod endpoint require authentication in the 
> out-of-the-box configuration.
> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL 
> mechanism against the ApplicationRealm and require users to run the 
> add-user script.
> This would achieve two goals:
> - secure out-of-the-box configuration, which is always a good idea
> - access to the "protected" schema and script caches which is prevented 
> when not on loopback on non-authenticated endpoints.
> 
> Tristan
> -- 
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Executing server tasks that contain POJOs

2017-04-07 Thread Galder Zamarreño
FYI, I've created these JIRAs to improve some of the issues highlighted here:
https://issues.jboss.org/browse/ISPN-7710
https://issues.jboss.org/browse/ISPN-7711
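
As an aside, the protofile registration step discussed below can also be done 
by writing the schema source into the server's protobuf metadata cache; a 
sketch (the key name is illustrative):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class RegisterProtoSchema {
    public static void register(RemoteCacheManager rcm, String protoSource) {
        RemoteCache<String, String> metadataCache =
              rcm.getCache("___protobuf_metadata");
        metadataCache.put("words.proto", protoSource);
    }
}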

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 7 Apr 2017, at 14:20, Galder Zamarreño  wrote:
> 
> 
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
>> On 30 Mar 2017, at 18:33, Dan Berindei  wrote:
>> 
>> On Thu, Mar 30, 2017 at 3:51 PM, Galder Zamarreño  wrote:
>>> Hi all,
>>> 
>>> For a demo I'm giving next week, I'd like to show how to use distributed 
>>> streams via a remote server task. All server tasks that we have in 
>>> testsuite rely on primitives but in my case I wanted to use POJOs.
>>> 
>>> To do that, I needed to get compatibility mode working in such a way that 
>>> those POJOs could be unmarshalled for the server task. Since in another 
>>> demo I'm showing Protostream-based POJOs, I thought I'd try to use that as 
>>> a mechanism to unmarshall POJOs server-side.
>>> 
>>> We have a test for such a scenario [1], but the reality (running on a proper 
>>> server) is anything but simple. Here's a list of things I've found out 
>>> while creating a WordCount example that relies on a POJO:
>>> 
>>> 1. Out of the box, it's impossible to set compatibility marshaller to 
>>> org.infinispan.query.remote.CompatibilityProtoStreamMarshaller [1] because 
>>> "org.infinispan.main" classloader can't access that class. I worked around 
>>> that by tweaking the module.xml to have an optional dependency to 
>>> "org.infinispan.remote-query.server" module.
>>> 
>> 
>> I know Sanne also wanted to add one of the query modules as an
>> optional dependency to the core module for a similar reason, but it
>> seems really hacky.
> 
> Yeah, might be a bit hacky but it's just a configuration change as opposed to 
> a code change.
> 
>> Could the server create the GlobalConfigurationBuilder with a
>> classloader that has access to the query module instead?
> 
> Hmmm, not sure. I mean, in theory you could plug in any marshaller, even 
> com.acme.WhateverMarshaller, so it's not really an issue about having access 
> to the query module, but about having access to a module that contains that 
> marshaller.
> 
>> Alternatively, I know ModularClassResolver prefixes class names with
>> the slot and module name, and can load a class from any module. Maybe
>> we could also allow a slot:module:class format everywhere the
>> configuration currently accepts a class name?
> 
> That could be handy. I'm about to create a JIRA on this, so I'll add it as an 
> idea to it.
> 
>> 
>>> 2. After doing that, I had to register the protofile and associated classes 
>>> remotely in the server. Again, there's no out of the box mechanism for 
>>> that, so I created a remote server task that would do that [3].
>>> 
>>> 3. Finally, with all that in place, I was able to complete the WordCount 
>>> test [4] with a final caveat: the return of the word count, and words 
>>> protofile registration, tasks return objects that are not marshalled by the 
>>> compatibility marshaller, so I had to make sure that the remote cache 
>>> manager used for those tasks uses the default marshaller.
>>> 
>>> Clearly we need to improve on this, and we have plans to address these 
>>> issues (with new upcoming transcoding capabilities), but I thought it'd be 
>>> worth mentioning the problems found in case anyone else encounters them 
>>> before transcoding is in place.
>>> 
>>> Cheers,
>>> 
>>> [1] 
>>> https://github.com/galderz/datagrid-patterns/blob/master/server-config/domain/domain.xml#L139
>>> [2] 
>>> https://github.com/galderz/datagrid-patterns/blob/master/server-config/org.infinispan.main_module.xml#L18
>>> [3] 
>>> https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-server/src/main/java/test/WordsProtoTask.java
>>> [4] 
>>> https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-client/src/test/java/test/WordCountTest.java
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>> 
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Executing server tasks that contain POJOs

2017-04-07 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 30 Mar 2017, at 18:33, Dan Berindei  wrote:
> 
> On Thu, Mar 30, 2017 at 3:51 PM, Galder Zamarreño  wrote:
>> Hi all,
>> 
>> For a demo I'm giving next week, I'd like to show how to use distributed 
>> streams via a remote server task. All server tasks that we have in testsuite 
>> rely on primitives but in my case I wanted to use POJOs.
>> 
>> To do that, I needed to get compatibility mode working in such a way that 
>> those POJOs could be unmarshalled for the server task. Since in another demo 
>> I'm showing Protostream based POJOs, I thought I'd try to use that as 
>> a mechanism to unmarshall POJOs server side.
>> 
>> We have a test for such a scenario [1], but the reality (running on a proper 
>> server) is anything but that simple. Here's a list of things I've found out 
>> while creating a WordCount example that relies on a POJO:
>> 
>> 1. Out of the box, it's impossible to set compatibility marshaller to 
>> org.infinispan.query.remote.CompatibilityProtoStreamMarshaller [1] because 
>> "org.infinispan.main" classloader can't access that class. I worked around 
>> that by tweaking the module.xml to have an optional dependency to 
>> "org.infinispan.remote-query.server" module.
>> 
> 
> I know Sanne also wanted to add one of the query modules as an
> optional dependency to the core module for a similar reason, but it
> seems really hacky.

Yeah, might be a bit hacky but it's just a configuration change as opposed to a 
code change.

> Could the server create the GlobalConfigurationBuilder with a
> classloader that has access to the query module instead?

Hmmm, not sure. I mean, in theory you could plug in any marshaller, even 
com.acme.WhateverMarshaller, so it's not really an issue about having access to 
the query module, but about having access to a module that contains that 
marshaller.

> Alternatively, I know ModularClassResolver prefixes class names with
> the slot and module name, and can load a class from any module. Maybe
> we could also allow a slot:module:class format everywhere the
> configuration currently accepts a class name?

That could be handy. I'm about to create a JIRA on this, so I'll add it as an 
idea to it.

> 
>> 2. After doing that, I had to register the protofile and associated classes 
>> remotely in the server. Again, there's no out of the box mechanism for that, 
>> so I created a remote server task that would do that [3].
>> 
>> 3. Finally, with all that in place, I was able to complete the WordCount 
>> test [4] with a final caveat: the return of the word count, and words 
>> protofile registration, tasks return objects that are not marshalled by the 
>> compatibility marshaller, so I had to make sure that the remote cache 
>> manager used for those tasks uses the default marshaller.
>> 
>> Clearly we need to improve on this, and we have plans to address these 
>> issues (with new upcoming transcoding capabilities), but I thought it'd be 
>> worth mentioning the problems found in case anyone else encounters them 
>> before transcoding is in place.
>> 
>> Cheers,
>> 
>> [1] 
>> https://github.com/galderz/datagrid-patterns/blob/master/server-config/domain/domain.xml#L139
>> [2] 
>> https://github.com/galderz/datagrid-patterns/blob/master/server-config/org.infinispan.main_module.xml#L18
>> [3] 
>> https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-server/src/main/java/test/WordsProtoTask.java
>> [4] 
>> https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-client/src/test/java/test/WordCountTest.java
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] In Memory Data Grid Patterns Demos from Devoxx France!

2017-04-07 Thread Galder Zamarreño
Hi all,

I've just got back from Devoxx France where Emmanuel and I presented about 
in-memory data grid use cases, and during this talk we presented a couple of 
demos on using Infinispan for offline analytics and real-time data processing.

I've just created a new blog post with some very quick instructions for you to 
run these demos:
http://blog.infinispan.org/2017/04/in-memory-data-grid-patterns-demos-from.html

Give them a try and let us know what you think!

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Proto file for indexed and non-indexed use case?

2017-04-04 Thread Galder Zamarreño
The cache for the second use case is already non-indexed. Is that enough to 
make sure the annotations are ignored?

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 3 Apr 2017, at 18:58, Sanne Grinovero  wrote:
> 
> Hi Galder,
> 
> did you consider using a non-indexed cache for the second case?
> 
> Thanks,
> Sanne
> 
> 
> On 3 April 2017 at 16:44, Galder Zamarreño  wrote:
>> Hi Adrian,
>> 
>> I had a question regarding proto files. I have a single domain of objects 
>> that I want to use for two different use cases.
>> 
>> In the first use case, I want the proto files to be indexed so I define the 
>> comments and related @Indexed/@Field...etc annotations.
>> 
>> In the second use case, I'm merely using proto files as a way to achieve 
>> compatibility mode, and I don't want any indexing to be done at all (cache 
>> is distributed with only compatibility and protostream marshaller enabled).
>> 
>> Do I need a separate .proto file for this second use case where I remove the 
>> commented sections that enable indexing? Or can I use the one for the first 
>> use case? I really want to avoid any indexing happening in the second use 
>> case since it'd slow down things for no reason.
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Proto file for indexed and non-indexed use case?

2017-04-03 Thread Galder Zamarreño
Hi Adrian,

I had a question regarding proto files. I have a single domain of objects that 
I want to use for two different use cases. 

In the first use case, I want the proto files to be indexed so I define the 
comments and related @Indexed/@Field...etc annotations. 

In the second use case, I'm merely using proto files as a way to achieve 
compatibility mode, and I don't want any indexing to be done at all (cache is 
distributed with only compatibility and protostream marshaller enabled).

Do I need a separate .proto file for this second use case where I remove the 
commented sections that enable indexing? Or can I use the one for the first use 
case? I really want to avoid any indexing happening in the second use case 
since it'd slow down things for no reason.
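
For reference, this is the shape of .proto file I mean — the indexing hints 
live in comments, roughly like this sketch (message and field names made up):

/* @Indexed */
message Word {

   /* @Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO) */
   required string text = 1;

   optional int32 frequency = 2;
}

The question boils down to whether those comment annotations are inert when 
the cache itself is not indexed.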

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Executing server tasks that contain POJOs

2017-03-30 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 30 Mar 2017, at 17:15, Gustavo Fernandes  wrote:
> 
> 
> 
> On Thu, Mar 30, 2017 at 1:51 PM, Galder Zamarreño  wrote:
> Hi all,
> 
> For a demo I'm giving next week, I'd like to show how to use distributed 
> streams via a remote server task. All server tasks that we have in testsuite 
> rely on primitives but in my case I wanted to use POJOs.
> 
> To do that, I needed to get compatibility mode working in such a way that those 
> POJOs could be unmarshalled for the server task. Since in another demo I'm 
> showing Protostream based POJOs, I thought I'd try to use that as a mechanism 
> to unmarshall POJOs server side.
> 
> We have a test for such a scenario [1], but the reality (running on a proper 
> server) is anything but that simple. Here's a list of things I've found out while 
> creating a WordCount example that relies on a POJO:
> 
> 1. Out of the box, it's impossible to set compatibility marshaller to 
> org.infinispan.query.remote.CompatibilityProtoStreamMarshaller [1] because 
> "org.infinispan.main" classloader can't access that class. I worked around 
> that by tweaking the module.xml to have an optional dependency to 
> "org.infinispan.remote-query.server" module.
> 
> 2. After doing that, I had to register the protofile and associated classes 
> remotely in the server. Again, there's no out of the box mechanism for that, 
> so I created a remote server task that would do that [3].
> 
> 
> AFAICT, you should be able to do that by doing a PUT in the Protobuf_Metadata 
> cache, which entails having auth enabled. This cache should be REPL_SYNC, so 
> no need to run a server task.

Good point but not so sure it completely removes the need for the task. The 
task does two things:

1. Call ProtobufMetadataManager.registerProtofile, which as you say could be 
swapped with a cache.put on the metadata cache.

2. Call ProtobufMetadataManager.registerMarshaller. This goes deep into 
updating SerializationContextImpl, which seems independent of any replicated 
cache.

In fact, I had originally set up the task to execute in only one node, but 
when I did that I found that marshallers were not registered in all nodes, so I 
had to execute the task in all nodes.

I guess the task could be limited to only executing 2) in all nodes (and 
storing the protofile contents by accessing the cache remotely), but I can't 
see how to avoid the task altogether right now.
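
To make the PUT approach for 1. concrete, a rough sketch (it assumes a server 
on localhost:11222, access to the protected metadata cache, and a made-up 
schema):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.query.remote.client.ProtobufMetadataManagerConstants;

public class RegisterSchema {
   public static void main(String[] args) {
      RemoteCacheManager rcm = new RemoteCacheManager(); // localhost:11222
      try {
         // A put into the replicated ___protobuf_metadata cache registers
         // the schema on all nodes, so no server task is needed for 1.
         RemoteCache<String, String> metadata = rcm.getCache(
               ProtobufMetadataManagerConstants.PROTOBUF_METADATA_CACHE_NAME);
         metadata.put("words.proto",
               "message Word { required string text = 1; }");
      } finally {
         rcm.stop();
      }
   }
}

Registering the marshallers (2.) would still need code running in each node, 
as per the above.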

> 
>  
> 
> 3. Finally, with all that in place, I was able to complete the WordCount test 
> [4] with a final caveat: the return of the word count, and words protofile 
> registration, tasks return objects that are not marshalled by the 
> compatibility marshaller, so I had to make sure that the remote cache manager 
> used for those tasks uses the default marshaller.
> 
> Clearly we need to improve on this, and we have plans to address these issues 
> (with new upcoming transcoding capabilities), but I thought it'd be worth 
> mentioning the problems found in case anyone else encounters them before 
> transcoding is in place.
> 
> Cheers,
> 
> [1] 
> https://github.com/galderz/datagrid-patterns/blob/master/server-config/domain/domain.xml#L139
> [2] 
> https://github.com/galderz/datagrid-patterns/blob/master/server-config/org.infinispan.main_module.xml#L18
> [3] 
> https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-server/src/main/java/test/WordsProtoTask.java
> [4] 
> https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-client/src/test/java/test/WordCountTest.java
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Executing server tasks that contain POJOs

2017-03-30 Thread Galder Zamarreño
Hey Ramesh,

I don't know your use case very well, so allow me to ask you some qs:

1. When does your marshaller come into play? At the compatibility layer? Or is 
it used as a client marshaller?

1a. If it's at the compatibility layer, why can't you use 
CompatibilityProtoStreamMarshaller?

Your description below makes it sound like your marshaller does similar work to 
CompatibilityProtoStreamMarshaller, hence the questions, to see whether it 
could fit your use case.
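
For readers following the thread, a hand-written, per-type ProtoStream 
marshaller — the kind Ramesh is replacing with a dynamic one — looks roughly 
like this sketch (the Word type is made up):

import java.io.IOException;
import org.infinispan.protostream.MessageMarshaller;

public class WordMarshaller implements MessageMarshaller<WordMarshaller.Word> {
   // Minimal POJO, nested here to keep the sketch self-contained.
   public static class Word {
      final String text;
      public Word(String text) { this.text = text; }
   }

   @Override
   public String getTypeName() { return "test.Word"; } // matches the .proto

   @Override
   public Class<? extends Word> getJavaClass() { return Word.class; }

   @Override
   public Word readFrom(ProtoStreamReader reader) throws IOException {
      return new Word(reader.readString("text"));
   }

   @Override
   public void writeTo(ProtoStreamWriter writer, Word word) throws IOException {
      writer.writeString("text", word.text);
   }
}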

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 30 Mar 2017, at 16:30, Ramesh Reddy  wrote:
> 
> Galder, 
> 
> FWIW, I am rewriting the Teiid translator for Infinispan, where I needed a 
> portable marshaller that is simply based on the .proto file, as you describe 
> in the issues with your step #2. The use of a predefined custom Java 
> marshaller is not viable in my use case, as I want to dynamically convert 
> relational tables defined in Teiid into POJOs in Infinispan, with the 
> ability to query them. I have written such a marshaller for my use case; 
> you can see the test case at [1]. Basically I capture the metadata from the 
> .proto file and use that information when encoding/decoding the protostream; 
> *most* of the needed code is already there in the ProtoStream libraries. 
> 
> BTW, with your Task example you gave me another idea of how I can further 
> enhance this integration layer in terms of updating multiple POJOs in a 
> single call :)
> 
> Ramesh..
> 
> [1] 
> https://github.com/rareddy/infinispan/blob/master/translator-infinispan-hotrod/src/test/java/org/teiid/translator/infinispan/hotrod/TestTeiidTableMarsheller.java
> 
> 
> - Original Message -
>> Hi all,
>> 
>> For a demo I'm giving next week, I'd like to show how to use distributed
>> streams via a remote server task. All server tasks that we have in testsuite
>> rely on primitives but in my case I wanted to use POJOs.
>> 
>> To do that, I needed to get compatibility mode working in such a way that those
>> POJOs could be unmarshalled for the server task. Since in another demo I'm
>> showing Protostream based POJOs, I thought I'd try to use that as a mechanism
>> to unmarshall POJOs server side.
>> 
>> We have a test for such a scenario [1], but the reality (running on a proper
>> server) is anything but that simple. Here's a list of things I've found out
>> while creating a WordCount example that relies on a POJO:
>> 
>> 1. Out of the box, it's impossible to set compatibility marshaller to
>> org.infinispan.query.remote.CompatibilityProtoStreamMarshaller [1] because
>> "org.infinispan.main" classloader can't access that class. I worked around
>> that by tweaking the module.xml to have an optional dependency to
>> "org.infinispan.remote-query.server" module.
>> 
>> 2. After doing that, I had to register the protofile and associated classes
>> remotely in the server. Again, there's no out of the box mechanism for that,
>> so I created a remote server task that would do that [3].
>> 
>> 3. Finally, with all that in place, I was able to complete the WordCount test
>> [4] with a final caveat: the return of the word count, and words protofile
>> registration, tasks return objects that are not marshalled by the
>> compatibility marshaller, so I had to make sure that the remote cache
>> manager used for those tasks uses the default marshaller.
>> 
>> Clearly we need to improve on this, and we have plans to address these issues
>> (with new upcoming transcoding capabilities), but I thought it'd be worth
>> mentioning the problems found in case anyone else encounters them before
>> transcoding is in place.
>> 
>> Cheers,
>> 
>> [1]
>> https://github.com/galderz/datagrid-patterns/blob/master/server-config/domain/domain.xml#L139
>> [2]
>> https://github.com/galderz/datagrid-patterns/blob/master/server-config/org.infinispan.main_module.xml#L18
>> [3]
>> https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-server/src/main/java/test/WordsProtoTask.java
>> [4]
>> https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-client/src/test/java/test/WordCountTest.java
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Executing server tasks that contain POJOs

2017-03-30 Thread Galder Zamarreño
Hi all,

For a demo I'm giving next week, I'd like to show how to use distributed 
streams via a remote server task. All server tasks that we have in testsuite 
rely on primitives but in my case I wanted to use POJOs. 

To do that, I needed to get compatibility mode working in such a way that those 
POJOs could be unmarshalled for the server task. Since in another demo I'm 
showing Protostream based POJOs, I thought I'd try to use that as a mechanism 
to unmarshall POJOs server side.

We have a test for such a scenario [1], but the reality (running on a proper 
server) is anything but that simple. Here's a list of things I've found out while 
creating a WordCount example that relies on a POJO:

1. Out of the box, it's impossible to set compatibility marshaller to 
org.infinispan.query.remote.CompatibilityProtoStreamMarshaller [1] because 
"org.infinispan.main" classloader can't access that class. I worked around that 
by tweaking the module.xml to have an optional dependency to 
"org.infinispan.remote-query.server" module.

2. After doing that, I had to register the protofile and associated classes 
remotely in the server. Again, there's no out of the box mechanism for that, so 
I created a remote server task that would do that [3].

3. Finally, with all that in place, I was able to complete the WordCount test 
[4] with a final caveat: the return of the word count, and words protofile 
registration, tasks return objects that are not marshalled by the compatibility 
marshaller, so I had to make sure that the remote cache manager used for those 
tasks uses the default marshaller (see the sketches after this list).
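
A couple of sketches to make 1. and 3. above more concrete. For 1., the 
module.xml tweak boils down to a fragment along these lines (sketched from 
memory; [2] has the real file):

<module xmlns="urn:jboss:module:1.3" name="org.infinispan.main">
  <dependencies>
    ...
    <!-- optional, so core doesn't hard-depend on the query module -->
    <module name="org.infinispan.remote-query.server" optional="true"/>
  </dependencies>
</module>

And for 3., invoking the task through a client that keeps the default 
marshaller looks roughly like this (task, cache and parameter names are made 
up; [3][4] have the real code):

import java.util.HashMap;
import java.util.Map;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class RunTask {
   public static void main(String[] args) {
      // No ProtoStream marshaller configured here, so the task's return
      // value unmarshals fine; the data cache still uses ProtoStream.
      RemoteCacheManager defaultRcm = new RemoteCacheManager();
      try {
         RemoteCache<String, String> cache = defaultRcm.getCache("analytics");
         Map<String, Object> params = new HashMap<>();
         params.put("proto", "message Word { required string text = 1; }");
         Object result = cache.execute("words-proto-task", params);
         System.out.println(result);
      } finally {
         defaultRcm.stop();
      }
   }
}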

Clearly we need to improve on this, and we have plans to address these issues 
(with new upcoming transcoding capabilities), but I thought it'd be worth 
mentioning the problems found in case anyone else encounters them before 
transcoding is in place.

Cheers,

[1] 
https://github.com/galderz/datagrid-patterns/blob/master/server-config/domain/domain.xml#L139
[2] 
https://github.com/galderz/datagrid-patterns/blob/master/server-config/org.infinispan.main_module.xml#L18
[3] 
https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-server/src/main/java/test/WordsProtoTask.java
[4] 
https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream/tasks-client/src/test/java/test/WordCountTest.java
--
Galder Zamarreño
Infinispan, Red Hat


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Stream operations under lock

2017-03-28 Thread Galder Zamarreño
--
Galder Zamarreño
Infinispan, Red Hat

> On 21 Mar 2017, at 18:50, William Burns  wrote:
> 
> On Tue, Mar 21, 2017 at 1:42 PM William Burns  wrote:
>> On Tue, Mar 21, 2017 at 12:53 PM Radim Vansa  wrote:
>>> On 03/21/2017 04:37 PM, William Burns wrote:
>>>> Some users have expressed the need to have some sort of forEach
>>>> operation that is performed where the Consumer is called while holding
>>>> the lock for the given key and subsequently released after the
>>>> Consumer operation completes.
>>> 
>>> Seconding Dan's question - is that intended to be able to modify the
>>> entry? In my opinion, sending a function that will work on the
>>> ReadWriteEntryView directly to the node is the only reasonable request.
>>> I wouldn't like to see blocking operations in there.
>> 
>> Hrmm the user can use the FunctionalMap interface for this then it
>> seems? I wonder if this should just be the going in API. I will need
>> to discuss with Galder the semantics of the evalAll/evalMany methods.
> 
> Actually looking at evalAll it seems it doesn't scale as it keeps all
> entries in memory at once, so this is only for caches with a limited
> amount of entries.

^ I might be wrong but didn't forEach work this way? I probably looked at 
that when trying to implement evalAll

>>>> Due to the nature of how streams work with retries and performing the
>>>> operation on the primary owner, this works out quite well with forEach
>>>> to be done in an efficient way.
>>>> 
>>>> The problem is that this only really works well with non tx and
>>>> pessimistic tx. This obviously leaves out optimistic tx, which at
>>>> first I was a little worried about. But after thinking about it more,
>>>> this prelocking and optimistic tx don't really fit that well together
>>>> anyways. So I am thinking whenever this operation is performed it
>>>> would throw an exception not letting the user use this feature in
>>>> optimistic transactions.
>>> 
>>> How exactly reading streams interacts with transactions? Does it wrap
>>> read entries into context? This would be a scalability issue.
>> 
>> It doesn't wrap read entries into the context for that exact reason.
>> It does however use existing entries in the context to override ones
>> in memory/store.
>> 
>>> I agree that "locking" should not be exposed with optimistic
>>> transactions.
>> 
>> Yeah I can't find a good way to do this really and it seems to be
>> opposite of what optimistic transactions are.
>> 
>>> With pessimistic transactions, how do you expect to handle locking
>>> order? For regular operations, user is responsible for setting up some
>>> locking order in order to not get a deadlock. With pessimistic
>>> transaction, it's the cache itself who will order the calls. Also, if
>>> you lock anything that is read, you just end up locking everything (or,
>>> getting a deadlock). If you don't it's the same as issuing the lock and
>>> reading again (to check the locked value) - but you'd do that internally
>>> anyway. Therefore, I don't feel well about pessimistic transactions
>>> neither.
>> 
>> The lock is done per key only for each invocation. There is no ordering
>> as only one is obtained at a time before it goes to the next. If the
>> user then acquires a lock for another key while in the Consumer this
>> could cause a deadlock if the inverse occurs on a different
>> thread/node, but this is on the user. It is the same as it is today
>> really, except we do the read lock for them before invoking their
>> Consumer.
>> 
>>>> Another question is what does the API for this look like. I was
>>>> debating between 3 options myself:
>>>> 
>>>> 1. AdvancedCache.forEachWithLock(BiConsumer<Cache<K, V>,
>>>> CacheEntry<K, V>> consumer)
>>>> 
>>>> This requires the least amount of changes, however the user can't
>>>> customize certain parameters that CacheStream currently provides
>>>> (listed below - big one being filterKeys).
>>>> 
>>>> 2. CacheStream.forEachWithLock(BiConsumer<Cache<K, V>,
>>>> CacheEntry<K, V>> consumer)
>>>> 
>>>> This method would only be allowed to be invoked on the Stream if no
>>>> other intermediate operations were invoked, otherwise an exception
>>>> would be thrown. This still gives us access to all of the CacheStream
>>>> methods that aren't on the Stream interface (ie.
>>>> sequentialDistribution, parallelDistribution, parallel, sequential,
>>>> filterKeys, filterKeySegments, distributedBatchSize,
>>>> disableRehashAware, timeout).
>>> 
>>> For both options, I don't like Cache being passed around. You should
>>> modify the CacheEntry (or some kind of view) directly.
>> 
>> I don't know for sure if that is sufficient for the user. Sometimes
>> they may modify another Cache given the value in this one for example,
>> which they could access from the CacheManager of that Cache. Maybe
>> Tristan knows more about some use cases.
>> 
>>> Radim
>> 
>>>> 3. LockedStream<CacheEntry<K, V>> AdvancedCache.lockedStream()
>>>> 
>>>> This requires the most changes, however the API would be the most
>>>> explicit. In this case the LockedStream would only have the methods
>>>> on it that are able to be invoked as noted above and forEach.
>>>> 
>>>> I personally feel that #3 might be the cleanest, but obviously
>>>> requires adding more classes. Let me know what you guys think and if
>>>> you think the optimistic exclusion is acceptable.
>>>> 
>>>> Thanks,
>>>> 
>>>> - Will
>>>> ___
>>>> infinispan-dev mailing list
>>>> infinispan-dev@lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> 
>>> --
>>> Radim Vansa 
>>> JBoss Performance Team
>>> ___
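
Since evalAll and functions over ReadWriteEntryView are central to this 
exchange, here is a rough sketch of that functional route (Infinispan 9's 
experimental API; exact package and factory names may differ slightly):

import org.infinispan.AdvancedCache;
import org.infinispan.commons.api.functional.FunctionalMap.ReadWriteMap;
import org.infinispan.functional.impl.FunctionalMapImpl;
import org.infinispan.functional.impl.ReadWriteMapImpl;

public class IncrementAll {
   // Applies a read-write function to every entry; the function operates
   // on a ReadWriteEntryView instead of locking and shipping the entry.
   static void incrementAll(AdvancedCache<String, Integer> cache) {
      ReadWriteMap<String, Integer> rwMap =
            ReadWriteMapImpl.create(FunctionalMapImpl.create(cache));
      rwMap.evalAll(view -> {
         view.set(view.find().orElse(0) + 1);
         return null;
      }).forEach(r -> { }); // Traversable is lazy; consume it to execute
   }
}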

Re: [infinispan-dev] Branching proposal

2017-03-28 Thread Galder Zamarreño
Nice one-liner. The fact that we always put the JIRA id helps.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 27 Mar 2017, at 14:36, Dan Berindei  wrote:
> 
> I use something like this to check what tags contain a particular fix:
> 
> git tag --contains $(git log --grep <JIRA-ID> -1 --format="%h" master)
> 
> True, it's a bit longer, but it stays in the bash/zsh history :)
> 
> Cheers
> Dan
> 
> 
> On Mon, Mar 27, 2017 at 1:33 PM, Radim Vansa  wrote:
>> If you can't merge a commit (based on 9.0.x) to master cleanly, do you
>> need to file another PR anyway? Then the lag to get some code to master
>> increases a lot. I am not sure how useful git tag --contains <sha> is
>> if you cannot be sure that you'll find all occurrences due to this kind
>> of issue.
>> 
>> R.
>> 
>> On 03/27/2017 11:33 AM, Sebastian Laskawiec wrote:
>>> Hey!
>>> 
>>> We are about to start working on 9.1.x and 9.2.y branches so I would
>>> like to propose alternative merging strategy.
>>> 
>>> Our current workflow looks like this:
>>> 
>>> X - new commit
>>> X` - cherry pick to maintenance branch
>>> --+---+---X- master
>>>  |\--X` 9.2.x
>>>  \---X``--- 9.1.x
>>> 
>>> Each commit needs to be reviewed in master branch and backported to
>>> the maintenance branches. From maintenance perspective this is a bit
>>> painful, since in above example we need to get 3 times through PR
>>> queue. Also it's worth to mention that X is not X` nor X``.
>>> Cherry-picking creates a copy of a commit. This makes some useful
>>> tricks (like git tag --contains ) a bit harder to use. Finally,
>>> this approach allows the codebase to diverge from maintenance branches
>>> very fast (someone might just forget to backport some of the
>>> refactoring stuff).
>>> 
>>> The proposal:
>>> 
>>> X, Y - new commits
>>> / - merge commits
>>> --+-+--//--- master
>>>  |  \/---Y/ 9.2.x
>>>  \-X/-- 9.1.x
>>> 
>>> With the proposal, a developer should always implement a given feature
>>> in the lowest possible maintenance branch. Then we will run a set of
>>> merges from 9.1.x into 9.2.x and finally into master. The biggest
>>> advantage of this approach is that given functionality (identified by
>>> a commit) will have the same SHA1 for all branches. This will allow
>>> all tools (like the previously mentioned `git tag --contains <sha>`) to work.
>>> There are also some further implications of this approach:
>>> 
>>>  * Merging commits should be performed very often (even automatically
>>>in the night (if merged without any problems)).
>>>  * After releasing each maintenance release, someone will need to do
>>>a merge with strategy `ours` (`git merge -s ours upstream/9.2.x`).
>>>This way we will not have to solve version conflicts in poms.
>>>  * Since there is no nice way to rebase a merge commit, they should
>>>be pushed directly into the master branch (without review, without
>>>CI). After the merge, HEAD will change and CI will
>>>automatically pick the build. Remember, merges should be done very
>>>often. So I assume there won't be any problems most of the times.
>>>  * Finally, with this approach the code diverges slightly slower (at
>>>least from my experience). Mainly because we don't need to
>>>remember to cherry-pick individual commits. They are automatically
>>>"taken" by a merge.
>>> 
>>> From my past experience, this strategy works pretty nice and can be
>>> almost fully automated. It significantly lowers the maintenance pain
>>> around cherry-picks. However there is nothing for free, and we would
>>> need to get used to pushing merges directly into master (which is fine
>>> to me but some of you might not like it).
>>> 
>>> Thanks,
>>> Sebastian
>>> 
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> 
>> --
>> Radim Vansa 
>> JBoss Performance Team
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Stream operations under lock

2017-03-28 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 22 Mar 2017, at 10:51, Radim Vansa  wrote:
> 
> On 03/21/2017 06:50 PM, William Burns wrote:
>> 
>> 
>> On Tue, Mar 21, 2017 at 1:42 PM William Burns  wrote:
>> 
>>On Tue, Mar 21, 2017 at 12:53 PM Radim Vansa  wrote:
>> 
>>On 03/21/2017 04:37 PM, William Burns wrote:
>>> Some users have expressed the need to have some sort of forEach
>>> operation that is performed where the Consumer is called
>>while holding
>>> the lock for the given key and subsequently released after the
>>> Consumer operation completes.
>> 
>>Seconding Dan's question - is that intended to be able to
>>modify the
>>entry? In my opinion, sending a function that will work on the
>>ReadWriteEntryView directly to the node is the only reasonable
>>request.
>>I wouldn't like to see blocking operations in there.
>> 
>> 
>>Hrmm the user can use the FunctionalMap interface for this then it
>>seems? I wonder if this should just be the going in API. I will
>>need to discuss with Galder the semantics of the evalAll/evalMany
>>methods.
>> 
>> 
>> Actually looking at evalAll it seems it doesn't scale as it keeps all 
>> entries in memory at once, so this is only for caches with a limited 
>> amount of entries.
> 
> Don't look into the implementation; I think Galder has focused more on 
> the API side than on having an optimal implementation.

That's why it's marked experimental ;p

> IMO there's no reason 
> evalAll should load all the entries into memory in non-transactional mode.
> 
>> 
>>> 
>>> Due to the nature of how streams work with retries and
>>performing the
>>> operation on the primary owner, this works out quite well
>>with forEach
>>> to be done in an efficient way.
>>> 
>>> The problem is that this only really works well with non tx and
>>> pessimistic tx. This obviously leaves out optimistic tx,
>>which at
>>> first I was a little worried about. But after thinking about
>>it more,
>>> this prelocking and optimistic tx don't really fit that well
>>together
>>> anyways. So I am thinking whenever this operation is
>>performed it
>>> would throw an exception not letting the user use this
>>feature in
>>> optimistic transactions.
>> 
>>How exactly reading streams interacts with transactions? Does
>>it wrap
>>read entries into context? This would be a scalability issue.
>> 
>> 
>>It doesn't wrap read entries into the context for that exact
>>reason. It does however use existing entries in the context to
>>override ones in memory/store.
>> 
> 
> Uuh, so you end up with a copy of the cache in a single invocation 
> context, without any means to flush it. I think that we need to add 
> InvocationContext.current().forget(key) API (throwing exception if the 
> entry was modified) or something like that, even for the regular 
> streams. Maybe an override for filter methods, too, because you want to 
> pass a nice predicate, but you can't just forget all filtered out entries.
> 
>> 
>>I agree that "locking" should not be exposed with optimistic
>>transactions.
>> 
>> 
>>Yeah I can't find a good way to do this really and it seems to be
>>opposite of what optimistic transactions are.
>> 
>> 
>>With pessimistic transactions, how do you expect to handle locking
>>order? For regular operations, user is responsible for setting
>>up some
>>locking order in order to not get a deadlock. With pessimistic
>>transaction, it's the cache itself who will order the calls.
>>Also, if
>>you lock anything that is read, you just end up locking
>>everything (or,
>>getting a deadlock). If you don't it's the same as issuing the
>>lock and
>>reading again (to check the locked value) - but you'd do that
>>internally
>>anyway. Therefore, I don't feel well about pessimistic
>>transactions neither.
>> 
>> 
>>The lock is done per key only for each invocation. There is no
>>ordering as only one is obtained at a time before it goes to 

Re: [infinispan-dev] Branching proposal

2017-03-28 Thread Galder Zamarreño
Why are we working on 9.1.x, 9.2.x and master in parallel? We normally work on 
master and maybe one more maintenance branch.

Except for occasional tricky backports (e.g. Radim's work) the rest has been 
pretty straightforward for me. Also, the number of backports I work on is low 
in general.
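
That said, for readers skimming the thread, Sebastian's proposal (quoted 
below) boils down to roughly this flow (a sketch; branch names are from his 
example):

# implement the fix on the lowest applicable maintenance branch
git checkout 9.1.x            # commit X lands here via the usual PR review

# merge upwards so the very same SHA1 exists on every branch
git checkout 9.2.x  && git merge 9.1.x
git checkout master && git merge 9.2.x

# after a maintenance release, merge with strategy "ours" so the release
# version changes in the poms don't propagate upstream
git merge -s ours upstream/9.2.x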

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 27 Mar 2017, at 11:33, Sebastian Laskawiec  wrote:
> 
> Hey!
> 
> We are about to start working on 9.1.x and 9.2.y branches so I would like to 
> propose alternative merging strategy.
> 
> Our current workflow looks like this:
> 
> X - new commit
> X` - cherry pick to maintenance branch
> --+---+---X- master
>   |\--X` 9.2.x
>   \---X``--- 9.1.x
> 
> Each commit needs to be reviewed in the master branch and backported to the 
> maintenance branches. From a maintenance perspective this is a bit painful, 
> since in the above example we need to get through the PR queue 3 times. Also 
> it's worth mentioning that X is not X` nor X``. Cherry-picking creates a copy 
> of a commit. This makes some useful tricks (like git tag --contains <sha>) a bit 
> harder to use. Finally, this approach allows the codebase to diverge from 
> maintenance branches very fast (someone might just forget to backport some of 
> the refactoring stuff).
> 
> The proposal:
> 
> X, Y - new commits
> / - merge commits
> --+-+--//--- master
>   |  \/---Y/ 9.2.x
>   \-X/-- 9.1.x
> 
> With the proposal, a developer should always implement a given feature in the 
> lowest possible maintenance branch. Then we will run a set of merges from 
> 9.1.x into 9.2.x and finally into master. The biggest advantage of this 
> approach is that given functionality (identified by a commit) will have the 
> same SHA1 for all branches. This will allow all tools (like the previously 
> mentioned `git tag --contains <sha>`) to work. There are also some further implications 
> of this approach:
>   • Merging commits should be performed very often (even automatically in 
> the night (if merged without any problems)).
>   • After releasing each maintenance release, someone will need to do a 
> merge with strategy `ours` (`git merge -s ours upstream/9.2.x`). This way we 
> will not have to solve version conflicts in poms.
>   • Since there is no nice way to rebase a merge commit, they should be 
> pushed directly into the master branch (without review, without CI). After 
> the merge, HEAD will change and CI will automatically pick the build. 
> Remember, merges should be done very often. So I assume there won't be any 
> problems most of the times.
>   • Finally, with this approach the code diverges slightly slower (at least 
> from my experience). Mainly because we don't need to remember to cherry-pick 
> individual commits. They are automatically "taken" by a merge.
> From my past experience, this strategy works pretty nice and can be almost 
> fully automated. It significantly lowers the maintenance pain around 
> cherry-picks. However there is nothing for free, and we would need to get 
> used to pushing merges directly into master (which is fine to me but some of 
> you might not like it).
> 
> Thanks,
> Sebastian
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Stream operations under lock

2017-03-27 Thread Galder Zamarreño

--
Galder Zamarreño
Infinispan, Red Hat

> On 21 Mar 2017, at 17:16, Dan Berindei  wrote:
> 
> I'm leaning towards option 1.
> 
> Are you thinking about also allowing the consumer to modify the entry,
> like JCache's EntryProcessors? For a consumer that can only modify the
> current entry, we could even "emulate" locking in an optimistic cache
> by catching the WriteSkewException and running the consumer again.
> 
> I wouldn't allow this to be mixed with other operations in a stream,
> because then you may have to run filters/mappers/sorting while holding
> the lock as well.

^ Would forEach w/ lock still run for all entries in the originator? If so, not 
being able to filter could be a pain. IOW, you'd be forcing all entries to be 
shipped to a node and the user to do their own filtering. Not ideal :\
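
To illustrate the concern, option 3 with key filtering would look something 
like this (entirely hypothetical API from Will's proposal quoted below; 
signatures are not final):

// Hypothetical usage of option 3; assumes an existing Cache<String, Integer>
// named "cache". Only the filtered keys are locked and handed to the consumer.
Set<String> keys = new HashSet<>(Arrays.asList("a", "b", "c"));
cache.getAdvancedCache().lockedStream()
     .filterKeys(keys) // avoids shipping every entry to the consumer
     .forEach((c, entry) -> entry.setValue(entry.getValue() + 1));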


> 
> Cheers
> Dan
> 
> 
> On Tue, Mar 21, 2017 at 5:37 PM, William Burns  wrote:
>> Some users have expressed the need to have some sort of forEach operation
>> that is performed where the Consumer is called while holding the lock for
>> the given key and subsequently released after the Consumer operation
>> completes.
>> 
>> Due to the nature of how streams work with retries and performing the
>> operation on the primary owner, this works out quite well with forEach to be
>> done in an efficient way.
>> 
>> The problem is that this only really works well with non tx and pessimistic
>> tx. This obviously leaves out optimistic tx, which at first I was a little
>> worried about. But after thinking about it more, this prelocking and
>> optimistic tx don't really fit that well together anyways. So I am thinking
>> whenever this operation is performed it would throw an exception not letting
>> the user use this feature in optimistic transactions.
>> 
>> Another question is what does the API for this look like. I was debating
>> between 3 options myself:
>> 
>> 1. AdvancedCache.forEachWithLock(BiConsumer<Cache<K, V>, CacheEntry<K, V>>
>> consumer)
>> 
>> This requires the least amount of changes, however the user can't customize
>> certain parameters that CacheStream currently provides (listed below - big
>> one being filterKeys).
>> 
>> 2. CacheStream.forEachWithLock(BiConsumer<Cache<K, V>, CacheEntry<K, V>> consumer)
>> 
>> This method would only be allowed to be invoked on the Stream if no other
>> intermediate operations were invoked, otherwise an exception would be
>> thrown. This still gives us access to all of the CacheStream methods that
>> aren't on the Stream interface (ie. sequentialDistribution,
>> parallelDistribution, parallel, sequential, filterKeys, filterKeySegments,
>> distributedBatchSize, disableRehashAware, timeout).
>> 
>> 3. LockedStream<CacheEntry<K, V>> AdvancedCache.lockedStream()
>> 
>> This requires the most changes, however the API would be the most explicit.
>> In this case the LockedStream would only have the methods on it that are
>> able to be invoked as noted above and forEach.
>> 
>> I personally feel that #3 might be the cleanest, but obviously requires
>> adding more classes. Let me know what you guys think and if you think the
>> optimistic exclusion is acceptable.
>> 
>> Thanks,
>> 
>> - Will
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] NoSuchMethodError in Spark simple tutorial

2017-03-27 Thread Galder Zamarreño
Ok, I think I know what I was doing wrong. I was trying to use the spark 
tutorial on its own, without the parent dependencies. As a result, the 
Infinispan client version being resolved was Alpha4.

In the simple tutorials, the parent pom defines an Infinispan dependency that's 
newer and the dependency resolution ends up picking up the newer version and 
hence it works.

I don't think this should be the case; the Spark tutorial shouldn't be relying 
on this kind of mixed dependency resolution. That's why I think there should 
have been a Spark release that fixes the Alpha4 problem.

On top of that, I find tutorials should be runnable on their own, without 
parent dependencies. This might make tutorial poms bigger or more verbose but I 
think it helps users when they try to run each one individually outside of the 
simple tutorials repo.
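
For instance, a standalone tutorial pom could pin the client explicitly 
instead of inheriting it (a sketch; the version shown is illustrative):

<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-client-hotrod</artifactId>
  <!-- illustrative: any client release newer than 9.0.0.Alpha4 -->
  <version>9.0.0.Final</version>
</dependency>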

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 24 Mar 2017, at 02:31, Gustavo Fernandes  wrote:
> 
> This was fixed [1]
> 
> Regarding the tutorial, I could run it "as is":
> 
> git clone https://github.com/infinispan/infinispan-simple-tutorials.git
> cd infinispan-simple-tutorials
> mvn clean install
> cd spark
> mvn exec:exec
> 
> 
> How did you reproduce the issue?
> 
> 
> [1] https://issues.jboss.org/browse/ISPRK-33
> 
> 
> Thanks,
> Gustavo
> 
> On Thu, Mar 23, 2017 at 1:40 PM, Galder Zamarreño  wrote:
> Hey guys,
> 
> The Spark Java simple tutorial [1] does not work as is due to a 
> dependency mix-up.
> 
> If you run that class, with the dependencies defined in the pom.xml, you get:
> 
> java.lang.NoSuchMethodError: 
> org.infinispan.client.hotrod.logging.Log.tracef(Ljava/lang/Throwable;Ljava/lang/String;II)V
> 
> The dependencies are defined as:
> 
> <dependency>
>   <groupId>org.infinispan</groupId>
>   <artifactId>infinispan-spark_2.11</artifactId>
>   <version>0.4</version>
> </dependency>
> ...
> 
> Looking at the dependency tree, I see this:
> 
> [INFO] +- org.infinispan:infinispan-spark_2.11:jar:0.4:compile
> [INFO] |  +- org.infinispan:infinispan-client-hotrod:jar:9.0.0.Alpha4:compile
> ...
> [INFO] |  +- org.infinispan.protostream:protostream:jar:3.0.5.Final:compile
> [INFO] |  |  +- org.jboss.logging:jboss-logging:jar:3.1.4.GA:compile
> 
> That logging jar seems to be an old one; it should be 3.3.x. I worked around this 
> by doing:
> 
> <dependency>
>   <groupId>org.infinispan</groupId>
>   <artifactId>infinispan-spark_2.11</artifactId>
>   <version>${version.spark-connector}</version>
>   <exclusions>
>     <exclusion>
>       <groupId>org.jboss.logging</groupId>
>       <artifactId>jboss-logging</artifactId>
>     </exclusion>
>   </exclusions>
> </dependency>
> <dependency>
>   <groupId>org.jboss.logging</groupId>
>   <artifactId>jboss-logging</artifactId>
>   <version>${version.jboss.logging}</version>
> </dependency>
> 
> @Adrian Are the dependencies of the latest protostream versions in line with 
> the infinispan hot rod client ones?
> 
> @Gustavo, Once there's a client version which depends on a protostream 
> version that fixes this (if there's not one already...), can you release a 
> 0.5 alpha/beta/cr version?
> 
> Cheers,
> 
> [1] http://infinispan.org/tutorials/simple/spark/
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
