Re: HTTP Session Clustering

2019-04-02 Thread Oleg Cohen
Another quick update:

Adding this to my pom.xml:

*

solved the issue. So I am missing an import; I am just not sure which one.

Here are my imports:


org.apache.logging.log4j;version='[2.8.0,3.0.0)';provider=paxlogging,
org.apache.logging.log4j.message;version='[2.8.0,3.0.0)';provider=paxlogging,
org.apache.logging.log4j.spi;version='[2.8.0,3.0.0)';provider=paxlogging,
org.apache.logging.log4j.util;version='[2.8.0,3.0.0)';provider=paxlogging,
org.slf4j;version='[1.7.0,1.8.0)';provider=paxlogging,
javax.servlet;version='[2.5,3.2)',
javax.servlet.http;version='[2.5,3.2)',
javax.servlet.jsp,
javax.servlet.jsp.tagext,
com.hazelcast.web
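
A list like the one above is usually generated from the maven-bundle-plugin's Import-Package instruction in pom.xml. The sketch below is only an assumption about how this WAB might be built: the plugin version, the context path, and the trailing `*` wildcard (which appears to be what the stripped snippet at the top of this mail added) are illustrative, not taken from the actual project:

```xml
<!-- Hypothetical maven-bundle-plugin configuration carrying the
     Import-Package list above. Versions and context path are illustrative. -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <version>4.2.0</version>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Web-ContextPath>/myapp</Web-ContextPath>
      <Import-Package>
        org.apache.logging.log4j;version='[2.8.0,3.0.0)';provider=paxlogging,
        org.slf4j;version='[1.7.0,1.8.0)';provider=paxlogging,
        javax.servlet;version='[2.5,3.2)',
        javax.servlet.http;version='[2.5,3.2)',
        com.hazelcast.web,
        *
      </Import-Package>
    </instructions>
  </configuration>
</plugin>
```

The final `*` tells bnd to import every remaining package it detects, which is why adding it can mask a single missing explicit import.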


Thank you,
Oleg


Re: HTTP Session Clustering

2019-04-02 Thread Oleg Cohen
Hi JB,

Wanted to provide a quick update. My ultimate environment is pretty complex 
with a lot of bundles and XML-related components. To eliminate these 
dependencies I started with a plain vanilla karaf, installed cellar, and built 
a very simple WAB.

The behavior is different and looks better, but I have a new issue now. When the 
application starts and session clustering is initialized, I get the following 
exception, which keeps being thrown:

2019-04-02T05:59:49,475 | ERROR | hz._hzInstance_1_cellar.IO.thread-in-1 | NodeEngine | 111 - com.hazelcast - 3.9.1 | [127.0.0.1]:5702 [cellar] [3.9.1] Failed to process:Packet{partitionId=-1, conn=Connection[id=2, /127.0.0.1:5702->/127.0.0.1:64100, endpoint=null, alive=true, type=MEMBER], rawFlags=10, isUrgent=false, packetType=BIND, typeSpecificFlags=}
com.hazelcast.nio.serialization.HazelcastSerializationException: No DataSerializerFactory registered for namespace: 0
	at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:137) ~[111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:105) ~[111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:50) ~[111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:48) ~[111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:185) ~[111:com.hazelcast:3.9.1]
	at com.hazelcast.nio.tcp.TcpIpConnectionManager.handle(TcpIpConnectionManager.java:213) ~[111:com.hazelcast:3.9.1]
	at com.hazelcast.spi.impl.NodeEngineImpl$ConnectionManagerPacketHandler.handle(NodeEngineImpl.java:199) ~[111:com.hazelcast:3.9.1]
	at com.hazelcast.spi.impl.PacketDispatcher.handle(PacketDispatcher.java:73) [111:com.hazelcast:3.9.1]
	at com.hazelcast.nio.tcp.MemberChannelInboundHandler.handlePacket(MemberChannelInboundHandler.java:71) [111:com.hazelcast:3.9.1]
	at com.hazelcast.nio.tcp.MemberChannelInboundHandler.onRead(MemberChannelInboundHandler.java:54) [111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.networking.nio.NioChannelReader.handle(NioChannelReader.java:138) [111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.networking.nio.NioThread.handleSelectionKey(NioThread.java:401) [111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.networking.nio.NioThread.handleSelectionKeys(NioThread.java:386) [111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:293) [111:com.hazelcast:3.9.1]
	at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:248) [111:com.hazelcast:3.9.1]

Let me know if you have any thoughts on what is happening.

I will investigate separately what in my bundles might have been causing the 
original exception.

Thank you,
Oleg


Re: HTTP Session Clustering

2019-04-01 Thread Jean-Baptiste Onofré
Thanks for the update.

It could be related to the Karaf XML spec we added in lib/endorsed.

Let me check and try to reproduce.

Regards
JB


-- 
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: HTTP Session Clustering

2019-04-01 Thread Oleg Cohen
Hi JB,

Thank you for replying!

Karaf: 4.2.4
JDK: 8u202

Standard distribution. I have a feature that lists the features required by the 
doc as dependencies:

http
http-whiteboard
cellar
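
The archive stripped the XML markup from the listing above. Assuming a standard Karaf feature repository, the descriptor presumably looked something like this sketch (the repository and feature names here are invented):

```xml
<!-- Hypothetical feature descriptor pulling in the features named above. -->
<features name="my-features" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
  <feature name="my-app" version="1.0.0">
    <feature>http</feature>
    <feature>http-whiteboard</feature>
    <feature>cellar</feature>
  </feature>
</features>
```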

Thank you,
Oleg




Re: HTTP Session Clustering

2019-04-01 Thread Jean-Baptiste Onofré
Hi Oleg,

Is the cellar feature installed correctly (providing the Hazelcast instance)?

Which Karaf version are you using? Is it a custom distro?

Regards
JB


-- 
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


HTTP Session Clustering

2019-04-01 Thread Oleg Cohen
Greetings,

I wonder if anybody ran into a similar issue. I followed the setup instructions 
here:
https://karaf.apache.org/manual/cellar/latest-4/#_enable_cluster_http_session_replication

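For context, the session-replication setup in the linked manual page amounts to wiring hazelcast-wm's servlet filter and listener into the application's web.xml. A minimal sketch with illustrative init parameters (the linked doc has the authoritative values):

```xml
<!-- Hazelcast WM session clustering wired into web.xml.
     The filter name and map-name value are illustrative. -->
<filter>
  <filter-name>hazelcast-filter</filter-name>
  <filter-class>com.hazelcast.web.WebFilter</filter-class>
  <init-param>
    <param-name>map-name</param-name>
    <param-value>my-sessions</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>hazelcast-filter</filter-name>
  <url-pattern>/*</url-pattern>
  <dispatcher>FORWARD</dispatcher>
  <dispatcher>INCLUDE</dispatcher>
  <dispatcher>REQUEST</dispatcher>
</filter-mapping>
<listener>
  <listener-class>com.hazelcast.web.SessionListener</listener-class>
</listener>
```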
Now that Karaf runs I am seeing this exception:

2019-04-01T09:31:09,489 | INFO  | .hazelcast-wm.ensureInstance | ClusteredSessionService | 68 - com.hazelcast - 3.9.1 | Retrying the connection!!
2019-04-01T09:31:09,490 | INFO  | .hazelcast-wm.ensureInstance | HazelcastInstanceLoader | 68 - com.hazelcast - 3.9.1 | Creating a new HazelcastInstance for session replication
2019-04-01T09:31:09,492 | WARN  | .hazelcast-wm.ensureInstance | ClusteredSessionService | 68 - com.hazelcast - 3.9.1 | Cannot connect to Hazelcast server: XPathFactory#newInstance() failed to create an XPathFactory for the default object model: http://java.sun.com/jaxp/xpath/dom with the XPathFactoryConfigurationException: java.util.ServiceConfigurationError: javax.xml.xpath.XPathFactory: Provider org.apache.xpath.jaxp.XPathFactoryImpl not found
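
The warning means that XPathFactory#newInstance() resolved the provider class name org.apache.xpath.jaxp.XPathFactoryImpl (Xalan's JAXP implementation), typically via a jaxp.properties file or an endorsed jar, but that class is not visible on the classpath. As an illustration of the lookup mechanism only (not a suggested fix), a jaxp.properties entry pinning the factory for the default object model looks like this; the value shown is the stock JDK 8 internal default:

```properties
# $JAVA_HOME/jre/lib/jaxp.properties (illustrative).
# The key is javax.xml.xpath.XPathFactory plus ":" plus the object-model URI;
# the value names the provider class that newInstance() will try to load.
javax.xml.xpath.XPathFactory:http://java.sun.com/jaxp/xpath/dom=com.sun.org.apache.xpath.internal.jaxp.XPathFactoryImpl
```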

I would appreciate suggestions on how to fix this.

Thank you,
Oleg