Re: Use IgniteQueue in client

2017-11-13 Thread Evgeniy Stanilovskiy

Ticket https://issues.apache.org/jira/browse/IGNITE-6437 has already been created.


Hi,

I want to create an IgniteQueue on a server node and get a reference to this
queue on a client node. I get a null pointer exception on the client when I do
this.

This has been asked before in a very old thread and is reportedly fixed as
per this issue. But I am getting the same behaviour using both Ignite
versions 2.2 and 2.3. Can someone aware of the development in this regard
please update me with the status of this? Is this a reasonable expectation?
Or are there other ways to achieve this?

Thanks,
Arun



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: No heap cleanup on oldest node

2017-11-13 Thread vkulichenko
You should provide more information. What is your configuration? How do you
use the cluster (data, compute, services, ...)? What is consuming memory?
Did you try to analyze GC logs and heap dumps?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Spring xml for connecting to Oracle database

2017-11-13 Thread vkulichenko
The easiest way is to generate all required configuration using the Web Console tool:
https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration
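
In case a hand-written configuration is needed, below is a minimal, untested
Java-config sketch of an Oracle read-through store; the Spring XML produced by
the Web Console maps one-to-one onto these setters. The "oracleDataSource"
bean, schema, table, field names and the Person class are placeholders.

import java.sql.Types;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.cache.store.jdbc.dialect.OracleDialect;
import org.apache.ignite.configuration.CacheConfiguration;

public class OracleReadThroughConfig {
    public static void main(String[] args) {
        CacheJdbcPojoStoreFactory<Long, Person> storeFactory = new CacheJdbcPojoStoreFactory<>();
        storeFactory.setDataSourceBean("oracleDataSource"); // Oracle DataSource bean defined elsewhere in the Spring context
        storeFactory.setDialect(new OracleDialect());

        // Mapping between the PERSON table and the Person POJO.
        JdbcType personType = new JdbcType();
        personType.setCacheName("personCache");
        personType.setDatabaseSchema("MYSCHEMA");
        personType.setDatabaseTable("PERSON");
        personType.setKeyType(Long.class.getName());
        personType.setValueType(Person.class.getName());
        personType.setKeyFields(new JdbcTypeField(Types.NUMERIC, "ID", Long.class, "id"));
        personType.setValueFields(new JdbcTypeField(Types.VARCHAR, "NAME", String.class, "name"));
        storeFactory.setTypes(personType);

        CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
        ccfg.setCacheStoreFactory(storeFactory);
        ccfg.setReadThrough(true); // a cache miss loads the row from Oracle through the store

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg);
        }
    }

    // Placeholder model class; a real one would normally have getters/setters.
    public static class Person {
        public Long id;
        public String name;
    }
}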

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Error serialising arrays using Ignite 2.2 C# client

2017-11-13 Thread Raymond Wilson
Thanks Alexey.

-Original Message-
From: Alexey Popov [mailto:tank2.a...@gmail.com]
Sent: Tuesday, November 14, 2017 5:09 AM
To: user@ignite.apache.org
Subject: Re: Error serialising arrays using Ignite 2.2 C# client

Hi Raymond,

You are right. True multidimensional arrays are currently not supported by the
binary serializer (C#).
Jagged arrays work fine, so you can use them, or just a one-dimensional
array with 2D-index calculation.

Anyway, I opened a ticket:
https://issues.apache.org/jira/browse/IGNITE-6896
You can track progress on this issue.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Spring xml for connecting to Oracle database

2017-11-13 Thread Ganesh Kumar
Hi,
Do we have a sample Spring XML file for connecting to Oracle and performing
a "read-through" operation?

Ganesh



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


No heap cleanup on oldest node

2017-11-13 Thread dark
Hi, there

A 10-node Ignite cluster is configured.

On one of these nodes CPU utilization increased and the heap continued to
grow after minor GCs. The node then died after a full GC.

What is the cause?

:(



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Use IgniteQueue in client

2017-11-13 Thread arunkjn
Thanks Alexey.

I think the problem is that the Ignite.queue and Ignite.set javadocs mention
that null is an acceptable value, and also suggest that null can be used when
we do not want a new set or queue to be created.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous update Data Grid Cache

2017-11-13 Thread Denis Magda
Hi,

To avoid any downtime here I would recommend loading the new data set into the 
same cache and removing the old entries from it once the preloading is 
finished. This approach assumes that both old and new entries can co-exist in 
the cache.
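
As a rough illustration of this pattern (assuming each value carries a
hypothetical datasetVersion field so both generations can co-exist under their
own keys, and a cache named "records"):

import java.util.HashSet;
import java.util.Set;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class CacheReload {
    static void reload(Ignite ignite, Iterable<Record> newData, int newVersion) {
        IgniteCache<Long, Record> cache = ignite.cache("records");

        // 1. Preload the new generation next to the old one.
        try (IgniteDataStreamer<Long, Record> streamer = ignite.dataStreamer("records")) {
            streamer.allowOverwrite(true); // replace entries whose keys already exist
            for (Record r : newData)
                streamer.addData(r.id, r);
        }

        // 2. Once preloading is finished, remove the entries of the old generation.
        Set<Long> oldKeys = new HashSet<>();
        try (QueryCursor<Cache.Entry<Long, Record>> cur =
                 cache.query(new ScanQuery<Long, Record>((k, v) -> v.datasetVersion < newVersion))) {
            for (Cache.Entry<Long, Record> e : cur)
                oldKeys.add(e.getKey());
        }
        cache.removeAll(oldKeys);
    }

    static class Record {
        long id;
        int datasetVersion;
    }
}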

—
Denis

> On Oct 23, 2017, at 7:13 AM, blackfield  wrote:
> 
> 
> The use case is similar to OP.
> 
> We have a large table that we import/load from a dataset.
> 
> This dataset is generated periodically such that we need to re-load this
> whole dataset to Ignite.
> 
> If we re-load the new dataset against the same Ignite table, the users will
> be impacted during that window.
> 
> We are looking for ways to minimize the impact, e.g. load the data to
> another table->perform swap, etc.
> 
> Let me know if there are existing capabilities to minimize the above
> scenario.
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignite 2.3.0 hangs in startup

2017-11-13 Thread Sumanta Ghosh
Hi,
After enabling debug mode for Ignite logging, the log shows Ignite
continuously hitting timeouts like the following:

2017-11-13 21:33:00.770 DEBUG 11232 --- [r-#23%bfs-grid%]
o.a.i.i.p.timeout.GridTimeoutProcessor   : Timeout has occurred
[obj=CancelableTask [id=2ea6e16bf51-d6f29d66-da45-4fe6-afda-81d6f43452a1,
endTime=1510588980769, period=5000, cancel=false,
task=org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager$BackupCleaner@650053ad],
process=true]
2017-11-13 21:33:02.067 DEBUG 11232 --- [r-#23%bfs-grid%]
o.a.i.i.p.timeout.GridTimeoutProcessor   : Timeout has occurred
[obj=CancelableTask [id=bda6e16bf51-d6f29d66-da45-4fe6-afda-81d6f43452a1,
endTime=1510588982062, period=3000, cancel=false,
task=org.apache.ignite.internal.processors.query.GridQueryProcessor$2@ff3c744],
process=true]
2017-11-13 21:33:02.493 DEBUG 11232 --- [r-#23%bfs-grid%]
o.a.i.i.p.timeout.GridTimeoutProcessor   : Timeout has occurred
[obj=CancelableTask [id=dda6e16bf51-d6f29d66-da45-4fe6-afda-81d6f43452a1,
endTime=1510588982486, period=3000, cancel=false, task=MetricsUpdater
[prevGcTime=67, prevCpuTime=5183,
super=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$MetricsUpdater@62b4cff]],
process=true]
2017-11-13 21:33:03.581 DEBUG 11232 --- [r-#24%bfs-grid%]
o.a.i.s.c.tcp.TcpCommunicationSpi: Balancing data [min0=0, minIdx=0,
max0=-1, maxIdx=-1]
2017-11-13 21:33:04.115 DEBUG 11232 --- [r-#29%bfs-grid%]
o.a.i.i.p.odbc.ClientListenerProcessor   : Balancing data [min0=0, minIdx=0,
max0=-1, maxIdx=-1]
2017-11-13 21:33:04.193 DEBUG 11232 --- [r-#35%bfs-grid%]
o.a.i.i.p.r.p.tcp.GridTcpRestProtocol: Balancing data [min0=0, minIdx=0,
max0=-1, maxIdx=-1]   

The above is getting printed continuously in the log. Can anyone please help
me understand what I am missing here?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error serialising arrays using Ignite 2.2 C# client

2017-11-13 Thread Alexey Popov
Hi Raymond,

You are right. True multidimensional arrays are currently not supported by the
binary serializer (C#).
Jagged arrays work fine, so you can use them, or just a one-dimensional array
with 2D-index calculation.

Anyway, I opened a ticket: https://issues.apache.org/jira/browse/IGNITE-6896
You can track progress on this issue.
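
The "one-dimensional array with 2D-index calculation" workaround is plain
row-major index arithmetic; a small illustration (shown in Java here, the C#
equivalent differs only in syntax):

public class FlatMatrix {
    private final double[] data;
    private final int cols;

    public FlatMatrix(int rows, int cols) {
        this.data = new double[rows * cols];
        this.cols = cols;
    }

    // Row-major mapping: element (row, col) is stored at index row * cols + col.
    public double get(int row, int col) {
        return data[row * cols + col];
    }

    public void set(int row, int col, double value) {
        data[row * cols + col] = value;
    }
}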

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Adding more ignite-clients slows down the first client

2017-11-13 Thread Mikhail
Hi Tobias,

If you need to do initial loading of a large amount of data, please read
the following section:
https://apacheignite.readme.io/docs/streaming--cep


>BUT when I add one more client to try to increased (scale horizontally) 
>I clearly see that the INSERT speed decreases on the first client by a
magnitude 
>(dropping about 7,500 inserts/second from it’s 19,000 !!!)

You didn't scale it horizontally, you just added more load on a single Ignite
server. If you need to scale, you have to add more server nodes. You set
backups=1, which means you need to add at least 2 more server nodes to see it
scale horizontally and linearly.

>Although the total speed increases to around ~25,000 INSERTs / second, this
is still not good, and does not scale very well.

Ignite scales pretty well; it looks like your *one* Ignite server node has
reached its limit. Check the CPU load on the server node - I believe you will
see 100% CPU utilization. So to scale you need to add server nodes that will
store data, because with your setup there is only 1 node trying to handle
traffic from 4 client nodes.
Also, you might have reached the network I/O limit; in that case you likewise
need to add more server nodes so that the network traffic from the 4 clients
is split across several server nodes.
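
For reference, a rough sketch of the data streamer approach from the link
above (the cache name, key type and tuning values are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class BulkLoad {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            // The streamer batches updates per node instead of sending one
            // update at a time, which is what the INSERT-per-row path does.
            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
                streamer.perNodeBufferSize(1024);      // entries buffered per node before a flush
                streamer.perNodeParallelOperations(8); // concurrent batches in flight per node

                for (long i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "value-" + i);
            } // close() flushes any remaining buffered data
        }
    }
}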




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Trouble with v 2.3.0 on AIX

2017-11-13 Thread Vladimir
Logs uploaded. Node1 started first. Node2 couldn't join the cluster. Search
for first "java.lang.IllegalArgumentException: null" in node2.log 

node1.zip
  

node2.zip
  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Use IgniteQueue in client

2017-11-13 Thread Alexey Kukushkin
Created https://issues.apache.org/jira/browse/IGNITE-6889 (ignite.queue()
returns null when queue configuration is null)


Re: Use IgniteQueue in client

2017-11-13 Thread Alexey Kukushkin
Hi,

Two things:

   1. You must use the same configuration (limit and CollectionConfiguration)
   on both client and server.
   2. You cannot pass "null" as the CollectionConfiguration. Pass an empty
   CollectionConfiguration (new CollectionConfiguration()) if you have no
   specific settings.

This code must work (I just tried a queue sample):

Server:
IgniteQueue q = ignite.queue("MyQueue", 10, new CollectionConfiguration());

Client:
IgniteQueue q = ignite.queue("MyQueue", 10, new CollectionConfiguration());


I do see a usability issue here that we cannot specify null as a collection
configuration. I will open a bug.


Re: Computation best practices

2017-11-13 Thread ihorps
hi @luqmanahmad
I was thinking about this a little bit in my project as well... and I'm not
sure the cluster group is the best direction here. One way to think about
efficient resource usage is to bring job stealing into your cluster. In this
case you keep the "default" setup where data collocation is used, and if there
are nodes that are just idle, data can be transferred to them and the
calculation runs there.
But you would have to evaluate, based on your real data distribution, the best
trade-off between job stealing and collocated job calculations.
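
A rough configuration sketch of the job stealing mentioned above (the
threshold values are illustrative, not recommendations):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi;
import org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi;

public class JobStealingSetup {
    public static void main(String[] args) {
        JobStealingCollisionSpi collisionSpi = new JobStealingCollisionSpi();
        collisionSpi.setActiveJobsThreshold(50);  // jobs a node executes concurrently before queueing
        collisionSpi.setWaitJobsThreshold(0);     // allow stealing as soon as jobs start to queue
        collisionSpi.setMaximumStealingAttempts(5);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCollisionSpi(collisionSpi);
        // The failover SPI must be job-stealing aware so stolen jobs are re-routed.
        cfg.setFailoverSpi(new JobStealingFailoverSpi());

        Ignition.start(cfg);
    }
}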

Best regards,
Ihor P.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Computation best practices

2017-11-13 Thread luqmanahmad
Hi Christos,

Thanks for getting back to me. 

There is no specific need for it - I am just trying to get my head around the
best practices; in that case using the data nodes to perform the computation
does make sense to me. To be honest I am just a bit confused about cluster
groups and trying to find a use case for them (apart from collecting metrics
and broadcasting). The Cluster API does make sense and I am already using it,
but I am stuck on where groups can be useful - or maybe I am thinking too
much :)

Thanks,
Luqman



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Use IgniteQueue in client

2017-11-13 Thread arunkjn
Hi,

I want to create an IgniteQueue on a server node and get a reference to this
queue on a client node. I get a null pointer exception on the client when I do
this.

This has been asked before in a very old thread and is reportedly fixed as
per this issue. But I am getting the same behaviour using both Ignite
versions 2.2 and 2.3. Can someone aware of the development in this regard
please update me with the status of this? Is this a reasonable expectation?
Or are there other ways to achieve this?

Thanks,
Arun



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Computation best practices

2017-11-13 Thread Christos Erotocritou
Hi Luqman,

Is there a specific reason why you want to keep the data nodes separate from 
the compute nodes?
As you say this beats the point of collocation. You should use the data nodes 
for compute and ensure you have a way to monitor and kill spurious tasks that 
may be executed on the grid.
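
A minimal sketch of that approach - sending the computation to the data node
that owns the key instead of pulling the data out (cache and key names are
made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class AffinityComputeExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache("orders");
            long orderId = 42L;
            cache.put(orderId, "payload");

            // Runs on the primary node for 'orderId'; the get() below is a
            // local read on that node, so no data travels over the network.
            ignite.compute().affinityRun("orders", orderId, () -> {
                String value = Ignition.localIgnite().<Long, String>cache("orders").get(orderId);
                System.out.println("Processing " + value + " where it lives");
            });
        }
    }
}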

C.

> On 13 Nov 2017, at 11:50, luqmanahmad  wrote:
> 
> Hi there,
> 
> Just trying to clarify few bits in my head around data nodes and compute
> nodes.
> 
> Let's say we have 10 data nodes which are solely storing the data using
> affinity collocation and we have 10 compute nodes as well for computing
> different tasks on the cluster.
> 
> Now we know that if we want to perform some operation on the data and we
> know where it resides we can use the affinity API to perform some operation
> on it which is indeed much better as there would be no data movement across
> the node which makes sense as well. But then on the other side, we have got
> compute nodes as well which are just sitting idle. Although we have got the
> luxury of using distributed closures then wouldn't it be an overhead of
> carrying all the data to a compute node to perform and then sending the data
> back.
> 
> Just trying to find a use case where the separate group of the cluster could
> be useful. For example data-node, compute-node etc. If anyone can clear this
> would be much appreciated.
> 
> Thanks,
> Luqman
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Computation best practices

2017-11-13 Thread luqmanahmad
Hi there,

Just trying to clarify a few bits in my head around data nodes and compute
nodes.

Let's say we have 10 data nodes which solely store the data using affinity
collocation, and we also have 10 compute nodes for running different tasks on
the cluster.

Now we know that if we want to perform some operation on the data and we know
where it resides, we can use the affinity API to run the operation there,
which is much better since there is no data movement across nodes. But on the
other side we have compute nodes which are just sitting idle. Although we have
the luxury of distributed closures, wouldn't it be an overhead to carry all
the data to a compute node for processing and then send the results back?

Just trying to find a use case where separate cluster groups could be useful,
for example data-node, compute-node, etc. If anyone can clarify this it would
be much appreciated.
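
For what it's worth, a rough sketch of such a data-node / compute-node split
using cluster groups built from a made-up "node.role" user attribute:

import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterGroup;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RoleBasedClusterGroups {
    public static void main(String[] args) {
        // Each node is started with its role attribute, e.g. "data" or "compute".
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setUserAttributes(Collections.singletonMap("node.role", "compute"));

        try (Ignite ignite = Ignition.start(cfg)) {
            // Restrict a broadcast to the compute nodes only.
            ClusterGroup computeNodes = ignite.cluster().forAttribute("node.role", "compute");
            ignite.compute(computeNodes).broadcast(() -> System.out.println("Running on a compute node"));

            // The same attribute can drive cache node filters so data stays on data nodes.
            ClusterGroup dataNodes = ignite.cluster().forAttribute("node.role", "data");
            System.out.println("Data nodes in topology: " + dataNodes.nodes().size());
        }
    }
}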

Thanks,
Luqman




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error when deploying a service into an Ignite cluster

2017-11-13 Thread Andrey Mashenkov
Hi,

Would you please share a stacktrace?
It looks like the error occurred while service deployment was in progress.

On Mon, Nov 13, 2017 at 5:44 AM, Raymond Wilson 
wrote:

> I have made a simple C# service in Ignite 2.2. When I deploy it, like this:
>
>
>
> services.DeployNodeSingleton(“AddSurveyedSurface”, new
> AddSurveyedSurfaceService());
>
> I get the following error:
>
>
>
> ERROR 2017-11-13 15:29:35,984 319109ms GridServiceProcessor
> ?  - Error when executing service: AddSurveyedSurface
>
> Below is the filtered section of the DEBUG level log from BareTail which
> has references to the GridServiceProcessor or to AddSurveyedSurface, but
> I’m having trouble seeing what the error is. Is there a way of getting more
> specific error information?
>
>
>
>
>
> 2017-11-13 15:39:42:0661 + 245 DEBUG 2017-11-13
> 15:39:42,402 92483ms query
> ?  - Filter invoked for event
> [evt=CacheContinuousQueryEvent [evtType=CREATED,
> key=GridServiceDeploymentKey [name=AddSurveyedSurface],
> newVal=GridServiceDeployment [nodeId=2c73436e-1136-446e-b513-98db94337601,
> cfg=LazyServiceConfiguration [srvcClsName=org.apache.
> ignite.internal.processors.platform.dotnet.PlatformDotNetServiceImpl,
> svcCls=, nodeFilterCls=IsAllPredicate]], oldVal=null, partCntr=38],
> primary=true, notify=true]
>
> 2017-11-13 15:39:42:0661 + 246 DEBUG 2017-11-13
> 15:39:42,402 92483ms query
> ?  - Send the following event to listener:
> CacheContinuousQueryEntry [evtType=CREATED, key=KeyCacheObjectImpl
> [part=17, val=GridServiceDeploymentKey [name=AddSurveyedSurface],
> hasValBytes=true], newVal=CacheObjectImpl [val=GridServiceDeployment
> [nodeId=2c73436e-1136-446e-b513-98db94337601,
> cfg=LazyServiceConfiguration [srvcClsName=org.apache.
> ignite.internal.processors.platform.dotnet.PlatformDotNetServiceImpl,
> svcCls=, nodeFilterCls=IsAllPredicate]], hasValBytes=true], oldVal=null,
> cacheId=-2100569601, part=17, updateCntr=38, flags=0,
> topVer=AffinityTopologyVersion [topVer=6, minorTopVer=0], filteredCnt=0]
>
> 2017-11-13 15:39:42:0661 + 247 ERROR 2017-11-13
> 15:39:42,402 92483ms GridServiceProcessor
> ?  - Error when executing service: AddSurveyedSurface
>
> 2017-11-13 15:39:42:0661 + 248 DEBUG 2017-11-13
> 15:39:42,402 92483ms query
> ?  - Entry updated on affinity node
> [evt=CacheContinuousQueryEvent [evtType=CREATED,
> key=GridServiceDeploymentKey [name=AddSurveyedSurface],
> newVal=GridServiceDeployment [nodeId=2c73436e-1136-446e-b513-98db94337601,
> cfg=LazyServiceConfiguration [srvcClsName=org.apache.
> ignite.internal.processors.platform.dotnet.PlatformDotNetServiceImpl,
> svcCls=, nodeFilterCls=IsAllPredicate]], oldVal=null, partCntr=38],
> primary=true]
>
> 2017-11-13 15:39:42:0661 + 249 DEBUG 2017-11-13
> 15:39:42,402 92483ms query
> ?  - Filter invoked for event
> [evt=CacheContinuousQueryEvent [evtType=CREATED,
> key=GridServiceDeploymentKey [name=AddSurveyedSurface],
> newVal=GridServiceDeployment [nodeId=2c73436e-1136-446e-b513-98db94337601,
> cfg=LazyServiceConfiguration [srvcClsName=org.apache.
> ignite.internal.processors.platform.dotnet.PlatformDotNetServiceImpl,
> svcCls=, nodeFilterCls=IsAllPredicate]], oldVal=null, partCntr=38],
> primary=true, notify=true]
>
> 2017-11-13 15:39:42:0661 + 250 DEBUG 2017-11-13
> 15:39:42,403 92484ms query
> ?  - Send the following event to listener:
> CacheContinuousQueryEntry [evtType=CREATED, key=KeyCacheObjectImpl
> [part=17, val=GridServiceDeploymentKey [name=AddSurveyedSurface],
> hasValBytes=true], newVal=CacheObjectImpl [val=GridServiceDeployment
> [nodeId=2c73436e-1136-446e-b513-98db94337601,
> cfg=LazyServiceConfiguration [srvcClsName=org.apache.
> ignite.internal.processors.platform.dotnet.PlatformDotNetServiceImpl,
> svcCls=, nodeFilterCls=IsAllPredicate]], hasValBytes=true], oldVal=null,
> cacheId=-2100569601, part=17, updateCntr=38, flags=0,
> topVer=AffinityTopologyVersion [topVer=6, minorTopVer=0], filteredCnt=0]
>
> 2017-11-13 15:39:42:0661 + 251 DEBUG 2017-11-13
> 15:39:42,403 92484ms GridCacheMapEntry
> ?  - Updated cache entry [val=CacheObjectImpl
> [val=GridServiceDeployment [nodeId=2c73436e-1136-446e-b513-98db94337601,
> cfg=LazyServiceConfiguration [srvcClsName=org.apache.
> ignite.internal.processors.platform.dotnet.PlatformDotNetServiceImpl,
> svcCls=, nodeFilterCls=IsAllPredicate]], hasValBytes=true], old=null,
> entry=GridDhtCacheEntry [rdrs=[], part=17, super=GridDistributedCacheEntry
> [super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=17,
> val=GridServiceDeploymentKey [name=AddSurveyedSurface], hasValBytes=true],
> val=CacheObjectImpl [val=GridServiceDeployment 
> [nodeId=2c73436e-1136-446e-b513-98db94337601,
> cfg=LazyServiceConfiguration [srvcClsName=org.apache.
> 

Cluster nodes failures while under load

2017-11-13 Thread alin-corodescu
Hello,

I have a pretty vague description for the problem I am having with my Ignite
cluster and I was wondering if anyone is experiencing similar behaviour. I
am holding an in-memory database on an Ignite cluster, with about 500 GB of
data in it (across 3 nodes). Data is constantly being streamed into the
cache while other entries are evicted / expire (no memory problems). When
trying to run complex queries (diverse, and sometimes they work, sometimes
they don't) using the JDBC driver, some nodes fail: they either leave the
topology (but somehow the node stays up, forming a topology by itself), or the
Ignite node gets shut down completely. This usually happens after a spike in
CPU usage (due to query execution). Logs aren't that helpful in this matter,
simply saying that the respective node is unreachable. I tested this cluster
using the 2.1 version of Ignite, which doesn't lazily stream the result sets
from other nodes to the reducer node (as can be done in 2.3), and I suspect
the behaviour might be caused by loading the whole result set in memory at
once.
I tried adjusting the JVM heap size to 20G per node, set the failure detection
timeout on each node to 5 (10 times higher than the default), reduced query
parallelism for the cache, and increased the system thread pool size, but to
no avail. One thing to mention is that the CPU usage even during spikes was
only at about 70%.
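
For reference, this is roughly how the lazy result-set streaming mentioned
above is requested through the Java API in 2.3 (the query text is
illustrative):

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LazyQueryExample {
    static void runHeavyQuery(IgniteCache<Long, Object> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT col1, col2 FROM BigTable WHERE col3 > ?").setArgs(100);

        // Lazy execution streams result pages instead of materializing the
        // whole result set on the reducer, reducing heap pressure.
        qry.setLazy(true);

        try (QueryCursor<List<?>> cursor = cache.query(qry)) {
            for (List<?> row : cursor)
                process(row);
        }
    }

    static void process(List<?> row) { /* consume one row */ }
}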

I am trying my luck here, maybe someone experienced something similar, as I
am aware that the description is not very precise. I will update with any
other findings which are relevant to this problem.

Thank you,
Alin



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Node failed to startup due to deadlock

2017-11-13 Thread Alexey Popov
Oops, there is an issue with the text formatting below. It should be:

just replace
cache2.lock("fake");
with
Lock lock = ignite0.reentrantLock("fake", true, false, true);
where ignite0 is a "final" copy for the new Thread:
final Ignite ignite0 = ignite;
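
A short usage sketch of that workaround (the lock name "fake" is taken from
the snippet above, the rest is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteLock;

public class ReentrantLockWorkaround {
    static void doGuardedWork(Ignite ignite) {
        // failoverSafe=true, fair=false, create=true - same flags as above.
        IgniteLock lock = ignite.reentrantLock("fake", true, false, true);
        lock.lock();
        try {
            // critical section that previously relied on cache2.lock("fake")
        }
        finally {
            lock.unlock();
        }
    }
}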

Thank you,
Alexey

From: Alexey Popov
Sent: Monday, November 13, 2017 2:12 PM
To: user@ignite.apache.org
Subject: RE: Node failed to startup due to deadlock

Hi Naresh,

I still don't have a clear understanding of your case. Most probably you
just need a Cache Store with read-through enabled. Please have a look at [1]
and the Cache Store examples.

As for the code provided - you can use a workaround here until
https://issues.apache.org/jira/browse/IGNITE-6380 is ready. Please use
Ignite.reentrantLock() instead of a transactional cache entry lock, i.e.:

just a replace 

with 

where ignite0 is a "final" copy for a new Thread


[1] https://apacheignite.readme.io/docs/3rd-party-store

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Node failed to startup due to deadlock

2017-11-13 Thread Alexey Popov
Hi Naresh,

I still don't have a clear understanding of your case. Most probably you
just need a Cache Store with read-through enabled. Please have a look at [1]
and the Cache Store examples.

As for the code provided - you can use a workaround here until
https://issues.apache.org/jira/browse/IGNITE-6380 is ready. Please use
Ignite.reentrantLock() instead of a transactional cache entry lock, i.e.:

just a replace 

with 

where ignite0 is a "final" copy for a new Thread


[1] https://apacheignite.readme.io/docs/3rd-party-store

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Prohibit "Node is out of topology"

2017-11-13 Thread Lukas Lentner
Hi,

Pretty randomly we discover that certain Ignite nodes shut down without a 
reasonable external cause. The grid was not in use at the time this happened 
(no load on the system):

2017-11-11 06:06:26:752 + [grid-timeout-worker-#23] INFO 
org.apache.ignite.internal.IgniteKernal -
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=9512cd22, uptime=18:03:23.333]
^-- H/N/C [hosts=1, nodes=6, CPUs=8]
^-- CPU [cur=0.1%, avg=0.09%, GC=0%]
^-- PageMemory [pages=2256]
^-- Heap [used=4831MB, free=60.68%, comm=12288MB]
^-- Non heap [used=102MB, free=96.91%, comm=106MB]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]
^-- Outbound messages queue [size=0]
2017-11-11 06:06:26:752 + [grid-timeout-worker-#23] INFO 
org.apache.ignite.internal.IgniteKernal - FreeList [name=null, buckets=256, 
dataPages=1, reusePages=0]
2017-11-11 06:07:08:789 + [tcp-disco-sock-reader-#5] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Finished serving remote 
node connection [rmtAddr=/10.10.100.251:58485, rmtPort=58485
2017-11-11 06:07:09:630 + [tcp-disco-srvr-#3] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - TCP discovery accepted 
incoming connection [rmtAddr=/10.10.100.251, rmtPort=45719]
2017-11-11 06:07:09:630 + [tcp-disco-srvr-#3] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - TCP discovery spawning a 
new thread for connection [rmtAddr=/10.10.100.251, rmtPort=45719]
2017-11-11 06:07:09:631 + [tcp-disco-sock-reader-#10] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Started serving remote 
node connection [rmtAddr=/10.10.100.251:45719, rmtPort=45719]
2017-11-11 06:07:09:632 + [tcp-disco-sock-reader-#10] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Received ping request 
from the remote node [rmtNodeId=c68d7211-41ac-4364-81b5-46f55f62463e, 
rmtAddr=/10.10.100.251:45719, rmtPort=45719]
2017-11-11 06:07:09:632 + [tcp-disco-sock-reader-#10] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Finished writing ping 
response [rmtNodeId=c68d7211-41ac-4364-81b5-46f55f62463e, 
rmtAddr=/10.10.100.251:45719, rmtPort=45719]
2017-11-11 06:07:09:632 + [tcp-disco-sock-reader-#10] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Finished serving remote 
node connection [rmtAddr=/10.10.100.251:45719, rmtPort=45719
2017-11-11 06:07:09:860 + [tcp-disco-srvr-#3] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - TCP discovery accepted 
incoming connection [rmtAddr=/10.10.100.251, rmtPort=60777]
2017-11-11 06:07:09:860 + [tcp-disco-srvr-#3] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - TCP discovery spawning a 
new thread for connection [rmtAddr=/10.10.100.251, rmtPort=60777]
2017-11-11 06:07:09:860 + [tcp-disco-sock-reader-#11] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Started serving remote 
node connection [rmtAddr=/10.10.100.251:60777, rmtPort=60777]
2017-11-11 06:07:09:864 + [tcp-disco-msg-worker-#2] WARN 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Node is out of topology 
(probably, due to short-time network problems).
2017-11-11 06:07:09:864 + [disco-event-worker-#41] WARN 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager - Local node 
SEGMENTED: TcpDiscoveryNode [id=9512cd22-4e04-4627-9cd7-902b0143725c, 
addrs=[127.0.0.1, 172.17.0.2], sockAddrs=[/127.0.0.1:31000, 
2f35d5160c01/172.17.0.2:31000], discPort=31000, order=2, intOrder=2, 
lastExchangeTime=1510380429855, loc=true, ver=2.3.0#20171028-sha1:8add7fd5, 
isClient=false]
2017-11-11 06:07:09:865 + [tcp-disco-sock-reader-#11] INFO 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Finished serving remote 
node connection [rmtAddr=/10.10.100.251:60777, rmtPort=60777
2017-11-11 06:07:09:867 + [disco-event-worker-#41] WARN 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager - Stopping 
local node according to configured segmentation policy.

The Ignite node then shuts down (which is the correct behaviour for a failed 
node). Our failsafe mechanisms can recover from this, but we would like to know 
how to prevent these node failures in the future. What could be the reason for 
such a node segmentation? As this runs on AWS, and we even see a similar error 
when the nodes are running on the same VM, I am pretty sure it is NOT a network 
issue ...

Thanks
Lukas






Lukas Lentner, B. Sc.
St.-Cajetan-Straße 13
81669 München
Deutschland
Fon: +49 / 89  / 44 38 61 27
Mobile:  +49 / 176 / 24 77 09 22
E-Mail:  kont...@lukaslentner.de
Website: www.LukasLentner.de

IBAN:DE33 7019  0001 1810 17
BIC: GENODEF1M01 (Münchner Bank)



Re: Problem with spring-data-commons 2

2017-11-13 Thread Alexey Kukushkin
Hi,

Apparently you cannot just bump the Spring dependency because of the new
deleteAll(Iterable ids) method added in CrudRepository 2.0, which conflicts
with the same method in our IgniteRepository.

I opened a ticket to support Spring Data 2.0:
https://issues.apache.org/jira/browse/IGNITE-6879


RE: write behind performance impacting main thread. Write behindbuffer is never full

2017-11-13 Thread Alexey Popov
Hi Larry,

BTW, there is an open improvement
https://issues.apache.org/jira/browse/IGNITE-5003
for the issue you faced.

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Incorrect map query built when joining with a subquery with group by statement

2017-11-13 Thread alin-corodescu
Following up on the previous discussion, I have filed an issue for this
problem: https://issues.apache.org/jira/browse/IGNITE-6865
I have found a workaround for this issue by using FROM (select * from Persons)
instead of FROM Persons directly, which seems like a bug, because the two
queries should be semantically equivalent.
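
For illustration, the workaround expressed through the Java API (table and
column names are made up):

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class SubselectWorkaround {
    static List<List<?>> query(IgniteCache<?, ?> cache) {
        // Instead of "FROM Persons p JOIN (...) agg", wrap the table in a subselect:
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT p.name, agg.cnt " +
            "FROM (SELECT * FROM Persons) p " +
            "JOIN (SELECT city, COUNT(*) AS cnt FROM Persons GROUP BY city) agg " +
            "  ON p.city = agg.city");

        return cache.query(qry).getAll();
    }
}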



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/