Ignite cluster getting stuck when a new node joins or leaves

2018-06-03 Thread Sambhaji Sawant
I have a 3-node cluster with 20+ clients, running in a Spark context. Initially
it works fine, but the cluster randomly becomes inoperative whenever a new node
(i.e., a client) tries to connect. I got the following logs when it was stuck.
If I restart any Ignite server explicitly, the cluster recovers and works fine.
I am using Ignite 2.4.0; the same issue occurs in Ignite 2.5.0 as well.

Client-side logs

Failed to wait for partition map exchange [topVer=AffinityTopologyVersion
[topVer=44, minorTopVer=0], node=4d885cfd-45ed-43a2-8088-f35c9469797f].
Dumping pending objects that might be the cause:

GridDhtPartitionsExchangeFuture
[topVer=AffinityTopologyVersion [topVer=44, minorTopVer=0],
evt=NODE_JOINED, evtNode=TcpDiscoveryNode
[id=4d885cfd-45ed-43a2-8088-f35c9469797f, addrs=[0:0:0:0:0:0:0:1%lo,
10.13.10.179, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0,
/127.0.0.1:0, hdn6.mstorm.com/10.13.10.179:0], discPort=0, order=44,
intOrder=0, lastExchangeTime=1527651620413, loc=true,
ver=2.4.0#20180305-sha1:aa342270, isClient=true], done=false]

Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.

Still waiting for initial partition map exchange
[fut=GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=4d885cfd-45ed-43a2-8088-f35c9469797f, addrs=

Server-side logs

Possible starvation in striped pool. Thread name: sys-stripe-0-#1 Queue:
[Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtTxPrepareResponse
[nearEvicted=null, futId=869dd4ca361-fe7e167d-4d80-4f57-b004-13359a9f2c11,
miniId=1, super=GridDistributedTxPrepareResponse [txState=null, part=-1,
err=null, super=GridDistributedBaseMessage [ver=GridCacheVersion
[topVer=139084030, order=1527604094903, nodeOrder=1], committedVers=null,
rolledbackVers=null, cnt=0, super=GridCacheIdMessage [cacheId=0]],
Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
ordered=false, timeout=0, skipOnTimeout=false,
msg=GridDhtAtomicSingleUpdateRequest [key=KeyCacheObjectImpl [part=984,
val=null, hasValBytes=true], val=BinaryObjectImpl [arr= true, ctx=false,
start=0], prevVal=null, super=GridDhtAtomicAbstractUpdateRequest
[onRes=false, nearNodeId=null, nearFutId=0, flags=,
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$DeferredUpdateTimeout@2735c674,
Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtTxPrepareRequest
[nearNodeId=628e3078-17fd-4e49-b9ae-ad94ad97a2f1,
futId=6576e4ca361-6e7cdac2-d5a3-4624-9ad3-b93f25546cc3, miniId=1,
topVer=AffinityTopologyVersion [topVer=20, minorTopVer=0],
invalidateNearEntries={}, nearWrites=null, owned=null,
nearXidVer=GridCacheVersion [topVer=139084030, order=1527604094933,
nodeOrder=2], subjId=628e3078-17fd-4e49-b9ae-ad94ad97a2f1, taskNameHash=0,
preloadKeys=null, super=GridDistributedTxPrepareRequest [threadId=86,
concurrency=OPTIMISTIC, isolation=READ_COMMITTED, writeVer=GridCacheVersion
[topVer=139084030, order=1527604094935, nodeOrder=2], timeout=0,
reads=null, writes=[IgniteTxEntry [key=BinaryObjectImpl [arr= true,
ctx=false, start=0], cacheId=-1755241537, txKey=null, val=[op=UPDATE,
val=BinaryObjectImpl [arr= true, ctx=false, start=0]], prevVal=[op=NOOP,
val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=null, ttl=-1,
conflictExpireTime=-1, conflictVer=null, explicitVer=null, dhtVer=null,
filters=null, filtersPassed=false, filtersSet=false, entry=null,
prepared=0, locked=false, nodeId=null, locMapped=false, expiryPlc=null,
transferExpiryPlc=false, flags=0, partUpdateCntr=0, serReadVer=null,
xidVer=null]], dhtVers=null, txSize=0, plc=2, txState=null,
flags=onePhase|last, super=GridDistributedBaseMessage [ver=GridCacheVersion
[topVer=139084030, order=1527604094933, nodeOrder=2], committedVers=null,
rolledbackVers=null, cnt=0, super=GridCacheIdMessage [cacheId=0]],
Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
ordered=false, timeout=0, skipOnTimeout=false,
msg=GridDhtAtomicDeferredUpdateResponse [futIds=GridLongList [idx=2,
arr=[65774,65775], 

Re: The thread which is inserting data into Ignite is hung

2018-06-03 Thread praveeng
Hi All,

We identified and fixed the following issues; since then we haven't seen the
problem again.
1. A cursor was not closed after running a query.
2. backups was set to 2147483646; we changed it to 1 or 2. [I am not sure
whether this really causes any issue or not.]
3. TCP port 9090 was open from client to server but not from server to client.
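The first item (an unclosed query cursor) is worth illustrating: Ignite's QueryCursor implements AutoCloseable, so wrapping it in try-with-resources guarantees server-side resources are released even if iteration throws. The sketch below uses a minimal stand-in class (CloseableCursor is a hypothetical stand-in, not the real Ignite API) purely to show the pattern:

```java
import java.util.Iterator;
import java.util.List;

public class CursorDemo {
    // Stand-in for Ignite's QueryCursor: iterable and closeable.
    static class CloseableCursor implements AutoCloseable, Iterable<String> {
        private final List<String> rows;
        boolean closed = false;
        CloseableCursor(List<String> rows) { this.rows = rows; }
        @Override public Iterator<String> iterator() { return rows.iterator(); }
        // In the real API, close() releases server-side query resources.
        @Override public void close() { closed = true; }
    }

    // try-with-resources guarantees close() runs, even on exception.
    static int consume(CloseableCursor cur) {
        int n = 0;
        try (CloseableCursor c = cur) {
            for (String r : c) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        CloseableCursor cur = new CloseableCursor(List.of("a", "b"));
        System.out.println(consume(cur) + " rows, closed=" + cur.closed);
        // prints "2 rows, closed=true"
    }
}
```

Iterating a cursor without closing it (or abandoning it mid-iteration) leaks the resources it holds on the server, which can accumulate into exactly the kind of hang described above.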

Thanks,
Praveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Session.removeAttribute is not working as expected

2018-06-03 Thread Roman Guseinov
Hi,

1. Could you please clarify what exactly doesn't work as expected? Does the
session.attributeNames() result include removed attributes?

2. JAVA_HOME=/opt/java/jdk-10.0.1 - I'm not sure that Apache Ignite has been
tested on Java 10. Could you try to reproduce the issue on JDK 8?

3. Could you share a reproducer, e.g. a simple project on GitHub?

Best Regards,
Roman





Re: Agent not able to connect

2018-06-03 Thread Roman Guseinov
Hi Humphrey, 

Currently, the front end is included in the back-end container as static files
(HTML, JS). This way we don't have to open two ports; port 80 is enough.
Port 3001 isn't used now.

Regarding documentation, we don't have any for the web agent Docker image
because it isn't public yet. I hope it will be published soon.

According to your logs, the agent is now able to connect to the web console and
to the Ignite node as well.

Best Regards,
Roman





Ignite client mvn artifact's purpose?

2018-06-03 Thread mlekshma
Hi Folks,

I notice there is a Maven artifact with the artifact id "ignite-clients". What
is its purpose?

https://mvnrepository.com/artifact/org.apache.ignite/ignite-clients/2.4.0
  

Thanks
Muthu





Affinity colocation and sql queries

2018-06-03 Thread Stéphane Gayet
Hi All,


I'm not sure I have fully understood Ignite affinity colocation. I have several
questions.


1/ Do SQL queries benefit from affinity colocation?


2/ The use case is an SQL query like:

SELECT SUBSET1.KEY AS SET1_KEY, SUBSET2.KEY AS SET2_KEY

FROM WHOLESET AS SUBSET1, WHOLESET AS SUBSET2

WHERE SUBSET1.KEY = 'key1' AND SUBSET2.KEY = 'key2' AND (some other conditions)


I know that items with keys "key1" and "key2" will be joined together almost
all the time, so I plan to define affinity keys as follows:

- items with "key1" and "key2" => affinity key 1

- items with "key3" and "key4" => another affinity key

and so on...


Does this work, and will it speed up the query?
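To see why colocation can help here: Ignite assigns each key to a partition by hashing its affinity key, so entries sharing an affinity key land in the same partition and therefore on the same node, which lets a join over them run locally instead of crossing the network. The sketch below is a simplified stand-alone model of that mapping (Ignite's real RendezvousAffinityFunction is more elaborate; 1024 is Ignite's default partition count):

```java
import java.util.Map;

public class AffinityDemo {
    static final int PARTITIONS = 1024; // Ignite's default partition count

    // Simplified hash-based partition assignment, in the spirit of
    // Ignite's affinity function. The mask keeps the result non-negative.
    static int partition(Object affinityKey) {
        return (affinityKey.hashCode() & Integer.MAX_VALUE) % PARTITIONS;
    }

    public static void main(String[] args) {
        // As in the question: key1/key2 share affinity key "aff1",
        // key3/key4 share "aff2", so each pair maps to one partition.
        Map<String, String> affinityKeyOf = Map.of(
            "key1", "aff1", "key2", "aff1",
            "key3", "aff2", "key4", "aff2");
        System.out.println(partition(affinityKeyOf.get("key1"))
            == partition(affinityKeyOf.get("key2")));
        // prints "true": the pair is guaranteed to be colocated
    }
}
```

On question 3, a note under the same caveat: in the Java API the affinity field is declared with @AffinityKeyMapped, and whether it also needs to be queryable or indexed depends on whether the SQL actually filters on that field, not on its role as an affinity key.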


3/ If I define an affinity key on an object, should the field also be marked as
[SqlQueryField] even though it will not be part of the SQL query? If yes,
should it be indexed?

4/ Or does this not work unless I split each subset into different caches and
define the affinity keys as above:
cache1 = subset items with key "key1"
cache2 = subset items with key "key2"
...


5/ Or is there no way to achieve this at all?


Regards,

Stéphane Gayet