Re: Data Partition and JDBC Connection Host

2019-07-16 Thread Anton Kurbanov
Hi,

You may refer to the following doc pages to understand how it works currently:
https://apacheignite.readme.io/docs/cache-modes
https://apacheignite.readme.io/docs/affinity-collocation

The affinity function uses the cache entry key to determine which node a
write will be mapped to. By default this is done by the
RendezvousAffinityFunction shipped with Ignite; see the links below.
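To illustrate the idea (a stdlib-only sketch, not Ignite's actual code path): the default function derives the partition roughly as abs(key.hashCode()) % partitions, so a given key always lands on the same partition no matter which node the JDBC connection points at:

```java
// Sketch of how a partition is derived from a key, in the spirit of
// RendezvousAffinityFunction.partition(Object key). Class and constant
// names here are illustrative, not Ignite's.
public class PartitionSketch {
    static final int DFLT_PARTITIONS = 1024; // Ignite's default partition count

    // Maps a key to a partition in [0, parts).
    static int partition(Object key, int parts) {
        int h = key.hashCode() % parts;
        return h < 0 ? -h : h; // "safe abs" to handle negative hash codes
    }

    public static void main(String[] args) {
        // The mapping is deterministic: the same key yields the same
        // partition on every node, so writes are distributed by key,
        // not by the node the client happens to be connected to.
        System.out.println(partition(42, DFLT_PARTITIONS));
        System.out.println(partition("someKey", DFLT_PARTITIONS));
    }
}
```

The partition is then mapped to a primary node for the current topology, which is the second half of what the affinity function does.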

What kind of exception do you see, and are you able to provide the logs? Is
it a Java heap OutOfMemoryError or an IgniteOutOfMemoryException? What was
the data distribution at the moment the issue happened?

Javadoc reference:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/AffinityFunction.html#partition-java.lang.Object-
Implementation reference:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunction.java#L491

Fri, 12 Jul 2019 at 05:59, shicheng31...@gmail.com <shicheng31...@gmail.com>:

> Hi:
> Ignite does not use a master-slave architecture, so when we use SQL we
> can connect to any node. If we write data, is the data partitioned
> according to certain rules, or is it just written to the node specified in
> the JDBC URL?
>
> I have done a test. When I inserted so many records that the node I was
> connected to could not bear the load, an exception was thrown on that node
> because of OOM, while the other nodes were all fine. Does this show that
> there was no partitioning?
>
>
> --
> shicheng31...@gmail.com
>


Re: Node not joined to the cluster - joining node doesn't have encryption data

2019-07-16 Thread Anton Kurbanov
Hi shahidv,

There are definite signs of network issues between the cluster nodes. To
find where the issue resides, could you please post the full logs from all
server nodes?

>Failed to
send message to next node [msg=TcpDiscoveryNodeAddedMessage
[node=TcpDiscoveryNode [id=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82,
addrs=[10.174.92.75, 127.0.0.1, 192.168.0.27],
sockAddrs=[/10.174.92.75:47500, /127.0.0.1:47500, /192.168.0.27:47500]

>[15:54:48,234][WARNING][disco-event-worker-#42][GridDiscoveryManager] Node
FAILED: TcpDiscoveryNode [id=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82,
addrs=[10.174.92.75, 127.0.0.1, 192.168.0.27],
sockAddrs=[/10.174.92.75:47500, /127.0.0.1:47500, /192.168.0.27:47500],
discPort=47500, order=4, intOrder=3, lastExchangeTime=1562754278190,
loc=false, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]

Do you have all the ports mentioned in the logs open in all directions? Are
you able to ping all the involved addresses?
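As a quick way to verify connectivity (a stdlib-only sketch; 47500 is the discovery port from the log excerpt, and 47100 is Ignite's default communication port), each node should be able to open a TCP connection to every other node's addresses:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Example: probe the discovery port locally. Run the equivalent
        // check from each node toward every address it must reach
        // (e.g. 10.174.92.75 and 192.168.0.27 from the log above).
        System.out.println("127.0.0.1:47500 reachable: "
            + canConnect("127.0.0.1", 47500, 1000));
    }
}
```

A plain ping is not sufficient here, since ICMP may be allowed while the discovery/communication TCP ports are filtered.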

Wed, 10 Jul 2019 at 13:31, shahidv :

> The node seems to have joined and then failed; could anyone please help?
>
> [15:54:48,224][WARNING][tcp-disco-msg-worker-#2][TcpDiscoverySpi] Failed to
> send message to next node [msg=TcpDiscoveryNodeAddedMessage
> [node=TcpDiscoveryNode [id=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82,
> addrs=[10.174.92.75, 127.0.0.1, 192.168.0.27],
> sockAddrs=[/10.174.92.75:47500, /127.0.0.1:47500, /192.168.0.27:47500],
> discPort=47500, order=0, intOrder=3, lastExchangeTime=1562754278190,
> loc=false, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false],
> dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataPacket@4f27e93,
> discardMsgId=null, discardCustomMsgId=null, top=null, clientTop=null,
> gridStartTime=1562753684554, super=TcpDiscoveryAbstractMessage
> [sndNodeId=null, id=8f77e5bdb61-2e7c59eb-7ed7-4f8c-8497-1e29b3a3c2a0,
> verifierNodeId=2e7c59eb-7ed7-4f8c-8497-1e29b3a3c2a0, topVer=0,
> pendingIdx=0,
> failedNodes=null, isClient=false]], next=TcpDiscoveryNode
> [id=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82, addrs=[10.174.92.75, 127.0.0.1,
> 192.168.0.27], sockAddrs=[/10.174.92.75:47500, /127.0.0.1:47500,
> /192.168.0.27:47500], discPort=47500, order=0, intOrder=3,
> lastExchangeTime=1562754278190, loc=false,
> ver=2.7.5#20190603-sha1:be4f2a15,
> isClient=false], errMsg=Failed to send message to next node
> [msg=TcpDiscoveryNodeAddedMessage [node=TcpDiscoveryNode
> [id=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82, addrs=[10.174.92.75, 127.0.0.1,
> 192.168.0.27], sockAddrs=[/10.174.92.75:47500, /127.0.0.1:47500,
> /192.168.0.27:47500], discPort=47500, order=0, intOrder=3,
> lastExchangeTime=1562754278190, loc=false,
> ver=2.7.5#20190603-sha1:be4f2a15,
> isClient=false],
> dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataPacket@4f27e93,
> discardMsgId=null, discardCustomMsgId=null, top=null, clientTop=null,
> gridStartTime=1562753684554, super=TcpDiscoveryAbstractMessage
> [sndNodeId=null, id=8f77e5bdb61-2e7c59eb-7ed7-4f8c-8497-1e29b3a3c2a0,
> verifierNodeId=2e7c59eb-7ed7-4f8c-8497-1e29b3a3c2a0, topVer=0,
> pendingIdx=0,
> failedNodes=null, isClient=false]], next=ClusterNode
> [id=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82, order=0, addr=[10.174.92.75,
> 127.0.0.1, 192.168.0.27], daemon=false]]]
> [15:54:48,227][INFO][disco-event-worker-#42][GridDiscoveryManager] Added
> new
> node to topology: TcpDiscoveryNode
> [id=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82,
> addrs=[10.174.92.75, 127.0.0.1, 192.168.0.27],
> sockAddrs=[/10.174.92.75:47500, /127.0.0.1:47500, /192.168.0.27:47500],
> discPort=47500, order=4, intOrder=3, lastExchangeTime=1562754278190,
> loc=false, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]
> [15:54:48,232][INFO][disco-event-worker-#42][GridDiscoveryManager] Topology
> snapshot [ver=4, locNode=2e7c59eb, servers=2, clients=0, state=ACTIVE,
> CPUs=16, offheap=8.0GB, heap=2.0GB]
> [15:54:48,232][INFO][disco-event-worker-#42][GridDiscoveryManager]   ^--
> Baseline [id=0, size=1, online=1, offline=0]
> [15:54:48,234][INFO][exchange-worker-#43][time] Started exchange init
> [topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0],
> mvccCrd=MvccCoordinator [nodeId=2e7c59eb-7ed7-4f8c-8497-1e29b3a3c2a0,
> crdVer=1562753684555, topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=0]], mvccCrdChange=false, crd=true, evt=NODE_JOINED,
> evtNode=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82, customEvt=null,
> allowMerge=true]
> [15:54:48,234][WARNING][disco-event-worker-#42][GridDiscoveryManager] Node
> FAILED: TcpDiscoveryNode [id=bf63af3c-348f-4c8f-a9a9-83f1ccd08a82,
> addrs=[10.174.92.75, 127.0.0.1, 192.168.0.27],
> sockAddrs=[/10.174.92.75:47500, /127.0.0.1:47500, /192.168.0.27:47500],
> discPort=47500, order=4, intOrder=3, lastExchangeTime=1562754278190,
> loc=false, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]
> [15:54:48,236][INFO][exchange-worker-#43][GridDhtPartitionsExchangeFuture]
> Finished waiting for partition release future
> [topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], waitTime=0ms,
> futInfo=NA, mode=DISTRIBUTED]
> [15:54:48,238][INFO][exchange-wo

Re: Question on REPLICATED Cache

2019-07-16 Thread Anton Kurbanov
Hi Sankar,

In a replicated cache, each partition has its primary copy on one node and
backup copies on all other nodes in the cluster. Could you post a bit more
detail on this? Do you have any other configuration that may affect this,
such as node filters? Are you able to check the partition distribution for
the particular cache you are having issues with? How many server nodes do
you have in your topology?

Do you see scans on other nodes returning empty results, or are all of them
just returning subsets of the data?
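For reference, a minimal replicated cache declaration (a sketch; the cache name "myCache" is taken from your message, everything else is illustrative) looks like this in the Spring XML configuration:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- REPLICATED: every server node keeps a full copy of the data. -->
    <property name="cacheMode" value="REPLICATED"/>
</bean>
```

If your configuration already looks like this and one node still holds all the data, a node filter or a single-node baseline would be the next things to check.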

Wed, 10 Jul 2019 at 10:03, Sankar Ramiah :

> I have three server nodes in my cluster. I configured a cache to be in
> REPLICATED mode.
>
> However, when I try to read the content of myCache through the web
> console (Scan on Selected Node), I see the content only on one node. Isn't
> it supposed to be available on all three server nodes in REPLICATED mode?
> --
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>


Re: Ignite Performance

2019-07-16 Thread Anton Kurbanov
Hi shahidv,

Could you please post the details of your cache configurations, the shape
of your queries, and the activation logs?

It's not easy to give recommendations blindly, as there is usually no
silver bullet for SQL query execution in a distributed environment. You
have to work on each specific query to mitigate possible bottlenecks: make
queries as local as possible using data collocation, use precalculated data
where possible, check the join order (larger tables should be joined to
smaller ones), and perhaps make some small tables replicated. There are
many things worth considering, but the details of your SQL tables and
queries are necessary for that.
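As a sketch of the collocation idea (the type and field names below are hypothetical, not from your setup), an affinity key can be declared so that, for example, orders land on the same node as their customer and joins between them stay local:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="cacheKeyConfiguration">
        <list>
            <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
                <!-- All OrderKey instances with the same customerId map to
                     the same partition, so Order-Customer joins can run
                     locally instead of shuffling rows between nodes. -->
                <constructor-arg value="com.example.OrderKey"/>
                <constructor-arg value="customerId"/>
            </bean>
        </list>
    </property>
</bean>
```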

Here are a couple of things I'd like to know besides the previously asked ones:

-- What was the server node doing while starting up? Was it loading
storage pages, rebuilding indexes, or rolling over WAL files (have you by
any chance removed the wal/archive folder)?
-- Do you have any subqueries or GROUP BY statements in your queries?
-- What are the conditions for the OOM? Are you able to load the whole
data set?
-- As for the data size, what is the cache entry structure, and what is the
median/average record size? What is the estimated size of the raw data
being inserted into Ignite? Do you have any custom indexes, and if so, how
many and on which columns?
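On the out-of-memory side, a common cause is an unbounded or oversized data region. A hedged example (the region name and the 8 GB size are illustrative; tune to your host) of capping the default region and enabling native persistence so overflow goes to disk rather than failing:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="Default_Region"/>
                    <!-- Cap off-heap usage at 8 GB; with persistence
                         enabled, data beyond this limit is kept on disk
                         instead of triggering IgniteOutOfMemoryException. -->
                    <property name="maxSize" value="#{8L * 1024 * 1024 * 1024}"/>
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```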

Tue, 9 Jul 2019 at 16:30, Andrey Dolmatov :

> Hi,
> *-when restarted node, It took 6 hrs to activate again, *
> If you activated the cluster on startup, before loading data, then after
> a restart the cluster is active.
>
> *-queries are very slower, *
> It depends. What SQL queries do you have? JOINs? Aggregations?
>
> *-out of memory issue when loading data. *
> Show your CacheConfiguration
>
> * -huge size for data.*
> If you enable QueryEntity for SQL support, Ignite consumes much more memory.
>
>
> *Best Regards, Andrey Dolmatov*
>
> Tue, 9 Jul 2019 at 16:10, shahidv :
>
>> Hi,
>>
>> We are looking at new database technologies because of some performance
>> issues with our existing one.
>> So we tried Ignite with a 1-node cluster (28 GB RAM) and loaded 90
>> million records into the cache.
>> But we faced several issues:
>> - when the node restarted, it took 6 hrs to activate again,
>> - queries are very slow,
>> - out-of-memory issues when loading data,
>> - a huge on-disk data size.
>>
>> We are still at the learning stage with Ignite, and these may be
>> configuration issues.
>> We currently have a large volume of data and heavy aggregation
>> processing.
>>
>> Could anyone tell me whether Ignite will be a solution for performance,
>> and what the issue with my current setup is? My config file follows:
>> default-config.xml
>> <http://apache-ignite-users.70518.x6.nabble.com/file/t2499/default-config.xml>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: [IgniteKernal%ServerNode] Exception during start processors, node will be stopped and close connections java.lang.NullPointerException: null......cache.GridCacheUtils.affinityNode

2019-07-16 Thread jothipandiyan
Things that need to be clarified, listed below, if the closed ticket is
still reproducible:
-- On which Ignite version was the persisted data saved on disk?
   With the 2.8 nightly build version.
-- Do I correctly understand the use-case: start server and client (on some
version? which one?), load data, and restart the server?
   Yes, with the 2.8 nightly build version.
-- What are the specifics of the custom affinity?
   No custom affinity.
-- Are you using cache stores?
   Yes.


We are in a production environment; to solve the above issue we are trying
to run the nightly version.

On Tue, Jul 16, 2019, 10:19 PM Anton Kurbanov  wrote:

> Hi siva,
>
> It's not completely clear, could you provide a bit more detail on this?
>
> Things that need to be clarified are listed below, if the closed ticket
> is still reproducible:
> -- On which Ignite version persisted data was saved on disk?
> -- Do I correctly understand the use-case: start server and client (on
> some version? which one?), load data and restart server.
> -- What are the specifics of custom affinity?
> -- Are you using cache stores?
>
> The primary goal is actually to understand whether this is a reproduction
> of a fixed bug or some other issue.
>
> Mon, 15 Jul 2019 at 18:33, siva :
>
>> Hi,
>> I have a .NET Core (2.2.6) Ignite client and server application. I am now
>> using an Apache Ignite nightly build (apache-ignite-2.8.0-SNAPSHOT-bin):
>> I took the master code, built it, and added Apache.Ignite.Core.dll into
>> my application project, because in production the server takes more and
>> more time to load data on every start/stop. But I am not able to start
>> the server with the on-disk cache data.
>>
>> Once I had started the server and client and loaded data, after a stop,
>> starting the server with the existing disk data produced the following
>> error in the log.
>>
>> So I am not able to start the server with the disk data.
>>
>> Here is the issue, but it is already closed and marked Resolved.
>>
>> GitHub 
>>
>> Configuration File
>> -
>> server-config.xml
>> <http://apache-ignite-users.70518.x6.nabble.com/file/t1379/server-config.xml>
>>
>>
>> Here is the log file: ignite.log
>>
>> So is this issue still there or not? Can I use it now, or do I still have
>> to wait for the release, or is there any other way? Is there any
>> configuration I need to set to start the server with the existing data?
>>
>> please give suggestion to me.
>>
>> Thanks.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: [IgniteKernal%ServerNode] Exception during start processors, node will be stopped and close connections java.lang.NullPointerException: null......cache.GridCacheUtils.affinityNode

2019-07-16 Thread Anton Kurbanov
Hi siva,

It's not completely clear, could you provide a bit more detail on this?

Things that need to be clarified are listed below, if the closed ticket is
still reproducible:
-- On which Ignite version persisted data was saved on disk?
-- Do I correctly understand the use-case: start server and client (on some
version? which one?), load data and restart server.
-- What are the specifics of custom affinity?
-- Are you using cache stores?

The primary goal is actually to understand whether this is a reproduction
of a fixed bug or some other issue.

Mon, 15 Jul 2019 at 18:33, siva :

> Hi,
> I have a .NET Core (2.2.6) Ignite client and server application. I am now
> using an Apache Ignite nightly build (apache-ignite-2.8.0-SNAPSHOT-bin):
> I took the master code, built it, and added Apache.Ignite.Core.dll into
> my application project, because in production the server takes more and
> more time to load data on every start/stop. But I am not able to start
> the server with the on-disk cache data.
>
> Once I had started the server and client and loaded data, after a stop,
> starting the server with the existing disk data produced the following
> error in the log.
>
> So I am not able to start the server with the disk data.
>
> Here is the issue, but it is already closed and marked Resolved.
>
> GitHub 
>
> Configuration File
> -
> server-config.xml
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1379/server-config.xml>
>
>
> Here is the log file: ignite.log
>
> So is this issue still there or not? Can I use it now, or do I still have
> to wait for the release, or is there any other way? Is there any
> configuration I need to set to start the server with the existing data?
>
> please give suggestion to me.
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Declaring a server-side CacheEntryListener in Ignite config

2019-07-16 Thread Jean-Philippe Laroche
I have seen many examples of how to declare a CacheEntryListener
programmatically from a client application, but is there a way to register
a CacheEntryListener by configuration, so that it is active on
node/cluster startup?


Re: Ignite DB Issues

2019-07-16 Thread Andrei Aleksandrov

Hi,

There are not enough details in your message.

1. I had 10 records in a CSV and stored them in Ignite DB; the ten records
were stored along with the creation of a new table. Now I have removed the
drop-table and table-creation code from my Java code and re-run it. It is
not updating the records in the Ignite DB table.


Can you share your Java code and cluster configuration? How do you try to
update the tables in Ignite?


2. Why does Ignite DB always show four columns of a table?

I guess you are asking about a SQL SELECT on the table. It will show only
the fields that you set in the CREATE TABLE command or in the QueryEntity
of your cache configuration.


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/QueryEntity.html
https://apacheignite-sql.readme.io/docs/create-table
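As an illustration of the QueryEntity approach (the cache name, types, and field names below are hypothetical), only the fields listed become visible SQL columns:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.Long"/>
                <property name="valueType" value="Person"/>
                <!-- Only the fields declared here appear as SQL columns. -->
                <property name="fields">
                    <map>
                        <entry key="name" value="java.lang.String"/>
                        <entry key="age" value="java.lang.Integer"/>
                    </map>
                </property>
            </bean>
        </list>
    </property>
</bean>
```

Any value-object fields not declared here are still stored, but are invisible to SQL, which would explain seeing a fixed set of columns.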

BR,
Andrei

http://apache-ignite-users.70518.x6.nabble.com/Ignite-DB-Issues-td28836.html

On 2019/07/15 01:44:45, anji m  wrote:
> Hi Team,
>
> 1. I have 10 records of CSV and stored in Ignite DB then ten records will
> be stored along with new table creation. Now I have removed drop table
> code from my java code and removed table creation code and running the
> java code. It is not updating in Ignite DB table records.
>
> 2. Why Ignite DB always showing four columns of table?
> --
> *Thanks&Regards*
> *Anji M*
> *M:+1 (267) 916 2969*