Re: how does Ignite distribute a SQL query to all nodes when using REPLICATED mode

2018-10-31 Thread kcheng.mvp
I found the document here


https://apacheignite-sql.readme.io/docs/select


*SELECT queries can be executed against both replicated and partitioned
data. When executed against fully replicated data, Ignite will send the query
to a single cluster node and run it over the local data there. On the
other hand, if a query is executed over partitioned data, then the execution
flow will be the following: The query will be parsed and split into multiple
map queries and a single reduce query. All the map queries are executed on
all the nodes where the required data resides. All the nodes provide result sets
of local execution to the query initiator (reducer) that, in turn, will
accomplish the reduce phase by properly merging the provided result sets.*


It seems that for replicated data the query always executes on a single node.
In that case, is there a way to speed up the query by letting all the nodes
run in parallel?
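One approach discussed on this list is to split the work yourself: broadcast a closure to every node and have each node run a *local* query (e.g. SqlFieldsQuery.setLocal(true)) over a disjoint slice of the key space, then merge the partial results on the caller. A pure-JDK sketch of that split-and-merge idea (the data set, the node count, and the modulo-based slicing are illustrative stand-ins, not Ignite API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class SplitAndMerge {
    public static void main(String[] args) throws Exception {
        // Stand-in for the fully replicated data set (every "node" sees all of it).
        List<Integer> data =
            IntStream.rangeClosed(1, 100).boxed().collect(Collectors.toList());

        int nodes = 3; // hypothetical cluster size
        ExecutorService pool = Executors.newFixedThreadPool(nodes);

        // Each "node" processes only its slice of the key range, mimicking
        // broadcasting a local query with a disjoint WHERE clause per node.
        List<Future<Long>> parts = new ArrayList<>();
        for (int n = 0; n < nodes; n++) {
            final int slice = n;
            parts.add(pool.submit(() -> data.stream()
                .filter(k -> k % nodes == slice)
                .mapToLong(Integer::longValue)
                .sum()));
        }

        // Merge the per-node partial results on the caller.
        long total = 0;
        for (Future<Long> f : parts)
            total += f.get();
        pool.shutdown();

        System.out.println(total); // 5050
    }
}
```

Whether this beats the single-node plan depends on query cost versus broadcast overhead, so it is worth benchmarking on real data.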



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


how does Ignite distribute a SQL query to all nodes when using REPLICATED mode

2018-10-31 Thread kcheng.mvp
Suppose right now the cluster has 3 nodes, and all caches are in REPLICATED
mode, which means every node has a full copy of the data.

In this case, how does Ignite distribute the SQL query to all nodes to speed
up the query?







Does ignite provide a Comparator for Sort?

2018-10-31 Thread Ignite Enthusiast
I am new to Apache Ignite. I have used Hazelcast extensively, and one of the
features I really liked about it is the Comparator it provides on cache
entries.
Does Apache Ignite have one readily available? If not, is it in the works?
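As far as I know, Ignite has no direct server-side equivalent of Hazelcast's entry Comparator; ordering is usually expressed as SQL ORDER BY, or done client-side with a plain JDK Comparator once the entries are fetched (e.g. via a ScanQuery). A JDK-only sketch of the client-side option (the cache contents here are made up):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class EntrySortSketch {
    public static void main(String[] args) {
        // Hypothetical cache contents pulled to the client, e.g. via a ScanQuery.
        Map<String, Integer> entries = new HashMap<>();
        entries.put("a", 3);
        entries.put("b", 1);
        entries.put("c", 2);

        // Plain JDK Comparator over the fetched entries; Map.Entry provides
        // ready-made comparators by key or by value.
        List<Map.Entry<String, Integer>> sorted = entries.entrySet().stream()
            .sorted(Map.Entry.comparingByValue())
            .collect(Collectors.toList());

        System.out.println(sorted.get(0).getKey()); // b
    }
}
```

For large caches, pushing the ordering into SQL (ORDER BY with an index) avoids transferring all entries to the client first.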



Re: Comparison between java 8 streams functionality and Apache Ignite

2018-10-31 Thread gsaxena888
I've been thinking about this some more: I think the Ignite solution is
nearly perfect, *if* the reduce operation runs within every node (so that,
for example, the results of ~96 threads on one Google Compute Engine node are
reduced/summarized to a single value) and then either a single final
reduction occurs on one node (e.g. a leader node?) or, if we want to get
fancy, the last reduction can occur in parallel across the nodes (but it's
not clear whether doing a parallel reduction across all the nodes is
performant in the majority of real-life cases, given the overhead of doing
so). So the question is: does the "reduce" operation run first within each
node? If not, wouldn't it make sense to do so (to minimize transferring data
to a single node for one giant reduction, and to also make good use of all
the nodes' cores)? And if not, is there an alternative way of achieving this
in Ignite (i.e. what do developers do)?
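The two-stage shape being asked about (reduce within each node first, then a small final reduce at the initiator) can be sketched in plain JDK streams; the per-node partitions below are hypothetical:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TwoStageReduce {
    public static void main(String[] args) {
        // Hypothetical per-node partitions of the data.
        List<List<Integer>> nodePartitions = Arrays.asList(
            Arrays.asList(1, 2, 3),
            Arrays.asList(4, 5),
            Arrays.asList(6, 7, 8, 9));

        // Stage 1: each "node" reduces its own partition (in parallel here),
        // producing one small partial result per node.
        List<Integer> partials = nodePartitions.parallelStream()
            .map(part -> part.stream().mapToInt(Integer::intValue).sum())
            .collect(Collectors.toList());

        // Stage 2: the initiator merges only the small list of partials,
        // so very little data crosses "node" boundaries.
        int total = partials.stream().mapToInt(Integer::intValue).sum();
        System.out.println(total); // 45
    }
}
```

The point of the shape is that stage 2 only ever sees one value per node, which is exactly the data-transfer saving the poster is after.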





Re: Unable to load more than 5g data through sqlline

2018-10-31 Thread wt
a better option would be to drop sqlline and write your own client that reads
the csv files and loads them into the database. This way you can have multiple
threads loading multiple files concurrently, and for each load you set up the
streamer parameters, including batch sizes and flush frequency.
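A minimal skeleton of such a client (the file names are made up, and loadFile is a placeholder for parsing one CSV and feeding it to an IgniteDataStreamer with its own batch size and flush frequency):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentCsvLoader {
    // Placeholder for parsing one CSV file and streaming its rows;
    // returns the number of rows "loaded" (stand-in work here).
    static int loadFile(String name) {
        return name.length();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical input files; one loading task per file.
        List<String> files = Arrays.asList("a.csv", "bb.csv", "ccc.csv");
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<Future<Integer>> results = new ArrayList<>();
        for (String f : files)
            results.add(pool.submit(() -> loadFile(f)));

        // Wait for all loads and total the row counts.
        int rows = 0;
        for (Future<Integer> r : results)
            rows += r.get();
        pool.shutdown();

        System.out.println(rows);
    }
}
```

In a real loader each task would own its streamer instance, so batch size and flush frequency can be tuned per file.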





Re: how to handle data region out of memory gracefully

2018-10-31 Thread wt
Thanks for this, but what concerns me is this: if I set up several data regions
and one of them happens to be a play area for data scientists, and they
hit the memory limit of that region, then the whole server goes down. In the
case of a cluster, all the nodes hosting that region will go down. I realise
that we should put checks and monitoring in place, but it sure would be nice
if the server just threw an exception to any client loading data and stayed
online. Maybe a feature to think about.
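For what it's worth, memory caps are configured per data region, so a sandbox region can at least be sized separately from the rest; a minimal Spring XML sketch (the region name and size are illustrative):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="dataRegionConfigurations">
                <list>
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="playArea"/>
                        <!-- illustrative 1 GB cap for the sandbox region -->
                        <property name="maxSize" value="#{1L * 1024 * 1024 * 1024}"/>
                    </bean>
                </list>
            </property>
        </bean>
    </property>
</bean>
```

Caches assigned to "playArea" are then bounded by that region's maxSize, which limits the blast radius even if hitting the cap is still not handled gracefully.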





Re: Comparison between java 8 streams functionality and Apache Ignite

2018-10-31 Thread gsaxena888
Ahh, I see. But still, even with the "map reduce" strategy, I *think* there
is only a SINGLE node which will do the reduction, right? As in, the
reduction will NOT occur in parallel across nodes, right? (In fact, it
sounds like the reduction won't even occur in parallel *within* a node,
right?) If correct, are there any plans to change this to be more like
Infinispan (see this article from 2014:
https://blog.infinispan.org/2014/02/mapreduce-parallel-execution.html -- I
think they've made improvements since then.) Also, could this single-reducer
explain why one user complained about slow group-by? (see here:
http://apache-ignite-users.70518.x6.nabble.com/Slow-Group-By-td20236.html ) 





IGNITE_EXCHANGE_HISTORY_SIZE value for Ignite client

2018-10-31 Thread Cristian Bordei
Hello,
We are using Ignite 2.6.0, and we noticed on our Java app configured as an
Ignite client that memory usage was growing too much (approx. 5 GB). We managed
to fix this by setting IGNITE_EXCHANGE_HISTORY_SIZE to a value lower than 1000.

My question is: what would the impact be if we set
IGNITE_EXCHANGE_HISTORY_SIZE to 0 on the Java client side?
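IGNITE_EXCHANGE_HISTORY_SIZE is a JVM system property, so on the client side it is passed as a JVM option; a sketch (the value 8 is purely illustrative, not a recommendation):

```shell
# Added to the Java client's JVM options (e.g. JVM_OPTS for ignite.sh, or the app launcher).
# Lower values keep fewer partition-map-exchange history entries in client memory.
JVM_OPTS="$JVM_OPTS -DIGNITE_EXCHANGE_HISTORY_SIZE=8"
```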
Our configuration is as follows:
- two Ignite servers in cluster mode, 128 GB RAM and 8-core CPU each
- 20 Java apps configured as Ignite clients:

IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
igniteConfiguration.setClientMode(true);

Our use case requires us to destroy all 400 caches every day and recreate
them again. Reusing the caches is not possible.

Thank you,
Cristi
Sent from Yahoo Mail on Android

destroy cache holding residual metadata in memory (2.7)

2018-10-31 Thread wt
I am testing code, and part of my tests involves adding/removing tables. In
one of the tests I add a table, then destroy it, and add it again with an
additional column. When I try to load the table I get a data type
mismatch referring to the previous version of the table.

In the work directory there is a folder for the original table, but it is
empty. Here is the error I get when trying to flush the loader. If I stop
and restart Ignite, then adding the table with the new column data type works,
so there is some residual metadata that isn't being cleaned up by the
client's destroyCache call.

Apache.Ignite.Core.Binary.BinaryObjectException
  HResult=0x80131500
  Message=Binary type has different field types
[typeName=Tables.csvCurrencyRates, fieldName=id, fieldTypeName1=UUID,
fieldTypeName2=String]
  Source=Apache.Ignite.Core
  StackTrace:
   at Apache.Ignite.Core.Impl.PlatformJniTarget.InStreamOutLong(Int32 type,
Action`1 writeAction)
   at Apache.Ignite.Core.Impl.PlatformTargetAdapter.DoOutOp(Int32 type,
Action`1 action)
   at Apache.Ignite.Core.Impl.Binary.Marshaller.FinishMarshal(BinaryWriter
writer)
   at Apache.Ignite.Core.Impl.PlatformJniTarget.InStreamOutLong(Int32 type,
Action`1 writeAction)
   at Apache.Ignite.Core.Impl.PlatformTargetAdapter.DoOutOp(Int32 type,
Action`1 action)
   at Apache.Ignite.Core.Impl.Datastream.DataStreamerImpl`2.Update(Action`1
action)
   at
Apache.Ignite.Core.Impl.Datastream.DataStreamerBatch`2.Send(DataStreamerImpl`2
ldr, Int32 plc)
   at
Apache.Ignite.Core.Impl.Datastream.DataStreamerImpl`2.Flush0(DataStreamerBatch`2
curBatch, Boolean wait, Int32 plc)
   at Apache.Ignite.Core.Impl.Datastream.DataStreamerImpl`2.Flush()
   at ClusterTool.classes.DbProviderCSV.StreamCsvData(List`1 headers, String
tablename, ICsvLine[] lines, Object _class, Type type, IIgnite igniteclient,
Boolean hashashid, CsvNewId csvid) in
C:\temp\IgniteTool\ClusterTool\classes\DbProviderCSV.cs:line 236
   at ClusterTool.classes.DbProviderCSV.LoadFromCsvFiles(String tablename)
in C:\temp\IgniteTool\ClusterTool\classes\DbProviderCSV.cs:line 104
   at ClusterTool.DataLoaderForm.<>c__DisplayClass7_0.b__0() in
C:\temp\IgniteTool\ClusterTool\DataLoaderForm.cs:line 94
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext
executionContext, ContextCallback callback, Object state, Boolean
preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext
executionContext, ContextCallback callback, Object state, Boolean
preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext
executionContext, ContextCallback callback, Object state)
   at System.Threading.ThreadHelper.ThreadStart()

Inner Exception 1:
JavaException: class org.apache.ignite.binary.BinaryObjectException: Binary
type has different field types [typeName=Tables.csvCurrencyRates,
fieldName=id, fieldTypeName1=UUID, fieldTypeName2=String]
at
org.apache.ignite.internal.binary.BinaryUtils.mergeMetadata(BinaryUtils.java:1047)
at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.addMeta(CacheObjectBinaryProcessorImpl.java:480)
at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$2.addMeta(CacheObjectBinaryProcessorImpl.java:207)
at
org.apache.ignite.internal.binary.BinaryContext.updateMetadata(BinaryContext.java:1332)
at
org.apache.ignite.internal.processors.platform.PlatformContextImpl.processMetadata(PlatformContextImpl.java:336)
at
org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutLong(PlatformBinaryProcessor.java:70)
at
org.apache.ignite.internal.processors.platform.PlatformAbstractTarget.processInStreamOutLong(PlatformAbstractTarget.java:87)
at
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutLong(PlatformTargetProxyImpl.java:67)







Re: Unable to load more than 5g data through sqlline

2018-10-31 Thread Павлухин Иван
Hi Debashis,

Is sqlline started on the same machine? Perhaps sqlline consumed all the
available memory and the system decided to kill Ignite. Could you split the
incoming data into relatively small chunks and try it out?

вт, 30 окт. 2018 г. в 23:07, debashissinha :

> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_005158.jpg>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_005224.jpg>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_004447.jpg>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_004532.jpg>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_004559.jpg>
>
>
> Hi All,
> I would request help from anyone who can assist me in a critical error.
> I am trying to benchmark ignite on single node without tcp discovery
> enabled
> with tpcds benchmark data.
> For this I am using customer table(image attached) and loading 5 gb of csv
> file through sqlline.
>
> The config( image attached) is for 20 gb of default data region with wal
> mode none and eviction mode is only lru . I have also enabled native
> persistence . Also in my ignite.sh script I am adding G1GC option in jvm
> opts.
>
> After almost 1.63 gb of data has been inserted (which corresponds to
> roughly 1200 rows of data), Ignite silently restarts, giving a "kill" error
> at ignite.sh line 181 with JVM opts. The error for this is attached.
>
> I am having the following config
> no of cpus 2
> heap memory 1 gb
> data region max size(off heap size) 20 gb.
> Cluster mode enabled.
>
> Can someone kindly advise me where I am going wrong?
>
> Thanks & Regards
> Debashis Sinha
>
>
>
>
>
>


-- 
Best regards,
Ivan Pavlukhin


Re: how to handle data region out of memory gracefully

2018-10-31 Thread Павлухин Иван
Hi Wayne,

You can see a message written to a console during Ignite startup with
a calculated amount of memory required for a server. It looks as follows:

Nodes started on local machine require more than 80% of physical RAM
what can lead to significant slowdown due to swapping (please decrease
JVM heap size, data region size or checkpoint buffer size)
[required=694MB, available=996MB]

As already said, you should make sure that your server does not require more
memory than is available. I believe it is the responsibility of the server
administrator to prevent memory exhaustion. I could also suggest configuring
enough swap space, plus monitoring that notifies the admin when the system
begins swapping.


пн, 29 окт. 2018 г. в 9:50, Ilya Kasnacheev :

> Hello!
>
> I'm afraid that Ignite is not usable currently after suffering Out Of
> Memory error. You should be careful to prevent that from happening.
>
> Currently there is no graceful way of dealing with it.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вс, 28 окт. 2018 г. в 11:58, wt :
>
>> in testing I managed to exceed a data region's space, and this region is
>> memory-only. When this happens an unhandled exception is thrown from the
>> underlying Ignite DLLs and the process crashes. How can I handle this
>> gracefully without losing the server?
>>
>>
>>
>>
>

-- 
Best regards,
Ivan Pavlukhin


spring XML configuration to connect ignite database using org.apache.ignite.IgniteJdbcThinDriver

2018-10-31 Thread Malashree
What is the Spring XML configuration to connect to an Ignite database using
org.apache.ignite.IgniteJdbcThinDriver?



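A minimal sketch of what such a configuration might look like, assuming a standard Spring DriverManagerDataSource (the bean id, host, and port are assumptions; 10800 is the thin driver's default port):

```xml
<bean id="igniteDataSource"
      class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <!-- Ignite thin JDBC driver; URL format: jdbc:ignite:thin://host[:port] -->
    <property name="driverClassName" value="org.apache.ignite.IgniteJdbcThinDriver"/>
    <property name="url" value="jdbc:ignite:thin://127.0.0.1:10800"/>
</bean>
```

The data source can then be injected into a JdbcTemplate or any other JDBC consumer like any ordinary database connection.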


Re: 1 server - many Clients scenario failing

2018-10-31 Thread kommrad homer
I think the "10.20.228.11:47100" is not a public IP 





Re: Comparison between java 8 streams functionality and Apache Ignite

2018-10-31 Thread Ilya Kasnacheev
Hello!

String::length will be run on all nodes, but Integer::intValue will be run
locally.

If you want it to be smarter than that, you could use MapReduce & ForkJoin:
https://apacheignite.readme.io/docs/compute-tasks

Regards,
-- 
Ilya Kasnacheev


вт, 30 окт. 2018 г. в 22:02, gsaxena888 :

> I'm new to Apache Ignite, but a long-time user of jdk8 streams. (And I've
> used Google Cloud Dataflow.) I'm trying to understand the example described
> in latest doc:
> https://apacheignite.readme.io/docs/distributed-closures#apply-methods
> 
> and comparing it to jdk8 stream functionality.
>
> Here's the code snippet from the Ignite doc:
> ```
> IgniteCompute compute  = ignite.compute();
>
> // Execute closure on all cluster nodes.
> Collection res = compute.apply(
> String::length,
> Arrays.asList("How many characters".split(" "))
> );
>
> // Add all the word lengths received from cluster nodes.
> int total = res.stream().mapToInt(Integer::intValue).sum();
> ```
>
> My question is: is the "res.stream.mapToInt()" *also* supposed to run on
> *all* nodes in the cluster, or does it run only on the local node? (The
> doc clearly specifies that "compute.apply" runs on all nodes, but is
> silent on the res.stream() portion.) I'm assuming that Ignite works
> similarly to Infinispan, which I *think* will run any streaming logic
> (even
> ones with custom collectors) on *all* nodes (see
>
> https://blog.infinispan.org/2018/01/improving-collect-for-distributed-java.html
> <
> https://blog.infinispan.org/2018/01/improving-collect-for-distributed-java.html>
>
> )
>
> (Also, assuming the res.stream.mapToInt... logic runs on all nodes, is it
> reasonable to assume that *within* each node it will run in parallel, and
> that the number of threads *within* each node is equal to the number of
> cores for that node?)
>
> Also, if nodes differ in number of cores (or in current utilization), does
> Ignite take that into account when distributing the work? If so, I'm assuming in
> the above example that it makes that decision *not* in the res.stream()
> part
> but instead in the *initial* "compute.apply" section, and that it then
> tries
> to maintain the data partitioning?
>
> Finally, assuming that res.stream.mapToInt() does run on all nodes, would
> my
> existing jdk8 custom collectors (that follow the jdk8 streaming api for
> collectors) also *automatically* work on all nodes or would I need to do
> some minor refactoring as implied by the Infinispan doc (by using
> Infinispan's "SerializableSupplier>." instead of just
> "Supplier<...>"? (In particular, is Ignite smart enough to first do the
> collection *within* a node, and then start collecting across nodes? And,
> although this is not that important for my needs, is Ignite even smart
> enough to do the collection in a "logical" order between nodes, ie collect
> first within a node, then collect between "close" nodes, then do the final
> collection between the furthest nodes (eg on different racks and then next
> even from different data centers)? (This way, we minimize data transfer
> etc?))
>
>
>
>
>
>
>


Re: 1 server - many Clients scenario failing

2018-10-31 Thread Ilya Kasnacheev
Hello!

Oct 31, 2018 10:05:14 AM org.apache.ignite.logger.java.JavaLogger warning
WARNING: Connect timed out (consider increasing 'failureDetectionTimeout'
configuration property) [addr=/10.20.228.11:47100,
failureDetectionTimeout=1]

Are you sure that port 47100 is not filtered in client -> server direction?

Regards,
-- 
Ilya Kasnacheev


ср, 31 окт. 2018 г. в 10:51, kommrad homer :

> Hey everyone
>
> I'm trying to implement the scenario below:
>
> One Node running on a VPS, Server Mode, Persistence Enabled
> Several Nodes running on other VPSs, Client Mode
>
> I will be using this for Bucket4j Rate Limiting implementation . We will
> use
> bucket4j for limiting the workload of some entity in our domain , not
> actually the request rates on the API itself.
>
> What I have done :
> I've started the Server Node successfully
> I've started another Node from my local machine , on Client Mode , calling
> "Ignition.start("myconfig")"
>
> After I call Ignition.start("myconfig"):
> I can see the topology change in the Server Node logs, from "servers:1
> clients:0" to "servers:1 clients:1".
> But the code on my local machine freezes at Ignition.start("myconfig");
> it won't get to the next command:
>
> """
> try(Ignite ignite = Ignition.start("myconfig")) {
>
> System.out.println("A");
> ...
> ...
> }
> """
>
> it cannot print the "A". But when I shut down the "Server Node", my
> "Client Node" starts to print error logs about not being able to connect to
> the Server Node.
>
> Below are my Server Node config and  logs:
>
> """
> *ServerConfig:*
>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xsi:schemaLocation="
>        http://www.springframework.org/schema/beans
>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     </bean>
> </beans>
>
> *ServerLog:*
>
> Ignite Command Line Startup, ver. 2.6.0#20180710-sha1:669feacc
> 2018 Copyright(C) Apache Software Foundation
>
> [07:04:34,607][INFO][main][IgniteKernal]
>
> >>>__  
> >>>   /  _/ ___/ |/ /  _/_  __/ __/
> >>>  _/ // (7 7// /  / / / _/
> >>> /___/\___/_/|_/___/ /_/ /___/
> >>>
> >>> ver. 2.6.0#20180710-sha1:669feacc
> >>> 2018 Copyright(C) Apache Software Foundation
> >>>
> >>> Ignite documentation: http://ignite.apache.org
>
> [07:04:34,610][INFO][main][IgniteKernal] Config URL:
> file:/root/apache-ignite-fabric-2.6.0-bin/config/yigit-config2.xml
> [07:04:34,647][INFO][main][IgniteKernal] IgniteConfiguration
> [igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, callbackPoolSize=8,
> stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, igfsPoolSize=2,
> dataStreamerPoolSize=8, utilityCachePoolSize=8,
> utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=8,
> igniteHome=/root/apache-ignite-fabric-2.6.0-bin,
> igniteWorkDir=/root/apache-ignite-fabric-2.6.0-bin/work,
> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
> nodeId=b0667881-b853-468e-99d1-41978fc1b10b,
> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@498d318c,
> marshLocJobs=false, daemon=false, p2pEnabled=true, netTimeout=5000,
> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
> marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=60,
> forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null],
> segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
> allResolversPassReq=true, segChkFreq=1, commSpi=TcpCommunicationSpi
> [connectGate=null, connPlc=null, enableForcibleNodeKill=false,
> enableTroubleshootingLog=false,
>
> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@32eff876
> ,
> locAddr=null, locHost=null, locPort=47100, locPortRange=100, shmemPort=-1,
> directBuf=true, directSndBuf=false, idleConnTimeout=60,
> connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
> sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=null,
> shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
> tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32,
> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
> boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
> ctxInitLatch=java.util.concurrent.CountDownLatch@8dbdac1[Count = 1],
> stopping=false,
>
> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@6e20b53a
> ],
> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@71809907,
> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@47af7f3d,
> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
> txCfg=org.apache.ignite.co

Re: how to do Hibernate configuration for the Apache Ignite database

2018-10-31 Thread Malashree
https://apacheignite-mix.readme.io/docs/hibernate-l2-cache

The link above is about the Hibernate L2 cache.

My Hibernate configuration using the Ignite database is given below:




 







<hibernate-configuration>
    <session-factory>
        <!-- connection settings omitted -->

        <mapping class="com.tech.entities.Login"/>
        <mapping class="com.tech.entities.Domain"/>
        <mapping class="com.tech.entities.Partner"/>

        <property name="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</property>
        <property name="show_sql">true</property>
        <property name="hbm2ddl.auto">update</property>
    </session-factory>
</hibernate-configuration>






Hibernate.dialect  i am using is org.hibernate.dialect.MySQL5Dialect , I
need the hibernate.dialect of apache ignite database to connect using
org.apache.ignite.IgniteJdbcThinDriver.
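There is no official Hibernate dialect shipped for Ignite; since Ignite 2.x SQL runs on the H2 engine, one hedged option sometimes suggested is to try the stock H2 dialect together with the thin driver. A sketch of the relevant hibernate.cfg.xml properties (the URL and the dialect choice are assumptions, not an officially supported setup):

```xml
<!-- Assumption: H2Dialect, because Ignite 2.x SQL runs on the H2 engine. -->
<property name="hibernate.dialect">org.hibernate.dialect.H2Dialect</property>
<property name="hibernate.connection.driver_class">org.apache.ignite.IgniteJdbcThinDriver</property>
<!-- URL format: jdbc:ignite:thin://host[:port]; 10800 is the default port. -->
<property name="hibernate.connection.url">jdbc:ignite:thin://127.0.0.1:10800</property>
```

Any DDL generation (hbm2ddl.auto) would need testing carefully, since Ignite supports only a subset of the SQL a full RDBMS dialect assumes.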








1 server - many Clients scenario failing

2018-10-31 Thread kommrad homer
Hey everyone

I'm trying to implement the scenario below:

One Node running on a VPS, Server Mode, Persistence Enabled
Several Nodes running on other VPSs, Client Mode

I will be using this for Bucket4j Rate Limiting implementation . We will use
bucket4j for limiting the workload of some entity in our domain , not
actually the request rates on the API itself.

What I have done :
I've started the Server Node successfully 
I've started another Node from my local machine , on Client Mode , calling
"Ignition.start("myconfig")"

After I call Ignition.start("myconfig"):
I can see the topology change in the Server Node logs, from "servers:1
clients:0" to "servers:1 clients:1".
But the code on my local machine freezes at Ignition.start("myconfig");
it won't get to the next command:

"""
try(Ignite ignite = Ignition.start("myconfig")) {

System.out.println("A");
...
...
}
"""

it cannot print the "A". But when I shut down the "Server Node", my
"Client Node" starts to print error logs about not being able to connect to
the Server Node.

Below are my Server Node config and  logs:

"""
*ServerConfig:*

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
    </bean>
</beans>

*ServerLog:*

Ignite Command Line Startup, ver. 2.6.0#20180710-sha1:669feacc
2018 Copyright(C) Apache Software Foundation

[07:04:34,607][INFO][main][IgniteKernal] 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 2.6.0#20180710-sha1:669feacc
>>> 2018 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

[07:04:34,610][INFO][main][IgniteKernal] Config URL:
file:/root/apache-ignite-fabric-2.6.0-bin/config/yigit-config2.xml
[07:04:34,647][INFO][main][IgniteKernal] IgniteConfiguration
[igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, callbackPoolSize=8,
stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, igfsPoolSize=2,
dataStreamerPoolSize=8, utilityCachePoolSize=8,
utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=8,
igniteHome=/root/apache-ignite-fabric-2.6.0-bin,
igniteWorkDir=/root/apache-ignite-fabric-2.6.0-bin/work,
mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
nodeId=b0667881-b853-468e-99d1-41978fc1b10b,
marsh=org.apache.ignite.internal.binary.BinaryMarshaller@498d318c,
marshLocJobs=false, daemon=false, p2pEnabled=true, netTimeout=5000,
sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=60,
forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null],
segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
allResolversPassReq=true, segChkFreq=1, commSpi=TcpCommunicationSpi
[connectGate=null, connPlc=null, enableForcibleNodeKill=false,
enableTroubleshootingLog=false,
srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@32eff876,
locAddr=null, locHost=null, locPort=47100, locPortRange=100, shmemPort=-1,
directBuf=true, directSndBuf=false, idleConnTimeout=60,
connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=null,
shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32,
unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
ctxInitLatch=java.util.concurrent.CountDownLatch@8dbdac1[Count = 1],
stopping=false,
metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@6e20b53a],
evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@71809907,
colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@47af7f3d,
addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
txCfg=org.apache.ignite.configuration.TransactionConfiguration@7c729a55,
cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED,
p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
timeSrvPortRange=100, failureDetectionTimeout=1,
clientFailureDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null,
connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@3bb9a3ff,
odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
[seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null,
binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=DataStorageConfigu