Re: Checkpoint buffer error

2020-04-15 Thread krkumar24061...@gmail.com
Hi - Here is the complete stack trace from the logs

[2020-04-15 13:43:07,271][ERROR][data-streamer-stripe-2-#51][root] Failed to set initial value for cache entry: DataStreamerEntry [key=UserKeyCacheObjectImpl [part=207, val=50792583101, hasValBytes=true], val=CacheObjectByteArrayImpl [arrLen=403]]
class org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException: Runtime failure on search row: SearchRow [key=KeyCacheObjectImpl [part=207, val=50792583101, hasValBytes=true], hash=-747024458, cacheId=0]
    at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1815)
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1642)
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1625)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1935)
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:428)
    at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4248)
    at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:3391)
    at org.apache.ignite.internal.processors.cache.GridCacheEntryEx.initialValue(GridCacheEntryEx.java:766)
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2265)
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6820)
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Failed to allocate temporary buffer for checkpoint (increase checkpointPageBufferSize configuration property): default
    at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.postWriteLockPage(PageMemoryImpl.java:1575)
    at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.tryWriteLockPage(PageMemoryImpl.java:1546)
    at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.tryWriteLock(PageMemoryImpl.java:479)
    at org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock(PageHandler.java:400)
    at org.apache.ignite.internal.processors.cache.persistence.DataStructure.tryWriteLock(DataStructure.java:161)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.writeLockPage(PagesList.java:1021)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.put(PagesList.java:641)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList$WriteRowHandler.run(AbstractFreeList.java:164)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList$WriteRowHandler.run(AbstractFreeList.java:136)
    at org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:279)
    at org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:296)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.insertDataRow(AbstractFreeList.java:500)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.insertDataRow(CacheFreeListImpl.java:59)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.insertDataRow(CacheFreeListImpl.java:35)
    at org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:103)
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1695)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.createRow(GridCacheOffheapManager.java:1910)
    at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5701)
    at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5643)
    at
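The root cause above carries its own hint: enlarge checkpointPageBufferSize for the "default" data region. A minimal configuration sketch in Java (the 2 GiB value is an illustrative assumption to tune against the checkpoint write rate, not a recommendation):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CheckpointBufferConfigSketch {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("default")            // region named in the error message
            .setPersistenceEnabled(true)
            // Illustrative size: 2 GiB checkpoint page buffer
            .setCheckpointPageBufferSize(2L * 1024 * 1024 * 1024);

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        // Starts a node with the enlarged checkpoint buffer
        Ignition.start(cfg);
    }
}
```

The same property can be set on the data region bean in a Spring XML configuration; this is a configuration fragment, not a standalone program.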

Re: Internal Node and External Node

2020-04-15 Thread Evgenii Zhuravlev
Hi,

What do you mean by "external" and "internal"? Why do you want to have two
different nodes on the same machine for that?

Evgenii

Wed, Apr 15, 2020 at 09:54, Anthony :

> Hello,
>
> We have a use case where we want two nodes within a machine. One for
> internal and one for external. To be more precise, the C++ node will handle
> the communication and the Java node will handle the communication.
> Is there a proper way to organize the nodes?
>
> Thanks,
>
> Anthony
>


Regarding EVT_NODE_SEGMENTED event

2020-04-15 Thread VeenaMithare
Hi , 

1. Do we always get an EVT_NODE_SEGMENTED event after a node gets
segmented? I see the following in the javadocs for SegmentationPolicy:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/plugin/segmentation/SegmentationPolicy.html


*STOP*
public static final SegmentationPolicy STOP
When segmentation policy is STOP, all listeners will receive the
*EventType.EVT_NODE_SEGMENTED* event and then the particular grid node will
be stopped via a call to Ignition.stop(String, boolean).
*NOOP*
public static final SegmentationPolicy NOOP
When segmentation policy is NOOP, all listeners will receive the
*EventType.EVT_NODE_SEGMENTED* event and it is up to the user to implement
logic to handle this event.

For example: if a node has been identified as segmented because of a GC
pause, will it also get an EVT_NODE_SEGMENTED event?

REGARDS,
Veena.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Failed to reconnect to cluster (will retry): class o.a.i.IgniteCheckedException: Failed to deserialize object with given class loader: org.spr

2020-04-15 Thread Rajan Ahlawat
I tried forcing IPv4 on the client side, but with no success.
On the server side we can't change the setting and restart.

On Thu, Apr 16, 2020 at 12:21 AM Evgenii Zhuravlev
 wrote:
>
> I see that you use both IPv4 and IPv6 for some nodes; there is a known issue
> with this. I would recommend restricting Ignite to IPv4 via the
> -Djava.net.preferIPv4Stack=true JVM parameter for all nodes in the cluster,
> including clients. I've seen communication issues with this before.
>
> Evgenii
>
> Wed, Apr 15, 2020 at 11:31, Rajan Ahlawat :
>>
>> The client logs and stack trace are shared.
>> The client just keeps trying to connect and the server keeps throwing
>> socket timeouts. The stack trace I gave is what I get when I try to
>> connect to this problematic Ignite server.
>>
>> About the default settings: in our environment we have only default
>> timeouts, though we tried increasing all these timeouts on the client
>> side, with no success.
>> On the server side, right now we can't tweak these timeout values unless
>> we are sure of the fix.
>>
>>
>> On Wed, Apr 15, 2020 at 8:06 PM Evgenii Zhuravlev
>>  wrote:
>> >
>> > Hi,
>> >
>> > Please provide logs not only from the server node, but from the client node
>> > too. You mentioned that only one client has this problem, so please
>> > provide the full log from that node.
>> >
>> > Also, you said that you set non-default timeouts for clients, while there
>> > are still default values on the server node - I wouldn't recommend doing
>> > this; timeouts should be the same for all nodes in the cluster.
>> >
>> > Evgenii
>> >
>> > Wed, Apr 15, 2020 at 03:04, Rajan Ahlawat :
>> >>
>> >> Shared file with email-id:
>> >> e.zhuravlev...@gmail.com
>> >>
>> >> We have a single instance of Ignite. The file contains all logs for Mar
>> >> 30, 2019. Line 6429 is the first occurrence of the incident.
>> >>
>> >> On Tue, Apr 14, 2020 at 8:27 PM Evgenii Zhuravlev
>> >>  wrote:
>> >> >
>> >> > Can you provide full log files from all nodes? It's impossible to find
>> >> > the root cause from this.
>> >> >
>> >> > Evgenii
>> >> >
>> >> > Tue, Apr 14, 2020 at 07:49, Rajan Ahlawat :
>> >> >>
>> >> >> server starts with following configuration:
>> >> >>
>> >> >> ignite_application-1-2020-03-17.log:14:[2020-03-17T08:23:33,664][INFO
>> >> >> ][main][IgniteKernal%igniteStart] IgniteConfiguration
>> >> >> [igniteInstanceName=igniteStart, pubPoolSize=32, svcPoolSize=32,
>> >> >> callbackPoolSize=32, stripedPoolSize=32, sysPoolSize=30,
>> >> >> mgmtPoolSize=4, igfsPoolSize=32, dataStreamerPoolSize=32,
>> >> >> utilityCachePoolSize=32, utilityCacheKeepAliveTime=6,
>> >> >> p2pPoolSize=2, qryPoolSize=32,
>> >> >> igniteHome=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin,
>> >> >> igniteWorkDir=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin/work,
>> >> >> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
>> >> >> nodeId=53396cb7-1b66-43da-bf10-ebb5f7cc9693,
>> >> >> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@42b3b079,
>> >> >> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
>> >> >> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
>> >> >> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
>> >> >> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
>> >> >> marsh=null, reconCnt=100, reconDelay=1, maxAckTimeout=60,
>> >> >> forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null],
>> >> >> segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
>> >> >> allResolversPassReq=true, segChkFreq=1,
>> >> >> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
>> >> >> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
>> >> >> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@6692b6c6,
>> >> >> locAddr=null, locHost=null, locPort=47100, locPortRange=100,
>> >> >> shmemPort=-1, directBuf=true, directSndBuf=false,
>> >> >> idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
>> >> >> reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=1024,
>> >> >> slowClientQueueLimit=1000, nioSrvr=null, shmemSrv=null,
>> >> >> usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true,
>> >> >> filterReachableAddresses=false, ackSndThreshold=32,
>> >> >> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null,
>> >> >> boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=16,
>> >> >> selectorSpins=0, addrRslvr=null,
>> >> >> ctxInitLatch=java.util.concurrent.CountDownLatch@1cd629b3[Count = 1],
>> >> >> stopping=false,
>> >> >> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@589da3f3],
>> >> >> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@39d76cb5,
>> >> >> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
>> >> >> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@1cb346ea,
>> >> >> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
>> >> >> 

Re: Local node terminated after segmentation

2020-04-15 Thread VeenaMithare
Thanks Evgenii 





Re: JDBC Connection Pooling

2020-04-15 Thread narges saleh
Hello Evgenii,

I am not sure what you mean by reusing a data streamer from multiple
threads. I have data being constantly "streamed" to the streamer via a
connection. Are you saying to have the data streamed to the streamer via
multiple connections, across multiple threads?
What if I have a persistent connection that sends data continuously? Should
I hold on to the instance of the streamer (for a particular cache), or
create a new one each time a new load of data arrives?

On Wed, Apr 15, 2020 at 1:17 PM Evgenii Zhuravlev 
wrote:

> > Should I create a pool of data streamers (a few for each cache)?
> If you use just KV API, it's better to have only one data streamer per
> cache and reuse it from multiple threads - it will give you the best
> performance.
>
> Evgenii
>
> Wed, Apr 15, 2020 at 04:53, narges saleh :
>
>> Please note that in my case, the streamers are running on the server side
>> (as part of different services).
>>
>> On Wed, Apr 15, 2020 at 6:46 AM narges saleh 
>> wrote:
>>
>>> So, in effect, I'll be having a pool of streamers, right?
>>> Would this still be the case if I am using BinaryObjectBuilder to build
>>> objects to stream the data to a few caches? Should I create a pool of data
>>> streamers (a few for each cache)?
>>> I don't want to have to create a new object builder and data streamer if
>>> I am inserting to the same cache over and over.
>>>
>>> On Tue, Apr 14, 2020 at 11:56 AM Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
 For each connection, its own data streamer will be created on the node side.
 I think it makes sense to try pooling for data load, but you will need to
 measure everything, since the pool size depends on a lot of things.

 Tue, Apr 14, 2020 at 07:31, narges saleh :

> Yes, Evgenii.
>
> On Mon, Apr 13, 2020 at 10:06 PM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi,
>>
>> Do you use STREAMING MODE for thin JDBC driver?
>>
>> Evgenii
>>
>> Mon, Apr 13, 2020 at 19:33, narges saleh :
>>
>>> Thanks Alex. I will study the links you provided.
>>>
>>> I read somewhere that jdbc datasource is available via Ignite JDBC,
>>> (which should provide connection pooling).
>>>
>>> On Mon, Apr 13, 2020 at 12:31 PM akorensh 
>>> wrote:
>>>
 Hi,
   At this point you need to implement connection pooling yourself.
   Use

 https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ClientConnectorConfiguration.html#setThreadPoolSize-int-
  to specify the number of threads Ignite creates to service connection
 requests.

  Each new connection will be handled by a separate thread inside
 Ignite (maxed out at threadPoolSize, as described above).

   ClientConnectorConfiguration is set inside IgniteConfiguration:

 https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setClientConnectorConfiguration-org.apache.ignite.configuration.ClientConnectorConfiguration-

   More info:

 https://www.gridgain.com/docs/latest/developers-guide/SQL/JDBC/jdbc-driver#cluster-configuration

 Thanks, Alex




>>>
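Alex's pointers above can be sketched in Java as well (the pool size of 64 is an arbitrary example value to be measured under load, not a recommendation):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientConnectorPoolSketch {
    public static void main(String[] args) {
        // Threads that service JDBC/ODBC/thin-client connections
        ClientConnectorConfiguration connCfg = new ClientConnectorConfiguration()
            .setThreadPoolSize(64); // example value only

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientConnectorConfiguration(connCfg);

        Ignition.start(cfg);
    }
}
```

This is a configuration fragment; the same bean can be wired in Spring XML.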


Re: org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Failed to reconnect to cluster (will retry): class o.a.i.IgniteCheckedException: Failed to deserialize object with given class loader: org.spr

2020-04-15 Thread Evgenii Zhuravlev
I see that you use both IPv4 and IPv6 for some nodes; there is a known
issue with this. I would recommend restricting Ignite to IPv4 via the
-Djava.net.preferIPv4Stack=true JVM parameter for all nodes in the cluster,
including clients. I've seen communication issues with this before.

Evgenii
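The flag is normally passed on the JVM command line. For completeness, the same property can be set programmatically, provided it runs before any networking class is loaded (a sketch; the JVM argument remains the safer route):

```java
public class PreferIpv4Sketch {
    public static void main(String[] args) {
        // Equivalent of -Djava.net.preferIPv4Stack=true, but it only takes
        // effect if set before java.net classes initialize, so keep it as
        // the very first statement; prefer the JVM flag in production.
        System.setProperty("java.net.preferIPv4Stack", "true");

        System.out.println(System.getProperty("java.net.preferIPv4Stack"));
        // Ignition.start(cfg) would follow here.
    }
}
```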

Wed, Apr 15, 2020 at 11:31, Rajan Ahlawat :

> The client logs and stack trace are shared.
> The client just keeps trying to connect and the server keeps throwing
> socket timeouts. The stack trace I gave is what I get when I try to
> connect to this problematic Ignite server.
>
> About the default settings: in our environment we have only default
> timeouts, though we tried increasing all these timeouts on the client
> side, with no success.
> On the server side, right now we can't tweak these timeout values unless
> we are sure of the fix.
>
>
> On Wed, Apr 15, 2020 at 8:06 PM Evgenii Zhuravlev
>  wrote:
> >
> > Hi,
> >
> > Please provide logs not only from the server node, but from the client
> node too. You mentioned that only one client has this problem, so please
> provide the full log from that node.
> >
> > Also, you said that you set non-default timeouts for clients, while
> there are still default values on the server node - I wouldn't recommend
> doing this; timeouts should be the same for all nodes in the cluster.
> >
> > Evgenii
> >
> > Wed, Apr 15, 2020 at 03:04, Rajan Ahlawat :
> >>
> >> Shared file with email-id:
> >> e.zhuravlev...@gmail.com
> >>
> >> We have a single instance of Ignite. The file contains all logs for Mar
> >> 30, 2019. Line 6429 is the first occurrence of the incident.
> >>
> >> On Tue, Apr 14, 2020 at 8:27 PM Evgenii Zhuravlev
> >>  wrote:
> >> >
> >> > Can you provide full log files from all nodes? It's impossible to
> find the root cause from this.
> >> >
> >> > Evgenii
> >> >
> >> > Tue, Apr 14, 2020 at 07:49, Rajan Ahlawat :
> >> >>
> >> >> server starts with following configuration:
> >> >>
> >> >> ignite_application-1-2020-03-17.log:14:[2020-03-17T08:23:33,664][INFO
> >> >> ][main][IgniteKernal%igniteStart] IgniteConfiguration
> >> >> [igniteInstanceName=igniteStart, pubPoolSize=32, svcPoolSize=32,
> >> >> callbackPoolSize=32, stripedPoolSize=32, sysPoolSize=30,
> >> >> mgmtPoolSize=4, igfsPoolSize=32, dataStreamerPoolSize=32,
> >> >> utilityCachePoolSize=32, utilityCacheKeepAliveTime=6,
> >> >> p2pPoolSize=2, qryPoolSize=32,
> >> >>
> igniteHome=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin,
> >> >>
> igniteWorkDir=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin/work,
> >> >> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
> >> >> nodeId=53396cb7-1b66-43da-bf10-ebb5f7cc9693,
> >> >> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@42b3b079,
> >> >> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
> >> >> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
> >> >> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
> >> >> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0,
> ackTimeout=0,
> >> >> marsh=null, reconCnt=100, reconDelay=1, maxAckTimeout=60,
> >> >> forceSrvMode=false, clientReconnectDisabled=false,
> internalLsnr=null],
> >> >> segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
> >> >> allResolversPassReq=true, segChkFreq=1,
> >> >> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
> >> >> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
> >> >>
> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@6692b6c6
> ,
> >> >> locAddr=null, locHost=null, locPort=47100, locPortRange=100,
> >> >> shmemPort=-1, directBuf=true, directSndBuf=false,
> >> >> idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
> >> >> reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=1024,
> >> >> slowClientQueueLimit=1000, nioSrvr=null, shmemSrv=null,
> >> >> usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true,
> >> >> filterReachableAddresses=false, ackSndThreshold=32,
> >> >> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null,
> >> >> boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=16,
> >> >> selectorSpins=0, addrRslvr=null,
> >> >> ctxInitLatch=java.util.concurrent.CountDownLatch@1cd629b3[Count =
> 1],
> >> >> stopping=false,
> >> >>
> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@589da3f3
> ],
> >> >>
> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@39d76cb5,
> >> >> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
> >> >>
> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@1cb346ea,
> >> >> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
> >> >>
> txCfg=org.apache.ignite.configuration.TransactionConfiguration@4c012563,
> >> >> cacheSanityCheckEnabled=true, discoStartupDelay=6,
> >> >> deployMode=SHARED, p2pMissedCacheSize=100, locHost=null,
> >> >> timeSrvPortBase=31100, timeSrvPortRange=100,
> >> >> 

Re: org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Failed to reconnect to cluster (will retry): class o.a.i.IgniteCheckedException: Failed to deserialize object with given class loader: org.spr

2020-04-15 Thread Rajan Ahlawat
The client logs and stack trace are shared.
The client just keeps trying to connect and the server keeps throwing
socket timeouts. The stack trace I gave is what I get when I try to
connect to this problematic Ignite server.

About the default settings: in our environment we have only default
timeouts, though we tried increasing all these timeouts on the client
side, with no success.
On the server side, right now we can't tweak these timeout values unless
we are sure of the fix.


On Wed, Apr 15, 2020 at 8:06 PM Evgenii Zhuravlev
 wrote:
>
> Hi,
>
> Please provide logs not only from the server node, but from the client node
> too. You mentioned that only one client has this problem, so please provide
> the full log from that node.
>
> Also, you said that you set non-default timeouts for clients, while there are
> still default values on the server node - I wouldn't recommend doing this;
> timeouts should be the same for all nodes in the cluster.
>
> Evgenii
>
> Wed, Apr 15, 2020 at 03:04, Rajan Ahlawat :
>>
>> Shared file with email-id:
>> e.zhuravlev...@gmail.com
>>
>> We have a single instance of Ignite. The file contains all logs for Mar
>> 30, 2019. Line 6429 is the first occurrence of the incident.
>>
>> On Tue, Apr 14, 2020 at 8:27 PM Evgenii Zhuravlev
>>  wrote:
>> >
>> > Can you provide full log files from all nodes? It's impossible to find the
>> > root cause from this.
>> >
>> > Evgenii
>> >
>> > Tue, Apr 14, 2020 at 07:49, Rajan Ahlawat :
>> >>
>> >> server starts with following configuration:
>> >>
>> >> ignite_application-1-2020-03-17.log:14:[2020-03-17T08:23:33,664][INFO
>> >> ][main][IgniteKernal%igniteStart] IgniteConfiguration
>> >> [igniteInstanceName=igniteStart, pubPoolSize=32, svcPoolSize=32,
>> >> callbackPoolSize=32, stripedPoolSize=32, sysPoolSize=30,
>> >> mgmtPoolSize=4, igfsPoolSize=32, dataStreamerPoolSize=32,
>> >> utilityCachePoolSize=32, utilityCacheKeepAliveTime=6,
>> >> p2pPoolSize=2, qryPoolSize=32,
>> >> igniteHome=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin,
>> >> igniteWorkDir=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin/work,
>> >> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
>> >> nodeId=53396cb7-1b66-43da-bf10-ebb5f7cc9693,
>> >> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@42b3b079,
>> >> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
>> >> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
>> >> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
>> >> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
>> >> marsh=null, reconCnt=100, reconDelay=1, maxAckTimeout=60,
>> >> forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null],
>> >> segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
>> >> allResolversPassReq=true, segChkFreq=1,
>> >> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
>> >> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
>> >> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@6692b6c6,
>> >> locAddr=null, locHost=null, locPort=47100, locPortRange=100,
>> >> shmemPort=-1, directBuf=true, directSndBuf=false,
>> >> idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
>> >> reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=1024,
>> >> slowClientQueueLimit=1000, nioSrvr=null, shmemSrv=null,
>> >> usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true,
>> >> filterReachableAddresses=false, ackSndThreshold=32,
>> >> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null,
>> >> boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=16,
>> >> selectorSpins=0, addrRslvr=null,
>> >> ctxInitLatch=java.util.concurrent.CountDownLatch@1cd629b3[Count = 1],
>> >> stopping=false,
>> >> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@589da3f3],
>> >> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@39d76cb5,
>> >> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
>> >> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@1cb346ea,
>> >> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
>> >> txCfg=org.apache.ignite.configuration.TransactionConfiguration@4c012563,
>> >> cacheSanityCheckEnabled=true, discoStartupDelay=6,
>> >> deployMode=SHARED, p2pMissedCacheSize=100, locHost=null,
>> >> timeSrvPortBase=31100, timeSrvPortRange=100,
>> >> failureDetectionTimeout=1, clientFailureDetectionTimeout=3,
>> >> metricsLogFreq=6, hadoopCfg=null,
>> >> connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@14a50707,
>> >> odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
>> >> [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
>> >> grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null,
>> >> binaryCfg=null, memCfg=null, pstCfg=null,
>> >> dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040,
>> >> 

Re: JDBC Connection Pooling

2020-04-15 Thread Evgenii Zhuravlev
> Should I create a pool of data streamers (a few for each cache)?
If you use just the KV API, it's better to have only one data streamer per
cache and reuse it from multiple threads - it will give you the best
performance.

Evgenii
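The single-shared-streamer pattern described above might look like the sketch below. The cache name, key/value types, and thread count are illustrative assumptions; IgniteDataStreamer.addData() is documented as safe to call from multiple threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class SharedStreamerSketch {
    // One streamer per cache, shared by all producer threads
    static void load(Ignite ignite) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);

        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
            for (int t = 0; t < 8; t++) {
                final long shard = t;
                pool.submit(() -> {
                    // addData() may be called concurrently from many threads
                    for (long k = shard; k < 1_000_000; k += 8)
                        streamer.addData(k, "value-" + k);
                });
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            streamer.flush(); // push buffered entries before close()
        }
    }
}
```

The same shared streamer works when values are built with BinaryObjectBuilder; this sketch requires a running Ignite node.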

Wed, Apr 15, 2020 at 04:53, narges saleh :

> Please note that in my case, the streamers are running on the server side
> (as part of different services).
>
> On Wed, Apr 15, 2020 at 6:46 AM narges saleh  wrote:
>
>> So, in effect, I'll be having a pool of streamers, right?
>> Would this still be the case if I am using BinaryObjectBuilder to build
>> objects to stream the data to a few caches? Should I create a pool of data
>> streamers (a few for each cache)?
>> I don't want to have to create a new object builder and data streamer if
>> I am inserting to the same cache over and over.
>>
>> On Tue, Apr 14, 2020 at 11:56 AM Evgenii Zhuravlev <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> For each connection, its own data streamer will be created on the node
>>> side. I think it makes sense to try pooling for data load, but you will
>>> need to measure everything, since the pool size depends on a lot of things.
>>>
>>> Tue, Apr 14, 2020 at 07:31, narges saleh :
>>>
 Yes, Evgenii.

 On Mon, Apr 13, 2020 at 10:06 PM Evgenii Zhuravlev <
 e.zhuravlev...@gmail.com> wrote:

> Hi,
>
> Do you use STREAMING MODE for thin JDBC driver?
>
> Evgenii
>
> Mon, Apr 13, 2020 at 19:33, narges saleh :
>
>> Thanks Alex. I will study the links you provided.
>>
>> I read somewhere that jdbc datasource is available via Ignite JDBC,
>> (which should provide connection pooling).
>>
>> On Mon, Apr 13, 2020 at 12:31 PM akorensh 
>> wrote:
>>
>>> Hi,
>>>   At this point you need to implement connection pooling yourself.
>>>   Use
>>>
>>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ClientConnectorConfiguration.html#setThreadPoolSize-int-
>>>   to specify the number of threads Ignite creates to service connection
>>> requests.
>>>
>>>   Each new connection will be handled by a separate thread inside
>>> Ignite (maxed out at threadPoolSize, as described above).
>>>
>>>   ClientConnectorConfiguration is set inside IgniteConfiguration:
>>>
>>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setClientConnectorConfiguration-org.apache.ignite.configuration.ClientConnectorConfiguration-
>>>
>>>   More info:
>>>
>>> https://www.gridgain.com/docs/latest/developers-guide/SQL/JDBC/jdbc-driver#cluster-configuration
>>>
>>> Thanks, Alex
>>>
>>>
>>>
>>>
>>


Internal Node and External Node

2020-04-15 Thread Anthony
Hello,

We have a use case where we want two nodes within a machine. One for
internal and one for external. To be more precise, the C++ node will handle
the communication and the Java node will handle the communication.
Is there a proper way to organize the nodes?

Thanks,

Anthony


Re: Transaction already completed errors

2020-04-15 Thread Evgenii Zhuravlev
This information can be found here:
https://apacheignite.readme.io/docs/multiversion-concurrency-control.
It should probably be added to the documentation that you shared with me
too.

Evgenii
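Until MVCC is production-ready, one workaround is to create the same tables with plain TRANSACTIONAL (or ATOMIC) atomicity instead of TRANSACTIONAL_SNAPSHOT. A hedged sketch reusing the template options quoted in this thread (table and column names are examples, not the actual schema):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class CreateTableSketch {
    static void createTable(Ignite ignite) {
        // Same WITH options as in this thread, but atomicity=TRANSACTIONAL
        // instead of TRANSACTIONAL_SNAPSHOT; names are hypothetical
        String ddl = "CREATE TABLE IF NOT EXISTS example (" +
            "  id VARCHAR, instanceId VARCHAR, data VARCHAR, " +
            "  PRIMARY KEY (id, instanceId)) " +
            "WITH \"template=partitioned,backups=2," +
            "affinity_key=instanceId,atomicity=TRANSACTIONAL\"";

        // DDL can be executed through any cache instance
        ignite.getOrCreateCache(new CacheConfiguration<>("ddl-helper"))
            .query(new SqlFieldsQuery(ddl))
            .getAll();
    }
}
```

This requires a running cluster; treat it as an API sketch of the workaround, not a verified fix.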

Wed, Apr 15, 2020 at 08:09, Courtney Robinson :

> Thanks for letting me know.
> It's worth adding this to the docs; they don't currently include any
> warning or notice that TRANSACTIONAL_SNAPSHOT isn't ready for production
> in
> https://apacheignite-sql.readme.io/docs/multiversion-concurrency-control
> Is there a set of outstanding tickets I can keep track of? Depending on
> the time needed, we could potentially contribute to getting this released.
>
> Regards,
> Courtney Robinson
> Founder and CEO, Hypi
> Tel: +44 208 123 2413 (GMT+0)
>
> 
> https://hypi.io
>
>
> On Wed, Apr 15, 2020 at 3:26 PM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi Courtney,
>>
>> MVCC is not production ready yet, so, I wouldn't recommend using
>> TRANSACTIONAL_SNAPSHOT atomicity for now.
>>
>> Best Regards,
>> Evgenii
>>
>> Wed, Apr 15, 2020 at 06:02, Courtney Robinson >:
>>
>>> We're upgrading to Ignite 2.8 and are starting to use SQL tables. In all
>>> previous work we've used the key value APIs directly.
>>>
>>> After getting everything working, we're regularly seeing "transaction
>>> already completed" errors when executing SELECT queries. A stack trace is
>>> included at the end.
>>> All tables are created with
>>> "template=partitioned,backups=2,data_region=hypi,affinity_key=instanceId,atomicity=TRANSACTIONAL_SNAPSHOT"
>>>
>>> I found https://issues.apache.org/jira/browse/IGNITE-10763 which
>>> suggested the problem was fixed in 2.8 and "is caused by leaked tx stored
>>> in ThreadLocal".
>>>
>>> Has anyone else encountered this issue and is there a fix?
>>> Just to be clear, we're definitely not performing any
>>> insert/update/merge operations, only selects when this error occurs.
>>>
>>> From that issue I linked to, assuming the problem is still a leaked
>>> ThreadLocal is there any workaround for this?
>>> We have a managed thread pool (you can see Pool.java in the trace), I've
>>> tried not to use it but still get the error because I guess it's now just
>>> defaulting to Spring Boot's request thread pool.
>>>
>>>
>>> 2020-04-13 19:56:31.548 INFO 9 --- [io-1-exec-2] io.hypi.arc.os.gql.HypiGraphQLException : GraphQL error, path: null, source: null, msg: null
 javax.cache.CacheException: Transaction is already completed.
 at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:820)
 at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:753)
 at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:424)
 at io.hypi.arc.os.ignite.IgniteRepo.findInstanceCtx(IgniteRepo.java:134)
 at io.hypi.arc.os.handlers.BaseHandler.evaluateQuery(BaseHandler.java:38)
 at io.hypi.arc.os.handlers.HttpHandler.lambda$runQuery$0(HttpHandler.java:145)
 at io.hypi.arc.base.Pool.apply(Pool.java:109)
 at io.hypi.arc.base.Pool.lambda$async$3(Pool.java:93)
 at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
 at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
 at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base/java.lang.Thread.run(Thread.java:834)
 Caused by: org.apache.ignite.transactions.TransactionAlreadyCompletedException: Transaction is already completed.
 at org.apache.ignite.internal.util.IgniteUtils$18.apply(IgniteUtils.java:991)
 at org.apache.ignite.internal.util.IgniteUtils$18.apply(IgniteUtils.java:989)
 at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1062)
 at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSelect(IgniteH2Indexing.java:1292)
 at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1117)
 at org.apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2406)
 at org.apache.ignite.internal.processors.

Re: Transaction already completed errors

2020-04-15 Thread Courtney Robinson
Thanks for letting me know.
It's worth adding this to the docs: the page at
https://apacheignite-sql.readme.io/docs/multiversion-concurrency-control
doesn't currently include any warning or notice that TRANSACTIONAL_SNAPSHOT
isn't ready for production.
Is there a set of outstanding tickets I can keep track of? Depending on
time needed, we could potentially contribute to getting this released.

Regards,
Courtney Robinson
Founder and CEO, Hypi
Tel: +44 208 123 2413 (GMT+0)


https://hypi.io


On Wed, Apr 15, 2020 at 3:26 PM Evgenii Zhuravlev 
wrote:

> Hi Courtney,
>
> MVCC is not production ready yet, so, I wouldn't recommend using
> TRANSACTIONAL_SNAPSHOT atomicity for now.
>
> Best Regards,
> Evgenii
>
> ср, 15 апр. 2020 г. в 06:02, Courtney Robinson  >:
>
>> We're upgrading to Ignite 2.8 and are starting to use SQL tables. In all
>> previous work we've used the key value APIs directly.
>>
>> After getting everything working, we're regularly seeing "transaction
>> already completed" errors when executing SELECT queries. A stack trace is
>> included at the end.
>> All tables are created with
>> "template=partitioned,backups=2,data_region=hypi,affinity_key=instanceId,atomicity=TRANSACTIONAL_SNAPSHOT"
>>
>> I found https://issues.apache.org/jira/browse/IGNITE-10763 which
>> suggested the problem was fixed in 2.8 and "is caused by leaked tx stored
>> in ThreadLocal".
>>
>> Has anyone else encountered this issue and is there a fix?
>> Just to be clear, we're definitely not performing any insert/update/merge
>> operations, only selects when this error occurs.
>>
>> From that issue I linked to, assuming the problem is still a leaked
>> ThreadLocal is there any workaround for this?
>> We have a managed thread pool (you can see Pool.java in the trace), I've
>> tried not to use it but still get the error because I guess it's now just
>> defaulting to Spring Boot's request thread pool.
>>
>>
>> 2020-04-13 19:56:31.548 INFO 9 --- [io-1-exec-2] io.hypi.arc.os.gql.
>>> HypiGraphQLException : GraphQL error, path: null, source: null, msg:
>>> null javax.cache.CacheException: Transaction is already completed. at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(
>>> IgniteCacheProxyImpl.java:820) at org.apache.ignite.internal.processors.
>>> cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:753) at org.
>>> apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query
>>> (GatewayProtectedCacheProxy.java:424) at io.hypi.arc.os.ignite.
>>> IgniteRepo.findInstanceCtx(IgniteRepo.java:134) at io.hypi.arc.os.
>>> handlers.BaseHandler.evaluateQuery(BaseHandler.java:38) at io.hypi.arc.
>>> os.handlers.HttpHandler.lambda$runQuery$0(HttpHandler.java:145) at io.
>>> hypi.arc.base.Pool.apply(Pool.java:109) at io.hypi.arc.base.Pool.
>>> lambda$async$3(Pool.java:93) at com.google.common.util.concurrent.
>>> TrustedListenableFutureTask$TrustedFutureInterruptibleTask.
>>> runInterruptibly(TrustedListenableFutureTask.java:125) at com.google.
>>> common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
>>> at com.google.common.util.concurrent.TrustedListenableFutureTask.run(
>>> TrustedListenableFutureTask.java:78) at java.base/java.util.concurrent.
>>> Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.
>>> util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.
>>> util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
>>> ScheduledThreadPoolExecutor.java:304) at java.base/java.util.concurrent.
>>> ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.
>>> java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by:
>>> org.apache.ignite.transactions.TransactionAlreadyCompletedException:
>>> Transaction is already completed. at org.apache.ignite.internal.util.
>>> IgniteUtils$18.apply(IgniteUtils.java:991) at org.apache.ignite.internal
>>> .util.IgniteUtils$18.apply(IgniteUtils.java:989) at org.apache.ignite.
>>> internal.util.IgniteUtils.convertException(IgniteUtils.java:1062) at org
>>> .apache.ignite.internal.processors.query.h2.IgniteH2Indexing.
>>> executeSelect(IgniteH2Indexing.java:1292) at org.apache.ignite.internal.
>>> processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.
>>> java:1117) at org.apache.ignite.internal.processors.query.
>>> GridQueryProcessor$3.applyx(GridQueryProcessor.java:2406) at org.apache.
>>> ignite.internal.processors.query.GridQueryProcessor$3.applyx(
>>> GridQueryProcessor.java:2402) at org.apache.ignite.internal.util.lang.
>>> IgniteOutClosureX.apply(IgniteOutClosureX.java:36) at org.apache.ignite.
>>> internal.processors.query.GridQueryProcessor.executeQuery(
>>> GridQueryProcessor.java:2919) at org.apache.ignite.internal.processors.
>>> query.GridQueryProcessor.lambda$querySqlFields$1(GridQueryProcessor.java
>>> :2422) at 

Re: org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Failed to reconnect to cluster (will retry): class o.a.i.IgniteCheckedException: Failed to deserialize object with given class loader: org.spr

2020-04-15 Thread Evgenii Zhuravlev
Hi,

Please provide logs not only from the server node, but from the client node
too. You mentioned that only one client has this problem, so please provide
the full log from that node.

Also, you said that you set non-default timeouts for the clients while the
server node still uses the default values. I wouldn't recommend this;
timeouts should be the same for all nodes in the cluster.
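
The advice above can be sketched as a fragment of each node's Spring XML.
This is an illustrative sketch only; the millisecond values are hypothetical,
and the point is simply that every server and client node carries the same
settings:

```xml
<!-- Sketch: keep failure-detection timeouts identical across the cluster.
     The values here are illustrative, not recommendations. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- used between server nodes -->
  <property name="failureDetectionTimeout" value="10000"/>
  <!-- used for client nodes -->
  <property name="clientFailureDetectionTimeout" value="30000"/>
</bean>
```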

Evgenii

ср, 15 апр. 2020 г. в 03:04, Rajan Ahlawat :

> Shared file with email-id:
> e.zhuravlev...@gmail.com
>
> We have a single instance of Ignite. The file contains all logs from Mar
> 30, 2019. Line 6429 is the first occurrence.
>
> On Tue, Apr 14, 2020 at 8:27 PM Evgenii Zhuravlev
>  wrote:
> >
> > Can you provide full log files from all nodes? It's impossible to find
> > the root cause from this.
> >
> > Evgenii
> >
> > вт, 14 апр. 2020 г. в 07:49, Rajan Ahlawat :
> >>
> >> server starts with following configuration:
> >>
> >> ignite_application-1-2020-03-17.log:14:[2020-03-17T08:23:33,664][INFO
> >> ][main][IgniteKernal%igniteStart] IgniteConfiguration
> >> [igniteInstanceName=igniteStart, pubPoolSize=32, svcPoolSize=32,
> >> callbackPoolSize=32, stripedPoolSize=32, sysPoolSize=30,
> >> mgmtPoolSize=4, igfsPoolSize=32, dataStreamerPoolSize=32,
> >> utilityCachePoolSize=32, utilityCacheKeepAliveTime=6,
> >> p2pPoolSize=2, qryPoolSize=32,
> >> igniteHome=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin,
> >>
> igniteWorkDir=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin/work,
> >> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
> >> nodeId=53396cb7-1b66-43da-bf10-ebb5f7cc9693,
> >> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@42b3b079,
> >> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
> >> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
> >> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
> >> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
> >> marsh=null, reconCnt=100, reconDelay=1, maxAckTimeout=60,
> >> forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null],
> >> segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
> >> allResolversPassReq=true, segChkFreq=1,
> >> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
> >> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
> >>
> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@6692b6c6
> ,
> >> locAddr=null, locHost=null, locPort=47100, locPortRange=100,
> >> shmemPort=-1, directBuf=true, directSndBuf=false,
> >> idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
> >> reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=1024,
> >> slowClientQueueLimit=1000, nioSrvr=null, shmemSrv=null,
> >> usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true,
> >> filterReachableAddresses=false, ackSndThreshold=32,
> >> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null,
> >> boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=16,
> >> selectorSpins=0, addrRslvr=null,
> >> ctxInitLatch=java.util.concurrent.CountDownLatch@1cd629b3[Count = 1],
> >> stopping=false,
> >>
> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@589da3f3
> ],
> >> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@39d76cb5,
> >> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
> >> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@1cb346ea
> ,
> >> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
> >> txCfg=org.apache.ignite.configuration.TransactionConfiguration@4c012563
> ,
> >> cacheSanityCheckEnabled=true, discoStartupDelay=6,
> >> deployMode=SHARED, p2pMissedCacheSize=100, locHost=null,
> >> timeSrvPortBase=31100, timeSrvPortRange=100,
> >> failureDetectionTimeout=1, clientFailureDetectionTimeout=3,
> >> metricsLogFreq=6, hadoopCfg=null,
> >>
> connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@14a50707
> ,
> >> odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
> >> [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
> >> grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null,
> >> binaryCfg=null, memCfg=null, pstCfg=null,
> >> dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040,
> >> sysCacheMaxSize=104857600, pageSize=0, concLvl=25,
> >> dfltDataRegConf=DataRegionConfiguration [name=Default_Region,
> >> maxSize=20971520, initSize=15728640, swapPath=null,
> >> pageEvictionMode=RANDOM_2_LRU, evictionThreshold=0.9,
> >> emptyPagesPoolSize=100, metricsEnabled=false,
> >> metricsSubIntervalCount=5, metricsRateTimeInterval=6,
> >> persistenceEnabled=false, checkpointPageBufSize=0], storagePath=null,
> >> checkpointFreq=18, lockWaitTime=1, checkpointThreads=4,
> >> checkpointWriteOrder=SEQUENTIAL, walHistSize=20, walSegments=10,
> >> walSegmentSize=67108864, walPath=db/wal,
> >> 

Re: Transaction already completed errors

2020-04-15 Thread Evgenii Zhuravlev
Hi Courtney,

MVCC is not production ready yet, so, I wouldn't recommend using
TRANSACTIONAL_SNAPSHOT atomicity for now.

Best Regards,
Evgenii

ср, 15 апр. 2020 г. в 06:02, Courtney Robinson :

> We're upgrading to Ignite 2.8 and are starting to use SQL tables. In all
> previous work we've used the key value APIs directly.
>
> After getting everything working, we're regularly seeing "transaction
> already completed" errors when executing SELECT queries. A stack trace is
> included at the end.
> All tables are created with
> "template=partitioned,backups=2,data_region=hypi,affinity_key=instanceId,atomicity=TRANSACTIONAL_SNAPSHOT"
>
> I found https://issues.apache.org/jira/browse/IGNITE-10763 which
> suggested the problem was fixed in 2.8 and "is caused by leaked tx stored
> in ThreadLocal".
>
> Has anyone else encountered this issue and is there a fix?
> Just to be clear, we're definitely not performing any insert/update/merge
> operations, only selects when this error occurs.
>
> From that issue I linked to, assuming the problem is still a leaked
> ThreadLocal is there any workaround for this?
> We have a managed thread pool (you can see Pool.java in the trace), I've
> tried not to use it but still get the error because I guess it's now just
> defaulting to Spring Boot's request thread pool.
>
>
> 2020-04-13 19:56:31.548 INFO 9 --- [io-1-exec-2] io.hypi.arc.os.gql.
>> HypiGraphQLException : GraphQL error, path: null, source: null, msg: null
>> javax.cache.CacheException: Transaction is already completed. at org.
>> apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(
>> IgniteCacheProxyImpl.java:820) at org.apache.ignite.internal.processors.
>> cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:753) at org.
>> apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(
>> GatewayProtectedCacheProxy.java:424) at io.hypi.arc.os.ignite.IgniteRepo.
>> findInstanceCtx(IgniteRepo.java:134) at io.hypi.arc.os.handlers.
>> BaseHandler.evaluateQuery(BaseHandler.java:38) at io.hypi.arc.os.handlers
>> .HttpHandler.lambda$runQuery$0(HttpHandler.java:145) at io.hypi.arc.base.
>> Pool.apply(Pool.java:109) at io.hypi.arc.base.Pool.lambda$async$3(Pool.
>> java:93) at com.google.common.util.concurrent.TrustedListenableFutureTask
>> $TrustedFutureInterruptibleTask.runInterruptibly(
>> TrustedListenableFutureTask.java:125) at com.google.common.util.
>> concurrent.InterruptibleTask.run(InterruptibleTask.java:69) at com.google
>> .common.util.concurrent.TrustedListenableFutureTask.run(
>> TrustedListenableFutureTask.java:78) at java.base/java.util.concurrent.
>> Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util
>> .concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.
>> concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
>> ScheduledThreadPoolExecutor.java:304) at java.base/java.util.concurrent.
>> ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.
>> java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by:
>> org.apache.ignite.transactions.TransactionAlreadyCompletedException:
>> Transaction is already completed. at org.apache.ignite.internal.util.
>> IgniteUtils$18.apply(IgniteUtils.java:991) at org.apache.ignite.internal.
>> util.IgniteUtils$18.apply(IgniteUtils.java:989) at org.apache.ignite.
>> internal.util.IgniteUtils.convertException(IgniteUtils.java:1062) at org.
>> apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSelect
>> (IgniteH2Indexing.java:1292) at org.apache.ignite.internal.processors.
>> query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1117) at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(
>> GridQueryProcessor.java:2406) at org.apache.ignite.internal.processors.
>> query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2402) at org.
>> apache.ignite.internal.util.lang.IgniteOutClosureX.apply(
>> IgniteOutClosureX.java:36) at org.apache.ignite.internal.processors.query
>> .GridQueryProcessor.executeQuery(GridQueryProcessor.java:2919) at org.
>> apache.ignite.internal.processors.query.GridQueryProcessor.
>> lambda$querySqlFields$1(GridQueryProcessor.java:2422) at org.apache.
>> ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(
>> GridQueryProcessor.java:2460) at org.apache.ignite.internal.processors.
>> query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2396) at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor.
>> querySqlFields(GridQueryProcessor.java:2323) at org.apache.ignite.
>> internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl
>> .java:805) ... 16 common frames omitted Caused by: org.apache.ignite.
>> internal.transactions.IgniteTxAlreadyCompletedCheckedException:
>> Transaction is already completed. at org.apache.ignite.internal.
>> 

Re: Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-15 Thread max904
Yes, of course I'm using "cache.withKeepBinary()". Below is my exact
reproducer:

final URL configUrl =
    getClass().getClassLoader().getResource("example-ignite.xml");
final Ignite ignite = Ignition.start(configUrl);

final CacheConfiguration<EmployeeId, Employee> cacheConfig =
    new CacheConfiguration<>("Employee");
cacheConfig.setIndexedTypes(EmployeeId.class, Employee.class);

IgniteCache<EmployeeId, Employee> cache =
    ignite.getOrCreateCache(cacheConfig);
IgniteCache<BinaryObject, BinaryObject> employeeCache =
    cache.withKeepBinary();

try {
  BinaryObjectBuilder key1Builder =
      ignite.binary().builder(EmployeeId.class.getName());
  key1Builder.setField("employeeNumber", 65348765, Integer.class);
  key1Builder.setField("departmentNumber", 123, Integer.class);
  BinaryObject key1 = key1Builder.build();

  BinaryObjectBuilder emp1Builder =
      ignite.binary().builder(Employee.class.getName());
  emp1Builder.setField("firstName", "John", String.class);
  emp1Builder.setField("lastName", "Smith", String.class);
  emp1Builder.setField("id", key1);
  BinaryObject emp1 = emp1Builder.build();

  employeeCache.put(key1, emp1);
  BinaryObject emp2 = employeeCache.get(key1);
  assertThat(emp2).isNotNull();
  assertThat(emp2).isEqualTo(emp1);

  employeeCache.put(key1, emp1);

  BinaryObject key2 = emp1.field("id");
  employeeCache.put(key2, emp1); // CRASH!!! CorruptedTreeException: B+Tree is corrupted

  //employeeCache.put(key2.clone(), emp1); // CRASH!!! CorruptedTreeException: B+Tree is corrupted

  employeeCache.put(key2.toBuilder().build(), emp1); // OK!
} finally {
  Ignition.stop(true);
}



Where the data types are the following:

public interface EmployeeId {
  int getEmployeeNumber();
  void setEmployeeNumber(int employeeNumber);

  int getDepartmentNumber();
  void setDepartmentNumber(int departmentNumber);
}

public interface Employee {

  EmployeeId getId();
  void setId(EmployeeId id);

  String getFirstName();
  void setFirstName(String firstName);

  String getLastName();
  void setLastName(String lastName);

  Date getBirthDate();
  void setBirthDate(Date birthDate);

  ...
}





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Transaction already completed errors

2020-04-15 Thread Courtney Robinson
We're upgrading to Ignite 2.8 and are starting to use SQL tables. In all
previous work we've used the key value APIs directly.

After getting everything working, we're regularly seeing "transaction
already completed" errors when executing SELECT queries. A stack trace is
included at the end.
All tables are created with
"template=partitioned,backups=2,data_region=hypi,affinity_key=instanceId,atomicity=TRANSACTIONAL_SNAPSHOT"
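
Elsewhere in this digest the advice is that MVCC (TRANSACTIONAL_SNAPSHOT) is
not yet production ready. A possible workaround, sketched below against a
hypothetical table (the table and columns are illustrative, not from the
original application), is to keep the same template but switch the atomicity
mode:

```sql
-- Sketch only: same WITH template as in this thread, but with
-- atomicity=TRANSACTIONAL instead of TRANSACTIONAL_SNAPSHOT, which avoids
-- the MVCC code path entirely. Table and column names are hypothetical.
CREATE TABLE IF NOT EXISTS example_entity (
  id VARCHAR,
  instanceId VARCHAR,
  payload VARCHAR,
  PRIMARY KEY (id, instanceId)
) WITH "template=partitioned,backups=2,data_region=hypi,affinity_key=instanceId,atomicity=TRANSACTIONAL";
```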

I found https://issues.apache.org/jira/browse/IGNITE-10763 which suggested
the problem was fixed in 2.8 and "is caused by leaked tx stored in
ThreadLocal".

Has anyone else encountered this issue and is there a fix?
Just to be clear, we're definitely not performing any insert/update/merge
operations, only selects when this error occurs.

From that issue I linked to, assuming the problem is still a leaked
ThreadLocal, is there any workaround for this?
We have a managed thread pool (you can see Pool.java in the trace); I've
tried not using it, but I still get the error, presumably because it then
defaults to Spring Boot's request thread pool.


2020-04-13 19:56:31.548 INFO 9 --- [io-1-exec-2] io.hypi.arc.os.gql.
> HypiGraphQLException : GraphQL error, path: null, source: null, msg: null
> javax.cache.CacheException: Transaction is already completed. at org.
> apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(
> IgniteCacheProxyImpl.java:820) at org.apache.ignite.internal.processors.
> cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:753) at org.
> apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(
> GatewayProtectedCacheProxy.java:424) at io.hypi.arc.os.ignite.IgniteRepo.
> findInstanceCtx(IgniteRepo.java:134) at io.hypi.arc.os.handlers.
> BaseHandler.evaluateQuery(BaseHandler.java:38) at io.hypi.arc.os.handlers.
> HttpHandler.lambda$runQuery$0(HttpHandler.java:145) at io.hypi.arc.base.
> Pool.apply(Pool.java:109) at io.hypi.arc.base.Pool.lambda$async$3(Pool.
> java:93) at com.google.common.util.concurrent.TrustedListenableFutureTask$
> TrustedFutureInterruptibleTask.runInterruptibly(
> TrustedListenableFutureTask.java:125) at com.google.common.util.concurrent
> .InterruptibleTask.run(InterruptibleTask.java:69) at com.google.common.
> util.concurrent.TrustedListenableFutureTask.run(
> TrustedListenableFutureTask.java:78) at java.base/java.util.concurrent.
> Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.
> concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.
> concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
> ScheduledThreadPoolExecutor.java:304) at java.base/java.util.concurrent.
> ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java
> :628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: org.
> apache.ignite.transactions.TransactionAlreadyCompletedException:
> Transaction is already completed. at org.apache.ignite.internal.util.
> IgniteUtils$18.apply(IgniteUtils.java:991) at org.apache.ignite.internal.
> util.IgniteUtils$18.apply(IgniteUtils.java:989) at org.apache.ignite.
> internal.util.IgniteUtils.convertException(IgniteUtils.java:1062) at org.
> apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSelect(
> IgniteH2Indexing.java:1292) at org.apache.ignite.internal.processors.query
> .h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1117) at org.
> apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(
> GridQueryProcessor.java:2406) at org.apache.ignite.internal.processors.
> query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2402) at org.
> apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX
> .java:36) at org.apache.ignite.internal.processors.query.
> GridQueryProcessor.executeQuery(GridQueryProcessor.java:2919) at org.
> apache.ignite.internal.processors.query.GridQueryProcessor.
> lambda$querySqlFields$1(GridQueryProcessor.java:2422) at org.apache.ignite
> .internal.processors.query.GridQueryProcessor.executeQuerySafe(
> GridQueryProcessor.java:2460) at org.apache.ignite.internal.processors.
> query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2396) at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.
> querySqlFields(GridQueryProcessor.java:2323) at org.apache.ignite.internal
> .processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:805
> ) ... 16 common frames omitted Caused by: org.apache.ignite.internal.
> transactions.IgniteTxAlreadyCompletedCheckedException: Transaction is
> already completed. at org.apache.ignite.internal.processors.cache.mvcc.
> MvccUtils.checkActive(MvccUtils.java:684) at org.apache.ignite.internal.
> processors.query.h2.IgniteH2Indexing.executeSelect(IgniteH2Indexing.java:
> 1255) ... 26 common frames omitted
>
Regards,
Courtney Robinson
Founder and CEO, Hypi
Tel: +44 208 123 2413 (GMT+0)


Re: JDBC Connection Pooling

2020-04-15 Thread narges saleh
Please note that in my case, the streamers are running on the server side
(as part of different services).

On Wed, Apr 15, 2020 at 6:46 AM narges saleh  wrote:

> So, in effect, I'll be having a pool of streamers, right?
> Would this still be the case if I am using BinaryObjectBuilder to build
> objects to stream the data to a few caches? Should I create a pool of data
> streamers (a few for each cache)?
> I don't want to have to create a new object builder and data streamer if I
> am inserting to the same cache over and over.
>
> On Tue, Apr 14, 2020 at 11:56 AM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> For each connection, its own data streamer will be created on the node side.
>> I think it makes sense to try pooling for data load, but you will need to
>> measure everything, since the pool size depends on a lot of things.
>>
>> вт, 14 апр. 2020 г. в 07:31, narges saleh :
>>
>>> Yes, Evgenii.
>>>
>>> On Mon, Apr 13, 2020 at 10:06 PM Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
 Hi,

 Do you use STREAMING MODE for thin JDBC driver?

 Evgenii

 пн, 13 апр. 2020 г. в 19:33, narges saleh :

> Thanks Alex. I will study the links you provided.
>
> I read somewhere that jdbc datasource is available via Ignite JDBC,
> (which should provide connection pooling).
>
> On Mon, Apr 13, 2020 at 12:31 PM akorensh 
> wrote:
>
>> Hi,
>>   At this point you need to implement connection pooling yourself.
>>   Use
>>
>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ClientConnectorConfiguration.html#setThreadPoolSize-int-
>>   to specify number of threads Ignite creates to service connection
>> requests.
>>
>>   Each new connection will be handled by a separate thread inside
>> Ignite (capped at threadPoolSize, as described above)
>>
>>   ClientConnectorConfiguration is set inside IgniteConfiguration:
>>
>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setClientConnectorConfiguration-org.apache.ignite.configuration.ClientConnectorConfiguration-
>>
>>   More info:
>>
>> https://www.gridgain.com/docs/latest/developers-guide/SQL/JDBC/jdbc-driver#cluster-configuration
>>
>> Thanks, Alex
>>
>>
>>
>>
>


Re: JDBC Connection Pooling

2020-04-15 Thread narges saleh
So, in effect, I'll be having a pool of streamers, right?
Would this still be the case if I am using BinaryObjectBuilder to build
objects to stream the data to a few caches? Should I create a pool of data
streamers (a few for each cache)?
I don't want to have to create a new object builder and data streamer if I
am inserting to the same cache over and over.
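
The reuse being asked about here can be sketched in plain Java. This is a
pattern sketch, not Ignite API code: the Streamer class below is a
hypothetical stand-in for IgniteDataStreamer, and the point is simply that
computeIfAbsent gives you one long-lived streamer per cache instead of a new
one per insert:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Pattern sketch: one long-lived "streamer" per cache, created lazily and
// then reused. Streamer is a hypothetical stand-in for IgniteDataStreamer.
public class StreamerPool {
    static class Streamer {
        final String cacheName;
        Streamer(String cacheName) { this.cacheName = cacheName; }
    }

    private static final Map<String, Streamer> POOL = new ConcurrentHashMap<>();

    // Returns the cache's streamer, creating it at most once per cache name.
    static Streamer streamerFor(String cacheName) {
        return POOL.computeIfAbsent(cacheName, Streamer::new);
    }

    public static void main(String[] args) {
        Streamer a = streamerFor("Employee");
        Streamer b = streamerFor("Employee");   // same instance, reused
        Streamer c = streamerFor("Department"); // different cache, own streamer
        System.out.println(a == b); // true
        System.out.println(a == c); // false
    }
}
```

The same idea applies with the real IgniteDataStreamer, with the extra step
of closing each streamer on shutdown so buffered data is flushed.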

On Tue, Apr 14, 2020 at 11:56 AM Evgenii Zhuravlev 
wrote:

> For each connection, its own data streamer will be created on the node side.
> I think it makes sense to try pooling for data load, but you will need to
> measure everything, since the pool size depends on a lot of things.
>
> вт, 14 апр. 2020 г. в 07:31, narges saleh :
>
>> Yes, Evgenii.
>>
>> On Mon, Apr 13, 2020 at 10:06 PM Evgenii Zhuravlev <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Do you use STREAMING MODE for thin JDBC driver?
>>>
>>> Evgenii
>>>
>>> пн, 13 апр. 2020 г. в 19:33, narges saleh :
>>>
 Thanks Alex. I will study the links you provided.

 I read somewhere that jdbc datasource is available via Ignite JDBC,
 (which should provide connection pooling).

 On Mon, Apr 13, 2020 at 12:31 PM akorensh 
 wrote:

> Hi,
>   At this point you need to implement connection pooling yourself.
>   Use
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ClientConnectorConfiguration.html#setThreadPoolSize-int-
>   to specify number of threads Ignite creates to service connection
> requests.
>
>   Each new connection will be handled by a separate thread inside
> Ignite (capped at threadPoolSize, as described above)
>
>   ClientConnectorConfiguration is set inside IgniteConfiguration:
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setClientConnectorConfiguration-org.apache.ignite.configuration.ClientConnectorConfiguration-
>
>   More info:
>
> https://www.gridgain.com/docs/latest/developers-guide/SQL/JDBC/jdbc-driver#cluster-configuration
>
> Thanks, Alex
>
>
>
>
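
The ClientConnectorConfiguration advice quoted in this thread translates to a
small piece of node configuration. A sketch in Spring XML, with an
illustrative pool size (16 is not a recommendation):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="clientConnectorConfiguration">
    <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
      <!-- Threads servicing thin-client/JDBC connections; each connection is
           handled by one thread, so this caps concurrent request handling.
           The value 16 is illustrative. -->
      <property name="threadPoolSize" value="16"/>
    </bean>
  </property>
</bean>
```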



Re: Versions of windows supported and meaning of log error message - This operating system has been tested less rigorously

2020-04-15 Thread Rohan Kurian
Thanks a lot Pavel.

On Mon, Apr 13, 2020 at 5:54 PM Pavel Tupitsyn  wrote:

> Hi Rohan,
>
> I'm removing it right now. You can expect it in the next non-patch release.
>
> Thanks,
> Pavel
>
> On Tue, Apr 7, 2020 at 7:42 AM Rohan Kurian 
> wrote:
>
>> Hi Pavel,
>> Thanks for your reply. It does make things clear for us.
>>
>> One request, please accomodate if possible. You mentioned that this
>> message "This operating system has been tested less rigorously" is old and
>> misleading.
>>
>> Can this be removed in a future release? Also we would like to know when
>> it is removed.
>>
>> Thanks,
>> Rohan
>>
>> On Tue, Mar 24, 2020 at 10:36 PM Pavel Tupitsyn 
>> wrote:
>>
>>> This message is old and misleading, sorry.
>>>
>>> - Yes, we test on Windows as well as on Linux
>>> - Windows 10 is actually the most tested Windows version, I believe -
>>> all Windows TeamCity agents are at version 10
>>>
>>> > Can we go ahead with using ignite on windows?
>>> Yes!
>>>
>>>
>>> On Tue, Mar 24, 2020 at 8:00 PM rohankur  wrote:
>>>
 We are introducing Ignite in a product and expect it to be used by
 customers who
 will have Windows systems primarily.
 I see in this link
 https://apacheignite.readme.io/docs/getting-started
 the following list of OS
 Linux (any flavor),
 Mac OSX (10.6 and up)
 Windows (XP and up),
 Windows Server (2008 and up)
 Oracle Solaris

 However, when I start ignite on windows servers I see the following log
 message
 *This operating system has been tested less rigorously: Windows 10 10.0
 amd64*. Our team will appreciate the feedback if you experience any
 problems
 running ignite in this environment.

 WARN  org.apache.ignite.internal.GridDiagnostic - *This operating
 system has
 been tested less rigorously: Windows Server 2008 6.0 amd64.*

 Is there a subset of windows OS versions that have not been tested
 rigorously? Broadly speaking - What is the difference between rigorous
 and
 less rigorous testing?
 Can we go ahead with using ignite on windows?

 Thanks,
 Rohan






>>>


Re: org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Failed to reconnect to cluster (will retry): class o.a.i.IgniteCheckedException: Failed to deserialize object with given class loader: org.spr

2020-04-15 Thread Rajan Ahlawat
Shared file with email-id:
e.zhuravlev...@gmail.com

We have a single instance of Ignite. The file contains all logs from Mar
30, 2019. Line 6429 is the first occurrence.

On Tue, Apr 14, 2020 at 8:27 PM Evgenii Zhuravlev
 wrote:
>
> Can you provide full log files from all nodes? It's impossible to find the
> root cause from this.
>
> Evgenii
>
> вт, 14 апр. 2020 г. в 07:49, Rajan Ahlawat :
>>
>> server starts with following configuration:
>>
>> ignite_application-1-2020-03-17.log:14:[2020-03-17T08:23:33,664][INFO
>> ][main][IgniteKernal%igniteStart] IgniteConfiguration
>> [igniteInstanceName=igniteStart, pubPoolSize=32, svcPoolSize=32,
>> callbackPoolSize=32, stripedPoolSize=32, sysPoolSize=30,
>> mgmtPoolSize=4, igfsPoolSize=32, dataStreamerPoolSize=32,
>> utilityCachePoolSize=32, utilityCacheKeepAliveTime=6,
>> p2pPoolSize=2, qryPoolSize=32,
>> igniteHome=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin,
>> igniteWorkDir=/home/patrochandan01/ignite/apache-ignite-fabric-2.6.0-bin/work,
>> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
>> nodeId=53396cb7-1b66-43da-bf10-ebb5f7cc9693,
>> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@42b3b079,
>> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
>> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
>> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
>> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
>> marsh=null, reconCnt=100, reconDelay=1, maxAckTimeout=60,
>> forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null],
>> segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
>> allResolversPassReq=true, segChkFreq=1,
>> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
>> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
>> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@6692b6c6,
>> locAddr=null, locHost=null, locPort=47100, locPortRange=100,
>> shmemPort=-1, directBuf=true, directSndBuf=false,
>> idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
>> reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=1024,
>> slowClientQueueLimit=1000, nioSrvr=null, shmemSrv=null,
>> usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true,
>> filterReachableAddresses=false, ackSndThreshold=32,
>> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null,
>> boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=16,
>> selectorSpins=0, addrRslvr=null,
>> ctxInitLatch=java.util.concurrent.CountDownLatch@1cd629b3[Count = 1],
>> stopping=false,
>> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@589da3f3],
>> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@39d76cb5,
>> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
>> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@1cb346ea,
>> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
>> txCfg=org.apache.ignite.configuration.TransactionConfiguration@4c012563,
>> cacheSanityCheckEnabled=true, discoStartupDelay=6,
>> deployMode=SHARED, p2pMissedCacheSize=100, locHost=null,
>> timeSrvPortBase=31100, timeSrvPortRange=100,
>> failureDetectionTimeout=1, clientFailureDetectionTimeout=3,
>> metricsLogFreq=6, hadoopCfg=null,
>> connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@14a50707,
>> odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
>> [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
>> grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null,
>> binaryCfg=null, memCfg=null, pstCfg=null,
>> dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040,
>> sysCacheMaxSize=104857600, pageSize=0, concLvl=25,
>> dfltDataRegConf=DataRegionConfiguration [name=Default_Region,
>> maxSize=20971520, initSize=15728640, swapPath=null,
>> pageEvictionMode=RANDOM_2_LRU, evictionThreshold=0.9,
>> emptyPagesPoolSize=100, metricsEnabled=false,
>> metricsSubIntervalCount=5, metricsRateTimeInterval=6,
>> persistenceEnabled=false, checkpointPageBufSize=0], storagePath=null,
>> checkpointFreq=18, lockWaitTime=1, checkpointThreads=4,
>> checkpointWriteOrder=SEQUENTIAL, walHistSize=20, walSegments=10,
>> walSegmentSize=67108864, walPath=db/wal,
>> walArchivePath=db/wal/archive, metricsEnabled=false, walMode=LOG_ONLY,
>> walTlbSize=131072, walBuffSize=0, walFlushFreq=2000,
>> walFsyncDelay=1000, walRecordIterBuffSize=67108864,
>> alwaysWriteFullPages=false,
>> fileIOFactory=org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory@4bd31064,
>> metricsSubIntervalCnt=5, metricsRateTimeInterval=6,
>> walAutoArchiveAfterInactivity=-1, writeThrottlingEnabled=false,
>> walCompactionEnabled=false], activeOnStart=true, autoActivation=true,
>> longQryWarnTimeout=3000, sqlConnCfg=null,
>> cliConnCfg=ClientConnectorConfiguration [host=null, port=10800,
>> 
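>>
>> For reference, a few of the intact settings from the dump above would look
>> like this in Spring XML form. This is only an illustrative sketch, not the
>> poster's actual configuration file, and the values that were truncated in
>> the archive are deliberately left out:
>>
>> <!-- Illustrative sketch reconstructing intact values from the dump above. -->
>> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>>   <property name="igniteInstanceName" value="igniteStart"/>
>>   <property name="peerClassLoadingEnabled" value="false"/> <!-- p2pEnabled=false -->
>>   <property name="networkTimeout" value="5000"/>           <!-- netTimeout=5000 -->
>>   <property name="communicationSpi">
>>     <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>>       <property name="localPort" value="47100"/>
>>       <property name="messageQueueLimit" value="1024"/>
>>     </bean>
>>   </property>
>> </bean>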

Oracle BLOB, CLOB data type mapping with Ignite cache

2020-04-15 Thread Harshvardhan Kadam
Hello,

I am using Ignite 2.8.0 as a cache layer with Oracle 11g as 3rd-party
persistence.

I have a couple of questions:
1) Does Ignite support Oracle's BLOB data type? If yes, what should the
attribute type be in the generated POJO (Object or something else)?
2) As per the official Ignite documentation, it supports a Binary data type.
Which Oracle data type matches Ignite's Binary? And again, what should the
attribute type be in the generated POJO?
3) Does Ignite support Oracle's CLOB data type? If yes, what should the
attribute type be in the generated POJO (String or something else)?
4) What does TYPE_NAME 'Other' signify in the cache description obtained using
the sqlline '!describe' command?



