Re: Random2LruPageEvictionTracker causing hanging in our integration tests

2020-04-30 Thread Ilya Kasnacheev
Hello!

In your stack trace I don't see a single 'org.apache.ignite' line. Why do
you think Apache Ignite is to blame here?

Regards,
-- 
Ilya Kasnacheev


Wed, Apr 29, 2020 at 22:26, scottmf:

> Hi Anton, Just to be clear, the stack trace is from a thread dump that I
> took while the process was hanging indefinitely.
>
> Although I can reproduce this easily in my service, I can't share the code
> with you. I'll attempt to get a generic use case to hang in this manner and
> post it to github.
>
> The full stack is below.
>
> There is no use case for eviction in this particular scenario. Like I said, it is
> just for integration testing. My only concern is to ensure that there is no
> bug that would hit us in production.
>
> It is eviction for an in-memory cluster, no persistence, on-heap or near
> cache.
>
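For context, the tracker named in the subject is engaged by a data-region
configuration along these lines (a minimal sketch; the region name and sizes
are illustrative, not taken from the original setup):

    import org.apache.ignite.configuration.DataPageEvictionMode;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    static IgniteConfiguration evictingConfig() {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("inMemoryRegion");          // illustrative name
        region.setMaxSize(256L * 1024 * 1024);     // 256 MB cap, no persistence
        // RANDOM_2_LRU is the mode that drives evictDataPage() in the trace below.
        region.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
        return cfg;
    }
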
> "Test worker" #22 prio=5 os_prio=31 cpu=299703.41ms elapsed=317.18s 
> tid=0x7ff3cfc8c800 nid=0x7203 runnable  [0x75b38000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.ignite.internal.processors.cache.persistence.evict.Random2LruPageEvictionTracker.evictDataPage(Random2LruPageEvictionTracker.java:152)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpace(IgniteCacheDatabaseSharedManager.java:1086)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.ensureFreeSpace(GridCacheMapEntry.java:4513)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerSet(GridCacheMapEntry.java:1461)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:745)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.localFinish(GridNearTxLocal.java:3850)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:440)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:390)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$25.apply(GridNearTxLocal.java:4129)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$25.apply(GridNearTxLocal.java:4118)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:4118)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commit(GridNearTxLocal.java:4086)
>   at 
> org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor$4.applyx(DataStructuresProcessor.java:587)
>   at 
> org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor$4.applyx(DataStructuresProcessor.java:556)
>   at 
> org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.retryTopologySafe(DataStructuresProcessor.java:1664)
>   at 
> org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.getAtomic(DataStructuresProcessor.java:556)
>   at 
> org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.reentrantLock(DataStructuresProcessor.java:1361)
>   at 
> org.apache.ignite.internal.IgniteKernal.reentrantLock(IgniteKernal.java:4136)
>   at jdk.internal.reflect.GeneratedMethodAccessor713.invoke(Unknown 
> Source)
>   at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@11.0.5/DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(java.base@11.0.5/Method.java:566)
>   at 
> com.example.symphony.cmf.ignite.IgniteInitializer$1.invoke(IgniteInitializer.java:158)
>   at com.sun.proxy.$Proxy205.reentrantLock(Unknown Source)
>   at 
> com.example.data.store.jdbc.cache.CacheService.getCount(CacheService.java:47)
>   at 
> com.example.data.store.jdbc.cache.CacheService$$FastClassBySpringCGLIB$$7efa9131.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
>

Re: Backups not being done for SQL caches

2020-04-30 Thread Ilya Kasnacheev
Hello!

Do you have persistence? If so, are you sure that all 3 of your nodes are
in baseline topology?
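
If you are unsure, one way to check from code (a minimal sketch; it assumes
you have an Ignite node handle, and control.sh --baseline gives the same
information from the command line):

    import java.util.Collection;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.cluster.BaselineNode;

    static void printBaseline(Ignite ignite) {
        // With persistence enabled, backups are only assigned among baseline
        // nodes, so a server missing from the baseline holds no partitions.
        Collection<BaselineNode> baseline = ignite.cluster().currentBaselineTopology();
        int servers = ignite.cluster().forServers().nodes().size();
        System.out.println("baseline=" + (baseline == null ? 0 : baseline.size())
            + ", servers=" + servers);
    }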

Regards,
-- 
Ilya Kasnacheev


Thu, Apr 30, 2020 at 16:09, Courtney Robinson:

> We're continuing migration from using the Java API to purely SQL and have
> encountered a situation on our development cluster where even though ALL
> tables are created with backups=2, as in
>
> template=partitioned,backups=2,affinity_key=instanceId,atomicity=ATOMIC,cache_name=<name here>
>
> In the logs, with 3 nodes in this test environment we have:
>
> 2020-04-29 22:55:50.083 INFO 9 --- [orker-#40%hypi%]
>> o.apache.ignite.internal.exchange.time : Started exchange init
>> [topVer=AffinityTopologyVersion [topVer=27, minorTopVer=1], crd=true,
>> evt=DISCOVERY_CUSTOM_EVT, evtNode=e0b6889f-219b-4686-ab52-725bfe7848b2,
>> customEvt=DynamicCacheChangeBatch
>> [id=a81a0e7c171-3f0fbbc0-b996-448c-98f7-119d7e485f04, reqs=ArrayList
>> [DynamicCacheChangeRequest [cacheName=hypi_whatsapp_Item, hasCfg=true,
>> nodeId=e0b6889f-219b-4686-ab52-725bfe7848b2, clientStartOnly=false,
>> stop=false, destroy=false, disabledAfterStart=false]],
>> exchangeActions=ExchangeActions [startCaches=[hypi_whatsapp_Item],
>> stopCaches=null, startGrps=[hypi_whatsapp_Item], stopGrps=[],
>> resetParts=null, stateChangeRequest=null], startCaches=false],
>> allowMerge=false, exchangeFreeSwitch=false]
>> 2020-04-29 22:55:50.280 INFO 9 --- [orker-#40%hypi%]
>> o.a.i.i.p.cache.GridCacheProcessor : Started cache
>> [name=hypi_whatsapp_Item, id=1391701259, dataRegionName=hypi,
>> mode=PARTITIONED, atomicity=ATOMIC, backups=2, mvcc=false]
>> 2020-04-29 22:55:50.289 INFO 9 --- [ sys-#648%hypi%]
>> o.a.i.i.p.a.GridAffinityAssignmentCache : Local node affinity assignment
>> distribution is not ideal [cache=hypi_whatsapp_Item,
>> expectedPrimary=1024.00, actualPrimary=0, expectedBackups=2048.00,
>> actualBackups=0, warningThreshold=50.00%]
>> 2020-04-29 22:55:50.293 INFO 9 --- [orker-#40%hypi%]
>> .c.d.d.p.GridDhtPartitionsExchangeFuture : Finished waiting for partition
>> release future [topVer=AffinityTopologyVersion [topVer=27, minorTopVer=1],
>> waitTime=0ms, futInfo=NA, mode=DISTRIBUTED]
>> 2020-04-29 22:55:50.330 INFO 9 --- [orker-#40%hypi%]
>> .c.d.d.p.GridDhtPartitionsExchangeFuture : Finished waiting for partitions
>> release latch: ServerLatch [permits=0, pendingAcks=HashSet [],
>> super=CompletableLatch [id=CompletableLatchUid [id=exchange,
>> topVer=AffinityTopologyVersion [topVer=27, minorTopVer=1
>
>
> You can see the line
>
> Local node affinity assignment distribution is not ideal
>
>
> but it's clear that backups=2 is there. To verify, I stopped 2 of the
> three nodes and sure enough I get the exception
>
> Failed to find data nodes for cache: InstanceMapping
>
>
> Is there some additional configuration needed for partitioned SQL caches
> to have the backups as configured?
> Until now we used the Java API with put/get and didn't have an issue with
> backups.
>
> Full exception below:
>
> org.apache.ignite.cache.CacheServerNotFoundException: Failed to find data 
> nodes for cache: InstanceMapping
>>at 
>> org.apache.ignite.internal.processors.query.h2.twostep.ReducePartitionMapper.stableDataNodes(ReducePartitionMapper.java:197)
>>at 
>> org.apache.ignite.internal.processors.query.h2.twostep.ReducePartitionMapper.nodesForPartitions(ReducePartitionMapper.java:119)
>>at 
>> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:466)
>>at 
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$7.iterator(IgniteH2Indexing.java:1687)
>>at 
>> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iter(QueryCursorImpl.java:106)
>>at 
>> org.apache.ignite.internal.processors.cache.query.RegisteredQueryCursor.iter(RegisteredQueryCursor.java:66)
>>at 
>> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:96)
>>at io.hypi.arc.os.ignite.IgniteRepo.findInstanceCtx(IgniteRepo.java:140)
>>at io.hypi.arc.os.handlers.BaseHandler.evaluateQuery(BaseHandler.java:70)
>>at io.hypi.arc.os.handlers.HttpHandler.runQuery(HttpHandler.java:141)
>>at io.hypi.arc.os.handlers.HttpHandler.graphql(HttpHandler.java:135)
>>at jdk.internal.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
>>at 
>> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>at java.base/java.lang.reflect.Method.invo

Re: Event Listners when we use DataStreamer

2020-04-30 Thread Ilya Kasnacheev
Hello!

Does this change if you set allowOverwrite(true) on the Data Streamer?
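
For reference, a minimal sketch (the cache name is hypothetical):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;

    static void stream(Ignite ignite) {
        try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
            // By default the streamer skips existing keys and bypasses parts of
            // the regular cache update pipeline; allowOverwrite(true) routes
            // updates through it, which also affects which events fire.
            streamer.allowOverwrite(true);
            streamer.addData(1, "value-1");
        }
    }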

Regards,
-- 
Ilya Kasnacheev


Thu, Apr 30, 2020 at 16:29, krkumar24061...@gmail.com <krkumar24061...@gmail.com>:

> Hi Guys - When I am adding entries into cache through DataStreamer.addData,
> does it invoke the event listeners configured for EVT_CACHE_ENTRY_CREATED,
> EventType.EVT_CACHE_OBJECT_PUT??
>
> I am configuring a local listener in the following way
>
> engine.events().localListen(cacheChangeHandler,
> EventType.EVT_CACHE_ENTRY_CREATED,
>
> EventType.EVT_CACHE_ENTRY_DESTROYED, EventType.EVT_CACHE_OBJECT_PUT,
> EventType.EVT_CACHE_OBJECT_READ,
> EventType.EVT_CACHE_OBJECT_REMOVED,
>
> EventType.EVT_CACHE_OBJECT_EXPIRED);
>
> and in the configuration xml, I have the following:
> <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_ENTRY_CREATED"/>
> <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_ENTRY_DESTROYED"/>
> <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
> <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ"/>
> <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
> <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_EXPIRED"/>
>
>
> Thanx and Regards,
> KR Kumar
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: EPOCH seconds in future!

2020-04-28 Thread Ilya Kasnacheev
Hello!

Yes, you are right, according to
https://issues.apache.org/jira/browse/IGNITE-11472 TIMESTAMP WITH TIME ZONE
not supported by Ignite.

I think you will have to work around this quirky behavior. I have left a
comment about this issue specifically.
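
One possible workaround (a sketch; the table layout is hypothetical): keep a
plain TIMESTAMP column and store the zone separately, e.g.:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    static void createTable(Ignite ignite) {
        // Any existing cache can be used as a handle to issue DDL through SQL.
        ignite.cache("TEST1").query(new SqlFieldsQuery(
            "CREATE TABLE event (id VARCHAR PRIMARY KEY, ts TIMESTAMP, tz VARCHAR)")).getAll();
    }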

Regards,
-- 
Ilya Kasnacheev


Mon, Apr 27, 2020 at 15:15, dbutkovic:

> Hi Ilya,
>
> thanks for the reply; I think that Ignite CURRENT_TIMESTAMP() is NOT TIMESTAMP
> WITH TIME ZONE.
>
> Below is a test inserting CURRENT_TIMESTAMP into two tables, one with data
> type TIMESTAMP and the second with TIMESTAMP WITH TIME ZONE.
>
> First of all,
> in the Ignite documentation I can't find that CURRENT_TIMESTAMP() returns
> TIMESTAMP WITH TIME ZONE.
>
> In the H2 documentation we can see that H2 has two data types:
> TIMESTAMP and TIMESTAMP WITH TIME ZONE
> http://www.h2database.com/html/datatypes.html#timestamp_with_time_zone_type
>
> In H2 CURRENT_TIMESTAMP returns the current timestamp with time zone.
>
> I think that in Ignite CURRENT_TIMESTAMP is only a timestamp without
> TIME ZONE.
>
> https://apacheignite-sql.readme.io/docs/current_timestamp
>
>
> 0: jdbc:ignite:thin://192.168.50.95/> SELECT CURRENT_TIMESTAMP();
> ++
> |  CURRENT_TIMESTAMP()   |
> ++
> | 2020-04-27 13:59:39.814|
> ++
>
> In the first table, inserting CURRENT_TIMESTAMP() into data type timestamp is OK.
>
> CREATE TABLE TEST1
> (
> id  varchar(10),
> time1   timestamp,
> PRIMARY KEY (id)
> ) WITH "CACHE_NAME=TEST1, DATA_REGION=PersistDataRegion,
> TEMPLATE=REPLICATED, BACKUPS=1";
>
> 0: jdbc:ignite:thin://192.168.50.95/> INSERT INTO TEST1 (id, time1) values
> ('a', CURRENT_TIMESTAMP());
> 1 row affected (0.051 seconds)
>
>
> In the second table, inserting CURRENT_TIMESTAMP() into data type timestamp
> with time zone is NOT OK.
>
> CREATE TABLE TEST2
> (
> id  varchar(10),
> time1   timestamp with time zone,
> PRIMARY KEY (id)
> ) WITH "CACHE_NAME
>
>
> 0: jdbc:ignite:thin://192.168.50.95/> INSERT INTO TEST2 (id, time1) values
> ('a', CURRENT_TIMESTAMP());
> Error: class org.apache.ignite.IgniteException: Failed to execute SQL
> query.
> Hexadecimal string with odd number of characters: "2020-04-27
> 13:59:00.599";
> SQL statement:
> SELECT
> TABLE.ID,
> TABLE.TIME1
> FROM TABLE(ID VARCHAR(10)=('a',), TIME1 OTHER=(CURRENT_TIMESTAMP(),))
> [90003-197] (state=5,code=1)
> java.sql.SQLException: class org.apache.ignite.IgniteException: Failed to
> execute SQL query. Hexadecimal string with odd number of characters:
> "2020-04-27 13:59:00.599"; SQL statement:
> SELECT
> TABLE.ID,
> TABLE.TIME1
> FROM TABLE(ID VARCHAR(10)=('a',), TIME1 OTHER=(CURRENT_TIMESTAMP(),))
> [90003-197]
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:750)
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:212)
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:475)
> at sqlline.Commands.execute(Commands.java:823)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
>
>
> Please, do you know why I can't insert CURRENT_TIMESTAMP() into data type
> timestamp with time zone?
>
>
> Best regards
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Insertions slow on high load on Ignite 2.7.5

2020-04-27 Thread Ilya Kasnacheev
Hello!

Your understanding is correct.

Yes, if you have more RAM and larger data region, the process should slow
down less rapidly.
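
For example (a sketch; the size is illustrative):

    import org.apache.ignite.configuration.DataStorageConfiguration;

    static DataStorageConfiguration biggerRegion() {
        DataStorageConfiguration ds = new DataStorageConfiguration();
        // A larger default region keeps more of the data set in RAM, so page
        // replacement kicks in later; exact sizing depends on the host.
        ds.getDefaultDataRegionConfiguration().setMaxSize(16L * 1024 * 1024 * 1024); // 16 GB
        return ds;
    }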

Regards,
-- 
Ilya Kasnacheev


Mon, Apr 27, 2020 at 17:56, Shubham Agrawal:

> Hi Ilya,
>
> This data loading is not a one-time activity in our scenario. So I guess
> turning off the WAL would not be a solution, since turning it off and on is
> not feasible every time the data loading happens. Please let me know if my
> understanding is correct.
>
> I'll try to increase time between checkpoints and also increase checkpoint
> page buffer and let you know.
>
> In the meantime, just wanted to check if the Disk I/O and memory also has
> an effect? Currently, I am using 32 GB machine and increasing the memory
> will make any difference?
>
> Thanks & Regards,
> Shubham Agrawal
>
>
> On Mon, Apr 27, 2020 at 7:31 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> My recommendation is to increase time between checkpoints
>> (checkpointFrequency) and, maybe, also increase checkpoint page buffer.
>>
>> If you do data loading in bursts, it may make sense to turn off WAL while
>> you do data loading.
>>
>> You may still expect some performance degradation as time goes on (what's
>> your total allocated after the last 2.5 million?)
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, Apr 27, 2020 at 00:32, Shubham Agrawal <agrawalshubham1...@gmail.com>:
>>
>>> Hi Team,
>>>
>>> I am using ignite 2.7.5.
>>>
>>> I am trying to insert 25 million records in the ignite persistent table.
>>>
>>> Observations:
>>> I am observing that initial 2.5 million records took 5 mins to insert,
>>> next 2.5 million records took 8 mins to insert, the following 2.5 million
>>> records took 12 mins to insert and it kept on increasing. The last 2.5 million records
>>> took over 40 minutes to insert. The CPU of the client and server is around
>>> 30%, so nothing on that front. Heap usage looks normal. I have tried
>>> inserting to the table with and without indexes but nothing seems to make a
>>> difference.
>>>
>>> Configurations:
>>> I am running a 3 node cluster, 16 Cores 32 GB machines. Ignite Version:
>>> 2.7.5
>>>
>>> Things I have tried:
>>> JDBI Implementation -
>>> 1. @SQLBatch does not work for ignite giving the below error
>>> org.skife.jdbi.v2.exceptions.TransactionException: Failed to start
>>> transaction at
>>> org.skife.jdbi.v2.tweak.transactions.LocalTransactionHandler.begin(LocalTransactionHandler.java:57)
>>> at org.skife.jdbi.v2.BasicHandle.begin(BasicHandle.java:159)
>>>
>>> 2. Tried ?streaming=true, but with no effect
>>>
>>> 3. Increasing threads to insert, but not much effect
>>>
>>> JDBC Implementation
>>> 1. Tried ?streaming=true, but with no effect
>>> 2. Tried batching, but the performance actually degraded
>>> 3. Tried ?streaming=true with and without batching but with no effect.
>>>
>>> Want to achieve a common pattern in insertions, like 5 mins or 8 mins
>>> constantly for 25 million insertions
>>>
>>> Please let me know your thoughts on the same. Your inputs could help me
>>> a lot. Thanks a lot.
>>>
>>> Regards,
>>> Shubham Agrawal
>>>
>>


Re: Insertions slow on high load on Ignite 2.7.5

2020-04-27 Thread Ilya Kasnacheev
Hello!

My recommendation is to increase time between checkpoints
(checkpointFrequency) and, maybe, also increase checkpoint page buffer.

If you do data loading in bursts, it may make sense to turn off WAL while
you do data loading.
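
In configuration terms that is roughly (a sketch; the values and the cache
name are illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.configuration.DataStorageConfiguration;

    static DataStorageConfiguration tunedStorage() {
        DataStorageConfiguration ds = new DataStorageConfiguration();
        ds.setCheckpointFrequency(10 * 60 * 1000L); // 10 min between checkpoints (default is 3 min)
        ds.getDefaultDataRegionConfiguration()
            .setCheckpointPageBufferSize(2L * 1024 * 1024 * 1024); // 2 GB checkpoint page buffer
        return ds;
    }

    // WAL can be toggled per cache around a bulk load:
    static void bulkLoad(Ignite ignite) {
        ignite.cluster().disableWal("myCache"); // cache name is hypothetical
        try {
            // ... load the batch ...
        }
        finally {
            ignite.cluster().enableWal("myCache");
        }
    }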

You may still expect some performance degradation as time goes on (what's your
total allocated after the last 2.5 million?)

Regards,
-- 
Ilya Kasnacheev


Mon, Apr 27, 2020 at 00:32, Shubham Agrawal:

> Hi Team,
>
> I am using ignite 2.7.5.
>
> I am trying to insert 25 million records in the ignite persistent table.
>
> Observations:
> I am observing that initial 2.5 million records took 5 mins to insert,
> next 2.5 million records took 8 mins to insert, the following 2.5 million
> records took 12 mins to insert and it kept on increasing. The last 2.5 million records
> took over 40 minutes to insert. The CPU of the client and server is around
> 30%, so nothing on that front. Heap usage looks normal. I have tried
> inserting to the table with and without indexes but nothing seems to make a
> difference.
>
> Configurations:
> I am running a 3 node cluster, 16 Cores 32 GB machines. Ignite Version:
> 2.7.5
>
> Things I have tried:
> JDBI Implementation -
> 1. @SQLBatch does not work for ignite giving the below error
> org.skife.jdbi.v2.exceptions.TransactionException: Failed to start
> transaction at
> org.skife.jdbi.v2.tweak.transactions.LocalTransactionHandler.begin(LocalTransactionHandler.java:57)
> at org.skife.jdbi.v2.BasicHandle.begin(BasicHandle.java:159)
>
> 2. Tried ?streaming=true, but with no effect
>
> 3. Increasing threads to insert, but not much effect
>
> JDBC Implementation
> 1. Tried ?streaming=true, but with no effect
> 2. Tried batching, but the performance actually degraded
> 3. Tried ?streaming=true with and without batching but with no effect.
>
> Want to achieve a common pattern in insertions, like 5 mins or 8 mins
> constantly for 25 million insertions
>
> Please let me know your thoughts on the same. Your inputs could help me a
> lot. Thanks a lot.
>
> Regards,
> Shubham Agrawal
>


Re: EPOCH seconds in future!

2020-04-27 Thread Ilya Kasnacheev
Hello!

In Ignite, CURRENT_TIMESTAMP() is TIMESTAMP WITH TIME ZONE.

Regards,
-- 
Ilya Kasnacheev


Sun, Apr 26, 2020 at 14:51, dbutkovic:

> Hi Ilya,
>
> thanks a lot for the reply,
> it is surprising that EPOCH is not always the same regardless of the
> timezone.
> I did a test on two Ignite instances, one on a host with timezone
> 'Europe/Zagreb' and the other on UTC.
> The EPOCH obtained in bash is the same on both hosts, but the EPOCH obtained
> in Ignite SQL is not the same.
>
> When I select CURRENT_TIMESTAMP(), Ignite does not return timezone
> information; how do I tell whether the returned timestamp is local time or UTC?
>
> 0: jdbc:ignite:thin://192.168.50.95/> SELECT CURRENT_TIMESTAMP(),
> FORMATDATETIME( CURRENT_TIMESTAMP(), '-MM-dd HH:mm:ss z', 'en',
> 'Europe/Zagreb');
>
> +++
> |  CURRENT_TIMESTAMP()   | FORMATDATETIME(CURRENT_TIMESTAMP(),
> '-MM-dd HH:mm:ss z', 'en', 'Europe/Zag |
>
> +++
> | 2020-04-26 13:49:36.413| 2020-04-26 13:49:36 CEST
>
> |
>
> +++
> 1 row selected (0.002 seconds)
>
>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2557/epoch_Ignite.png>
>
>
> Best regards
> Dren
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-24 Thread Ilya Kasnacheev
Hello!

I have added a comment to this ticket describing why it happens and how
to fix it.

In Ignite, when you get a key from your value binary object, it's actually a
wrapper pointing at some position in its parent (value) binary object. When
you try to put it into a cache, indexing cannot process it correctly.

We should add code which tries to detach such objects and throws an
exception when one cannot be detached (for example, if it references another
object inside the parent).
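
Until that is in place, one workaround that may help (a sketch, not verified
against every case): rebuild the extracted key through the binary builder so
it no longer points into the parent object.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.binary.BinaryObject;

    static BinaryObject detachedKey(Ignite ignite, BinaryObject value, String keyField) {
        BinaryObject key = value.field(keyField);
        // Rebuilding through the builder produces a standalone binary object
        // instead of a wrapper over a region of the parent.
        return ignite.binary().builder(key).build();
    }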

Regards,
-- 
Ilya Kasnacheev


Fri, Apr 17, 2020 at 20:52, akorensh:

> Maxim,
>   I've an appropriate ticket:
> https://issues.apache.org/jira/browse/IGNITE-12911
> Thanks, Alex
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Unable to initiate IgniteContext in spark-shell

2020-04-24 Thread Ilya Kasnacheev
Hello!


Caused by: java.io.InvalidClassException:
javax.cache.configuration.MutableConfiguration; local class
incompatible: stream classdesc serialVersionUID = 201405, local class
serialVersionUID = 201306200821

I think this is caused by different versions of jcache dependency on
your nodes. One of them likely has 1.0 while the other has 1.1. Make sure
they match.

Regards,

-- 
Ilya Kasnacheev


Tue, Apr 21, 2020 at 12:00, ameyakulkarni00:

> Hi
> I am trying to do a POC with Apache Ignite and Spark for improving our
> Spark application performance.
> I have a 10 node Dev cluster ( centos 7, HDP 3.1, spark 2.3.2 ).
> I have installed apache ignite (2.8.0) on 5 of those servers. The
> installation was smooth and all 5 nodes were live with default
> configuration
> and were identified by each other.
>
> I am simply trying to test apache ignite using spark-shell from one of the
> nodes where ignite is installed.
> I am getting the below error on executing :
>
> val ic = new IgniteContext(sc,()=> new IgniteConfiguration())
>
> sparkShellError.txt
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2841/sparkShellError.txt>
>
>
> And the below error in the ignite process:
> igniteProcessError.txt
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2841/igniteProcessError.txt>
>
>
> Kindly help me in fixing this. Or kindly point me to the right direction.
>
> Regards
> Ameya
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How can I delete entries from a table using pyignite?

2020-04-24 Thread Ilya Kasnacheev
Hello!

If there are any images, we never got to see these. Can you link them on
some external resource?

Thanks,
-- 
Ilya Kasnacheev


Mon, Apr 20, 2020 at 18:25, Jueverhard:

> I have an Apache Ignite database running which I want to interact with using
> the Python thin client (pyignite). I've already performed create, read and
> update operations, but I have problems with the delete ones. For now, even
> if the submission of a delete request does not raise any error, the entries
> that are supposed to be deleted are not.
>
> I've tried deleting those same entries running the same delete query in
> terminal via  and this does successfully remove the targeted entries.
>
> Here is how I unsuccessfully tried to delete data:
>
>
> Any help would be greatly appreciated, thanks !
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to clear table data quickly?

2020-04-24 Thread Ilya Kasnacheev
Hello!

You can do cache.clear() or you can drop and re-create the table (or cache).

I'm not sure what will be faster. It depends, since the latter will cause a
Partition Map Exchange, while the former will be executed entirely in the
background.
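
Both variants, as a sketch (the names are hypothetical; SQL tables get a
cache named SQL_PUBLIC_<TABLE> by default):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    static void clearData(Ignite ignite) {
        // Option 1: keep the table/cache and remove all entries.
        ignite.cache("SQL_PUBLIC_MYTABLE").clear();

        // Option 2: drop the table entirely (triggers a partition map exchange).
        ignite.cache("SQL_PUBLIC_MYTABLE").query(
            new SqlFieldsQuery("DROP TABLE MyTable")).getAll();
    }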

Regards,
-- 
Ilya Kasnacheev


Tue, Apr 21, 2020 at 05:39, 18624049226 <18624049...@163.com>:

> Hi community,
>
> For some tables with a large amount of data, how to quickly clear the
> data in this table?
>
>


Re: EPOCH seconds in future!

2020-04-24 Thread Ilya Kasnacheev
Hello again!

I guess that EPOCH is calculated relative to current time zone, not UTC.
This means you will have to adjust it:
~/Downloads/apache-ignite-2.8.0-bin% date
Fri Apr 24 18:16:55 MSK 2020
~/Downloads/apache-ignite-2.8.0-bin% bin/sqlline.sh
sqlline version 1.3.0
sqlline> !connect jdbc:ignite:thin://localhost
0: jdbc:ignite:thin://localhost> select EXTRACT (EPOCH from
CURRENT_TIMESTAMP(3));
+--+
| EXTRACT(EPOCH FROM CURRENT_TIMESTAMP(3)) |
+--+
| 1587752229.496   |
+--+
1 row selected (0,041 seconds)
0: jdbc:ignite:thin://localhost> select dateadd('s', 1587752229,
'1970-01-01'); -- Please note that's LOCAL TIME ZONE 1970-01-01, not UTC
+-+
| TIMESTAMP '2020-04-24 18:17:09' |
+-+
| 2020-04-24 18:17:09.0   |
+-+
1 row selected (0,006 seconds)

As you can see, you can round-trip such EPOCH values.

Regards,
-- 
Ilya Kasnacheev


Fri, Apr 24, 2020 at 18:10, Ilya Kasnacheev:

> Hello!
>
> I guess that EPOCH() returns
> --
> Ilya Kasnacheev
>
>
> Fri, Apr 17, 2020 at 12:07, dbutkovic:
>
>> Hi,
>> Ignite function EXTRACT (EPOCH from CURRENT_TIMESTAMP(3)) returns seconds
>> in the future!!!
>>
>>
>> Current date and time on UNIX host, I am in Zagreb/Croatia CEST  GMT+2
>>
>> [root@incumbossdev01 ~]# date
>> Fri Apr 17 10:51:10 CEST 2020
>>
>>
>>
>> Connected to: Apache Ignite (version 2.7.6#20190911-sha1:21f7ca41)
>> Driver: Apache Ignite Thin JDBC Driver (version
>> 2.7.6#20190911-sha1:21f7ca41)
>> Autocommit status: true
>> Transaction isolation: TRANSACTION_REPEATABLE_READ
>> sqlline version 1.3.0
>> 0: jdbc:ignite:thin://192.168.50.95/> select CURRENT_TIMESTAMP(3);
>> ++
>> |  CURRENT_TIMESTAMP(3)  |
>> ++
>> | 2020-04-17 10:51:17.43 |
>> ++
>> 1 row selected (0.032 seconds)
>>
>>
>> https://apacheignite-sql.readme.io/docs/extract
>>
>> 0: jdbc:ignite:thin://192.168.50.95/> select EXTRACT (EPOCH from
>> CURRENT_TIMESTAMP(3));
>> +--+
>> | EXTRACT(EPOCH FROM CURRENT_TIMESTAMP(3)) |
>> +--+
>> | 1587120685.619   |
>> +--+
>> 1 row selected (0.007 seconds)
>>
>>
>> Convert EPOCH to Timestamp using https://www.epochconverter.com/
>>
>> The current Unix epoch time is  1587113657
>>
>> Assuming that this timestamp is in seconds:
>> GMT: Friday, 17. April 2020 10:51:25.619
>> Your time zone: petak, 17. travanj 2020 12:51:25.619 GMT+02:00 DST
>> Relative: In 2 hours
>>
>>
>> Convert EPOCH to Timestamp using Postgres function
>>
>> postgres=# select to_timestamp(1587120685.619);
>> to_timestamp
>> 
>>  2020-04-17 12:51:25.619+02
>> (1 row)
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: EPOCH seconds in future!

2020-04-24 Thread Ilya Kasnacheev
Hello!

I guess that EPOCH() returns
-- 
Ilya Kasnacheev


Fri, Apr 17, 2020 at 12:07, dbutkovic:

> Hi,
> Ignite function EXTRACT (EPOCH from CURRENT_TIMESTAMP(3)) returns seconds in
> the future!!!
>
>
> Current date and time on UNIX host, I am in Zagreb/Croatia CEST  GMT+2
>
> [root@incumbossdev01 ~]# date
> Fri Apr 17 10:51:10 CEST 2020
>
>
>
> Connected to: Apache Ignite (version 2.7.6#20190911-sha1:21f7ca41)
> Driver: Apache Ignite Thin JDBC Driver (version
> 2.7.6#20190911-sha1:21f7ca41)
> Autocommit status: true
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> sqlline version 1.3.0
> 0: jdbc:ignite:thin://192.168.50.95/> select CURRENT_TIMESTAMP(3);
> ++
> |  CURRENT_TIMESTAMP(3)  |
> ++
> | 2020-04-17 10:51:17.43 |
> ++
> 1 row selected (0.032 seconds)
>
>
> https://apacheignite-sql.readme.io/docs/extract
>
> 0: jdbc:ignite:thin://192.168.50.95/> select EXTRACT (EPOCH from
> CURRENT_TIMESTAMP(3));
> +--+
> | EXTRACT(EPOCH FROM CURRENT_TIMESTAMP(3)) |
> +--+
> | 1587120685.619   |
> +--+
> 1 row selected (0.007 seconds)
>
>
> Convert EPOCH to Timestamp using https://www.epochconverter.com/
>
> The current Unix epoch time is  1587113657
>
> Assuming that this timestamp is in seconds:
> GMT: Friday, 17. April 2020 10:51:25.619
> Your time zone: petak, 17. travanj 2020 12:51:25.619 GMT+02:00 DST
> Relative: In 2 hours
>
>
> Convert EPOCH to Timestamp using Postgres function
>
> postgres=# select to_timestamp(1587120685.619);
> to_timestamp
> 
>  2020-04-17 12:51:25.619+02
> (1 row)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: EPOCH to timestamp

2020-04-24 Thread Ilya Kasnacheev
Hello!

I came across this solution:
select dateadd('s', 1587750989, '1970-01-01 00:00:00 UTC');
https://stackoverflow.com/questions/31804762/how-to-convert-timestamp-to-seconds-in-h2

Regards,
-- 
Ilya Kasnacheev


Fri, Apr 17, 2020 at 10:06, dbutkovic:

> Hi,
> is there a function with which a timestamp can be obtained from epoch seconds?
>
> example
>
> select EXTRACT (EPOCH from CURRENT_TIMESTAMP(3));
> +--+
> | EXTRACT(EPOCH FROM CURRENT_TIMESTAMP(3)) |
> +--+
> | 1587113983.052   |
> +--+
>
> How to convert from epoch seconds to timestamp ?
>
> Best regards
> Dren
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Unable to run several ContinuousQuery-es in parallel due to: Failed to unmarshal discovery data for component: CONTINUOUS_PROC

2020-04-24 Thread Ilya Kasnacheev
Hello!

I remember this question on Stack Overflow, and the situation was that your
callback was capturing an IgniteCache within its context.

It is recommended to make your CQ callbacks inner static classes (not
lambdas or anonymous inner classes).
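
For example (a sketch; the key/value types are illustrative):

    import javax.cache.event.CacheEntryEvent;
    import javax.cache.event.CacheEntryUpdatedListener;
    import org.apache.ignite.cache.query.ContinuousQuery;

    // A static listener class captures nothing from the enclosing scope, so no
    // IgniteCache (or other node-local state) is dragged into serialization.
    static class MyListener implements CacheEntryUpdatedListener<Integer, String> {
        @Override public void onUpdated(
            Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                System.out.println("updated: " + e.getKey());
        }
    }

    static ContinuousQuery<Integer, String> buildQuery() {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocalListener(new MyListener());
        return qry;
    }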

Regards,
-- 
Ilya Kasnacheev


Thu, Apr 16, 2020 at 13:29, AlexBor:

> //Fixed formatting
>
> Hi guys!
>
> I'm trying to play with ContinuousQuery in v2.8.0 with the following setup:
> one server node + couple of client nodes running in different JVMs on the
> same local machine. I'm trying to execute same ContinuousQuery on both
> clients in parallel, but for some reason first one is able to connect and
> execute, however last one is failing with below exception during
> Ignition.getOrStart():
>
> *SEVERE: Failed to unmarshal discovery data for component: CONTINUOUS_PROC
> class org.apache.ignite.IgniteCheckedException: Failed to deserialize
> object
> with given class loader: sun.misc.Launcher$AppClassLoader@18b4aac2*
>
> All subsequent clients are failing until ContinuousQuery cursor is closed.
> So it looks like I can't run query on more than once client at every
> moment.
>
>
> Full trace:
>
> Apr 16, 2020 12:59:51 PM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Failed to unmarshal discovery data for component: CONTINUOUS_PROC
> class org.apache.ignite.IgniteCheckedException: Failed to deserialize
> object
> with given class loader: sun.misc.Launcher$AppClassLoader@18b4aac2
> at
>
> org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:132)
> at
>
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:93)
> at
>
> org.apache.ignite.internal.util.IgniteUtils.unmarshalZip(IgniteUtils.java:10248)
> at
>
> org.apache.ignite.spi.discovery.tcp.internal.DiscoveryDataPacket.unmarshalData(DiscoveryDataPacket.java:340)
> at
>
> org.apache.ignite.spi.discovery.tcp.internal.DiscoveryDataPacket.unmarshalGridData(DiscoveryDataPacket.java:155)
> at
>
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:2069)
> at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.processNodeAddFinishedMessage(ClientImpl.java:2219)
> at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.processDiscoveryMessage(ClientImpl.java:2088)
> at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1930)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
> org.apache.ignite.spi.discovery.tcp.ClientImpl$1.body(ClientImpl.java:302)
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:61)
> Caused by: java.io.InvalidObjectException: Failed to find cache for name:
> MY_CACHE
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheContext.readResolve(GridCacheContext.java:2376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1248)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2076)
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.readExternal(IgniteCacheProxyImpl.java:2192)
> at
> java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2116)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2065)
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
> at
>
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.readExternal(GatewayProtectedCacheProxy.java:1706)
> at
> java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2116)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2065)
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStrea

Re: Regarding EVT_NODE_SEGMENTED event

2020-04-24 Thread Ilya Kasnacheev
Hello!

You can probably set clientReconnectDisabled to 'true' to generate this
event on client.
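
For example (a sketch):

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

    static IgniteConfiguration clientCfg() {
        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        // With reconnect disabled, a segmented client fires EVT_NODE_SEGMENTED
        // and stops, instead of silently trying to rejoin the cluster.
        spi.setClientReconnectDisabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);
        cfg.setDiscoverySpi(spi);
        return cfg;
    }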

Regards,
-- 
Ilya Kasnacheev


Tue, Apr 21, 2020 at 13:35, VeenaMithare:

> Thanks Monal,
>
> What is the best way to generate a EVT_NODE_SEGMENTED event on the client
> side for testing the event handler ? ( I am able to generate this on server
> side. )
>
> regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL queries returning incorrect results during High Load on Ignite V2.7.6

2020-04-24 Thread Ilya Kasnacheev
Hello!

I've not heard of issues such as this one. It would help if you have a
reproducer (one which creates a lot of load and detects cases like these).

Regards,
-- 
Ilya Kasnacheev


Tue, Apr 21, 2020 at 14:39, neerajarora100:

>
> I have a table in which, during the performance runs, there are inserts
> happening at the beginning when the job starts; during the insertion there
> are also parallel operations (GET/UPDATE queries) happening on that table.
> The GET operation also updates a value in a column, marking that record as
> picked. However, the next GET performed on the table would again return the
> same record even when the record was marked in progress.
>
> P.S. --> both the operations are done by the same single thread existing in
> the system. Logs below for reference, record marked in progress at Line 1
> on
> **20:36:42,864**; however, it is returned in the result set of the query
> executed after **20:36:42,891** by the same thread.
> We also observed that during high load (usually during the same scenario as
> mentioned above) some update operations (intermittent) were not happening on
> the table even when the update executed successfully (validated using the
> returned result and then doing a get just after that to check the updated
> value ) without throwing an exception.
>
>
> 13 Apr 2020 20:36:42,864 [SHT-4083-initial] FINEST  -
> AbstractCacheHelper.markContactInProgress:2321 -  Action state after mark
> in
> progresss contactId.ATTR=: 514409 for jobId : 4083 is actionState : 128
>
> 13 Apr 2020 20:36:42,891 [SHT-4083-initial] FINEST  -
> CacheAdvListMgmtHelper.getNextContactToProcess:347 - Query : select
> priority, contact_id, action_state, pim_contact_store_id, action_id
> , retry_session_id, attempt_type, zone_id, action_pos  from pim_4083 where
> handler_id = ? and attempt_type != ?  and next_attempt_after <= ? and
> action_state = ? and exclude_flag = ?  order
> by attempt_type desc, priority desc, next_attempt_after asc,contact_id
> asc
> limit 1
>
>
> This happens usually during the performance runs when there are parallel
> JOB's started which are working on Ignite. Can anyone suggest what can be
> done to avoid such a situation..?
>
> We have 2 ignite data nodes that are deployed as springBootService deployed
> in the cluster being accessed, by 3 client nodes with 6GB of RAM and
> peristence enabled.
> Ignite version -> 2.7.6, Cache configuration is as follows,
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>CacheConfiguration cachecfg = new CacheConfiguration(CACHE_NAME);
>cachecfg.setRebalanceThrottle(100);
>cachecfg.setBackups(1);
>cachecfg.setCacheMode(CacheMode.REPLICATED);
>cachecfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>cachecfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>
>
> cachecfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>// Defining and creating a new cache to be used by Ignite Spring
> Data
> repository.
>CacheConfiguration ccfg = new CacheConfiguration(CACHE_TEMPLATE);
>ccfg.setStatisticsEnabled(true);
>ccfg.setCacheMode(CacheMode.REPLICATED);
>ccfg.setBackups(1);
>DataStorageConfiguration dsCfg = new DataStorageConfiguration();
>
> dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
>dsCfg.setStoragePath(storagePath);
>dsCfg.setWalMode(WALMode.FSYNC);
>dsCfg.setWalPath(walStoragePath);
>dsCfg.setWalArchivePath(archiveWalStoragePath);
>dsCfg.setWriteThrottlingEnabled(true);
>cfg.setAuthenticationEnabled(true);
>dsCfg.getDefaultDataRegionConfiguration()
> .setInitialSize(Long.parseLong(cacheInitialMemSize) * 1024
> *
> 1024);
>
>
> dsCfg.getDefaultDataRegionConfiguration().setMaxSize(Long.parseLong(cacheMaxMemSize)
> * 1024 * 1024);
>cfg.setDataStorageConfiguration(dsCfg);
>
>cfg.setClientConnectorConfiguration(clientCfg);
>// Run the command to alter the default user credentials
>// ALTER USER "ignite" WITH PASSWORD 'new_passwd'
>cfg.setCacheConfiguration(cachecfg);
>cfg.setFailureDetectionTimeout(Long.parseLong(cacheFailureTimeout));
>ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>
> ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>ccfg.setRebalanceThrottle(100);
>int pool = cfg.getSystemThreadPoolSize();
>cfg.setRebalanceThreadPoolSize(2);
>

Re: Enabling default persistence on existing ignite cluster

2020-04-24 Thread Ilya Kasnacheev
Hello!

Have you tried supplying a different group name in CollectionConfiguration
when creating a Queue?
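
For example (a sketch; the group and queue names are arbitrary):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteQueue;
    import org.apache.ignite.configuration.CollectionConfiguration;

    static IgniteQueue<String> persistentQueue(Ignite ignite) {
        CollectionConfiguration colCfg = new CollectionConfiguration();
        // A fresh group name should make Ignite create a new backing cache for
        // the datastructure, which then picks up the now-persistent data region.
        colCfg.setGroupName("persistentQueues");

        return ignite.queue("myQueue", 0, colCfg);
    }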

Regards,
-- 
Ilya Kasnacheev


Wed, Apr 22, 2020 at 11:09, Sebastian Sindelar <sebastian.sinde...@nexus-ips.de>:

> Hello.
>
> Our application uses Ignite to share data between different services. We
> have a couple of caches and queues. Currently some of the caches are
> persisted using a second data region. This works fine. A new requirement
> is to persist the items in the queues.
>
> Because queues always use the default data region, I assumed that if I
> enable persistence on that region the queue content should be persisted.
> But this works just for the caches and not the queues. The queues still
> lose their content if the cluster shuts down. The log shows that
> persistence is enabled on the default region.
>
> The thing is, if I reset the cluster (deleting the Ignite home folder) the
> queues get persisted fine. The problem is this means a lot of manual work
> when updating clients.
>
> I tried renaming the queues, but it didn't work.
>
>
>
> Kind regards
>
> *nexus / ag *
>
> *Sebastian Sindelar *
>
> Software Development
>
> Tel.: +49 (0)561 942880
> Fax: +49 (0)561 9428877
> Support: +49 (0)561 9428899
>
> E-Mail: sebastian.sinde...@nexus-ips.de
>
> NEXUS / IPS GmbH
> Standort Kassel
> Mendelssohn-Bartholdy-Str. 17
> D-34134 Kassel
> www.nexus-ag.de | www.nexus-ips.de
>
>
>
>
>
>
>


Re: General error: "java.lang.ArrayIndexOutOfBoundsException: 32768"

2020-04-24 Thread Ilya Kasnacheev
Hello!

I think you need to figure out how to reproduce it if we are to fix it.

I have found one ticket somewhat similar to your issue:
https://issues.apache.org/jira/browse/IGNITE-10501

Regards,
-- 
Ilya Kasnacheev


Fri, Apr 24, 2020 at 15:12, 张立鑫:

> A completely new node does not have a work directory.
> I don't know how to reproduce this behavior, but I used `ContinuousQuery`,
> `SqlQuery`, SQL updates and `put` operations frequently on the service at
> the same time.
> Is the B+Tree updated frequently?
>
> Ilya Kasnacheev wrote on Fri, Apr 24, 2020 at 7:57 PM:
>
>> Hello!
>>
>> What do you mean by 'full new node'?
>>
>> Do you know steps to reproduce this behavior?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
Wed, Apr 22, 2020 at 19:37, 张立鑫:
>>
>>> Hello,
>>> Thanks for your proposal, but I get this error for every completely new
>>> node. I can't proceed.
>>>
>>> Regards
>>>
>>> Ilya Kasnacheev wrote on Thu, Apr 23, 2020 at 12:30 AM:
>>>
>>>> Hello!
>>>>
>>>> If you have sufficient backups you can remove persistent data from that
>>>> node, restart it and re-add it to topology.
>>>>
>>>> Regards,
>>>> --
>>>> Ilya Kasnacheev
>>>>
>>>>
>>>> Wed, Apr 22, 2020 at 19:11, 张立鑫:
>>>>
>>>>> Thank you for your reply; I use the latest version, but it is from the
>>>>> GridGain Maven nexus.
>>>>> I don't have any other nodes and haven't configured affinity; just one
>>>>> server-mode node and one client node.
>>>>> I have a lot of queries, deletes and updates, which should make the
>>>>> B+Tree change frequently. And I use SQL as well.
>>>>> I'm not sure what happened; please give me some advice.
>>>>> Thanks again.
>>>>>
>>>>>
>>>>> Ilya Kasnacheev wrote on Wed, Apr 22, 2020 at 11:51 PM:
>>>>>
>>>>>> Hello!
>>>>>>
>>>>>> What is the version used? I think there is some weirdness in
>>>>>> affinity, such as Ignite trying to access a non-existent partition.
>>>>>>
>>>>>> Do you have any specific affinity configuration of your caches?
>>>>>>
>>>>>> You also get the "B+Tree is corrupted" error, it may mean that your
>>>>>> persistent store is corrupted, but that's not certain.
>>>>>>
>>>>>> Regards,
>>>>>> --
>>>>>> Ilya Kasnacheev
>>>>>>
>>>>>>
>>>>>> Mon, Apr 20, 2020 at 20:45, LixinZhang <intelligentcodem...@gmail.com>:
>>>>>>
>>>>>>> Hi guys!
>>>>>>>
>>>>>>> Help me, please.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>>>>
>>>>>>


Re: Inconsistency in data stored via the Redis layer

2020-04-24 Thread Ilya Kasnacheev
Hello!

My recommendation is to use REST API instead of redis/memcached.
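
For example, the basic REST commands look like this (a sketch; it assumes the
ignite-rest-http module is enabled on the default port 8080, and the cache
name is hypothetical):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    static String restGet(String key) throws Exception {
        // REST returns a JSON envelope over HTTP, sidestepping the Redis
        // protocol layer entirely.
        HttpRequest req = HttpRequest.newBuilder(URI.create(
            "http://localhost:8080/ignite?cmd=get&cacheName=myCache&key=" + key)).build();
        return HttpClient.newHttpClient()
            .send(req, HttpResponse.BodyHandlers.ofString()).body();
    }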

Regards,
-- 
Ilya Kasnacheev


Fri, Apr 24, 2020 at 12:08, scriptnull:

> Hi,
>
> We are trying to use Apache Ignite via the Redis layer (
> https://apacheignite.readme.io/docs/redis ). While trying to store a
> string
> from a Ruby Redis client and retrieving it back, we notice some inconsistency
> in the data. We believe that this has something to do with how Apache
> Ignite
> handles encoding. Would be great to learn about why this is happening and
> possible mitigation for the problem.
>
> Here is the problem in more detail.
>
> First we have a ruby object which we marshal to get the string
> representation of.
>
> ```
>  => {:id=>7833548, :ad_group_id=>"91254654888",
> :adwords_campaign_id=>548351, :name=>"mcdonald's_ (e)",
> :configured_status=>"ENABLED", :adwords_ad_account_id=>4798,
> :created_at=>Fri, 06 Dec 2019 08:18:34 UTC +00:00, :updated_at=>Mon, 20 Apr
> 2020 18:51:01 UTC +00:00, :targeting=>{"targeting_type"=>["Keyword"]},
> :tracking_url_template=>nil, :ad_group_type=>"Search-Standard",
> :content_bid_criterion_type_group=>nil,
> :final_urls=>["
> https://www.foodora.se/restaurant/s7fx/mcdonald-s-kungsgatan?";],
> :bidding_strategy_configuration=>{"bidding_strategy_type"=>"TARGET_CPA",
> "bids"=>[{"bids_type"=>"CpcBid", "bid"=>{"comparable_value_type"=>"Money",
> "micro_amount"=>100}, "cpc_bid_source"=>"ADGROUP",
> "xsi_type"=>"CpcBid"}, {"bids_type"=>"CpaBid",
> "bid"=>{"comparable_value_type"=>"Money", "micro_amount"=>1500},
> "xsi_type"=>"CpaBid"}]}, :labels=>"", :audiences=>nil,
> :mongo_core_object_updated_at=>Sat, 14 Dec 2019 13:38:30 UTC +00:00,
> :ad_rotation_mode=>nil, :system_dimensions_last_run_at=>Fri, 06 Mar 2020
> 00:31:07 UTC +00:00}
> ```
>
> We marshal this object to a string representation (encoding of this string
> is ASCII-8BIT)
> ```
> redis_client.set('key', Marshal.dump(obj))
> ```
>
> and here is the contents of the string stored in Apache Ignite (as seen via
> the redis-cli)
> ```
>
> "\x04\b{\x18:\aidi\x03\xcc\x87w:\x10ad_group_idI\"\x1091254654888\x06:\x06ET:\x18adwords_campaign_idi\x03\xef\xbf\xbd]\b:\tnameI\"\x14mcdonald's_
>
> (e)\x06;\aT:\x16configured_statusI\"\x0cENABLED\x06;\aT:\x1aadwords_ad_account_idi\x02\xef\xbf\xbd\x12:\x0fcreated_atU:
>
> ActiveSupport::TimeWithZone[\bIu:\tTime\r\xef\xbf\xbd\xef\xbf\xbd\x1d\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd.J\x06:\tzoneI\"\bUTC\x06;\aFI\"\bUTC\x06;\aT@
> \x0c:\x0fupdated_atU;\r[\bIu;\x0e\r\xef\xbf\xbd\x0e\x1e\xef\xbf\xbd\x0bX\x1c\xef\xbf\xbd\x06;\x0f@
> \x0b@\r@
> \x10:\x0etargeting{\x06I\"\x13targeting_type\x06;\aT[\x06I\"\x0cKeyword\x06;\aT:\x1atracking_url_template0:\x12ad_group_typeI\"\x14Search-Standard\x06;\aT:%content_bid_criterion_type_group0:\x0ffinal_urls[\x06I\"Bhttps://
> www.foodora.se/restaurant/s7fx/mcdonald-s-kungsgatan
> ?\x06;\aT:#bidding_strategy_configuration{\aI\"\x1abidding_strategy_type\x06;\aTI\"\x0fTARGET_CPA\x06;\aTI\"\tbids\x06;\aT[\a{\tI\"\x0ebids_type\x06;\aTI\"\x0bCpcBid\x06;\aTI\"\bbid\x06;\aT{\aI\"\x1acomparable_value_type\x06;\aTI\"\nMoney\x06;\aTI\"\x11micro_amount\x06;\aTi\x03@B
> \x0fI\"\x13cpc_bid_source\x06;\aTI\"\x0cADGROUP\x06;\aTI\"\rxsi_type\x06;\aTI\"\x0bCpcBid\x06;\aT{\b@
> \x1eI\"\x0bCpaBid\x06;\aT@
> {\a@\"I\"\nMoney\x06;\aT@$i\x03\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd@
> 'I\"\x0bCpaBid\x06;\aT:\x0blabelsI\"\x00\x06;\aF:\x0eaudiences0:!mongo_core_object_updated_atU;\r[\bIu;\x0e\r\xef\xbf\xbd\xef\xbf\xbd\x1d\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd\x06;\x0f@
> \x0b@\r@1
> :\x15ad_rotation_mode0:\"system_dimensions_last_run_atU;\r[\bIu;\x0e\r\xef\xbf\xbd\b\x1e\xef\xbf\xbd\x00\x00p|\x06;\x0f@
> \x0b@\r@4"
> ```
>
> If we try to store the same data in Redis, we can see the following
> contents
> via the redis-cli
> ```
>
> "\x04\b{\x18:\aidi\x03\xcc\x87w:\x10ad_group_idI\"\x1091254654888\x06:\x06ET:\x18adwords_campaign_idi\x03\xff]\b:\tnameI\"\x14mcdonald's_
>
> (e)\x06;\aT:\x16configured_statusI\"\x0cENABLED\x06;\aT:\x1aadwords_ad_account_idi\x02\xbe\x12:\x0fcreated_atU:
>
> ActiveSupport::TimeWithZone[\bIu:\tTime\r\xc8\xec\x1d\xc0\x92\xd3.J\x06:\tzoneI\"\bUTC\x06;\aFI\"\

Re: General error: "java.lang.ArrayIndexOutOfBoundsException: 32768"

2020-04-24 Thread Ilya Kasnacheev
Hello!

What do you mean by 'full new node'?

Do you know steps to reproduce this behavior?

Regards,
-- 
Ilya Kasnacheev


Wed, Apr 22, 2020 at 19:37, 张立鑫:

> Hello,
> Thanks for your proposal, but I get this error for every completely new
> node. I can't proceed.
>
> Regards
>
> Ilya Kasnacheev wrote on Thu, Apr 23, 2020 at 12:30 AM:
>
>> Hello!
>>
>> If you have sufficient backups you can remove persistent data from that
>> node, restart it and re-add it to topology.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, Apr 22, 2020 at 19:11, 张立鑫:
>>
>>> Thank you for your reply; I use the latest version, but it is from the
>>> GridGain Maven nexus.
>>> I don't have any other nodes and haven't configured affinity; just one
>>> server-mode node and one client node.
>>> I have a lot of queries, deletes and updates, which should make the
>>> B+Tree change frequently. And I use SQL as well.
>>> I'm not sure what happened; please give me some advice.
>>> Thanks again.
>>>
>>>
>>> Ilya Kasnacheev wrote on Wed, Apr 22, 2020 at 11:51 PM:
>>>
>>>> Hello!
>>>>
>>>> What is the version used? I think there is some weirdness in affinity,
>>>> such as Ignite trying to access a non-existent partition.
>>>>
>>>> Do you have any specific affinity configuration of your caches?
>>>>
>>>> You also get the "B+Tree is corrupted" error, it may mean that your
>>>> persistent store is corrupted, but that's not certain.
>>>>
>>>> Regards,
>>>> --
>>>> Ilya Kasnacheev
>>>>
>>>>
>>>> Mon, Apr 20, 2020 at 20:45, LixinZhang:
>>>>
>>>>> Hi guys!
>>>>>
>>>>> Help me, please.
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>>
>>>>


Re: SQL MERGE INTO with SELECT UNION

2020-04-24 Thread Ilya Kasnacheev
Hello!

I think you can union a select over a temporary table of one row.
such as

select * from table (id bigint = ?, ...)


However, maybe you should just re-write your upsert with select and then
insert/update.
You're not gaining any more guarantees by using MERGE, as far as I know.
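
A sketch of that approach (the column list is shortened and partly
hypothetical; note there is no atomicity guarantee between the SELECT and
the write):

    import java.util.List;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    static void upsert(IgniteCache<?, ?> cache, String id, String name) {
        List<List<?>> rows = cache.query(new SqlFieldsQuery(
            "SELECT 1 FROM hypi_store_App WHERE hypi_id = ?").setArgs(id)).getAll();

        if (rows.isEmpty())
            cache.query(new SqlFieldsQuery(
                "INSERT INTO hypi_store_App (hypi_id, name) VALUES (?, ?)")
                .setArgs(id, name)).getAll();
        else
            cache.query(new SqlFieldsQuery(
                "UPDATE hypi_store_App SET name = ? WHERE hypi_id = ?")
                .setArgs(name, id)).getAll();
    }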

Regards,
-- 
Ilya Kasnacheev


Thu, Apr 23, 2020 at 00:56, Courtney Robinson:

> My aim is to perform an upsert.
> Originally, my query was just doing a MERGE INTO with no UNION.
> Unfortunately Ignite if a row already exists, Ignite DOES NOT merge, it
> replaces the row. So any columns from the old row that are not included in
> the new MERGE will be set to NULL at the end of the operation.
> Looking around I found
> http://apache-ignite-users.70518.x6.nabble.com/INSERT-and-MERGE-statements-td28685.html
>  which
> suggests this is intended behaviour and not a bug.
>
> So I thought one way to do this with SQL is by doing a MERGE SELECT where
> the first SELECT gets the existing row and any columns not being updated
> are taken from the existing row. If no row matches the first select then
> nothing will be inserted (that's why I need the union) so the second SELECT
> is a list of literals of the columns currently being modified.
>
> In effect I'm doing an IF first SELECT take its data else use these
> literals. Ignite also doesn't support the MERGE USING syntax in H2
> http://www.h2database.com/html/commands.html#merge_using so I thought
> this might work.
>
> Using the MERGE SELECT UNION I can't get Ignite to parse the second select
> when the fields are placeholders, i.e. ?
>
> In
>
>> *MERGE* *INTO* hypi_store_App(hypi_id,hypi_instanceId,hypi_created,
>> hypi_updated,hypi_createdBy,hypi_instance,hypi_app,hypi_release,
>> hypi_publisherRealm,hypi_publisherApp,hypi_publisherRelease,hypi_impl)(
>> *SELECT* ?,?,(IFNULL(*SELECT* hypi_created *FROM* hypi_store_App *WHERE*
>> hypi_instanceId = ? *AND* hypi_id = ?, 
>> *CURRENT_TIMESTAMP*())),?,?,?,?,?,?,?,?,?
>> *FROM* hypi_store_App r *WHERE* hypi_id = ? *AND* hypi_instanceId = ?
>>
>> *UNION**SELECT* 'a','a','a','a','a','a','a','a','a','a','a','a'
>> -- SELECT ?,?,?,?,?,?,?,?,?,?,?,?
>> -- SELECT ?,?,(IFNULL(SELECT hypi_created FROM hypi_store_App WHERE
>> hypi_instanceId = ? AND hypi_id = ?, CURRENT_TIMESTAMP())),?,?,?,?,?,?,?,?,?
>> );
>
>
> The query is parsed successfully if I use literals as in *SELECT* 'a','a',
> 'a','a','a','a','a','a','a','a','a','a' but SELECT
> ?,?,?,?,?,?,?,?,?,?,?,? will fail, same for the longer version above.
>
> The error is
>  Failed to parse query. Unknown data type: "?, ?"
> as in
>
> SQL Error [1001] [42000]: Failed to parse query. Unknown data type: "?,
>> ?"; SQL statement:
>> MERGE INTO
>> hypi_store_App(hypi_id,hypi_instanceId,hypi_created,hypi_updated,hypi_createdBy,hypi_instance,hypi_app,hypi_release,hypi_publisherRealm,hypi_publisherApp,hypi_publisherRelease,hypi_impl)(
>> SELECT ?,?,(IFNULL(SELECT hypi_created FROM hypi_store_App WHERE
>> hypi_instanceId = ? AND hypi_id = ?,
>> CURRENT_TIMESTAMP())),?,?,?,?,?,?,?,?,? FROM hypi_store_App r WHERE hypi_id
>> = ? AND hypi_instanceId = ?
>> UNION
>> -- SELECT 'a','a','a','a','a','a','a','a','a','a','a','a'
>>  SELECT ?,?,?,?,?,?,?,?,?,?,?,?
>> --SELECT ?,?,(IFNULL(SELECT hypi_created FROM hypi_store_App WHERE
>> hypi_instanceId = ? AND hypi_id = ?, CURRENT_TIMESTAMP())),?,?,?,?,?,?,?,?,?
>> ) [50004-197]
>
>
> the full stack trace from the server is below. Any suggestions? Is my
> query really not valid? Or is there another way to achieve a real merge
> instead of a replace without making multiple queries from the client?
>
>
> 2020-04-22 22:37:04.764 ERROR 52149 --- [ctor-#256%hypi%]
>> o.a.i.i.p.odbc.jdbc.JdbcRequestHandler   : Failed to execute SQL query
>> [reqId=53, req=JdbcQueryExecuteRequest [schemaName=PUBLIC, pageSize=1024,
>> maxRows=200, sqlQry=MERGE INTO
>> hypi_store_App(hypi_id,hypi_instanceId,hypi_created,hypi_updated,hypi_createdBy,hypi_instance,hypi_app,hypi_release,hypi_publisherRealm,hypi_publisherApp,hypi_publisherRelease,hypi_impl)(
>> SELECT ?,?,(IFNULL(SELECT hypi_created FROM hypi_store_App WHERE
>> hypi_instanceId = ? AND hypi_id = ?,
>> CURRENT_TIMESTAMP())),?,?,?,?,?,?,?,?,? FROM hypi_store_App r WHERE hypi_id
>> = ? AND h

Re: Best way to track if key was read more than X times?

2020-04-23 Thread Ilya Kasnacheev
Hello!

Yes, I think you can update an entry with EntryProcessor while also
returning it.
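
For example (a sketch; the 3-read limit and the types are illustrative):

    import javax.cache.processor.MutableEntry;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.CacheEntryProcessor;

    // Value wrapper holding the payload together with its read count.
    static class Counted {
        final String value;
        final int reads;
        Counted(String value, int reads) { this.value = value; this.reads = reads; }
    }

    static class CountingGet implements CacheEntryProcessor<String, Counted, String> {
        @Override public String process(MutableEntry<String, Counted> entry, Object... args) {
            Counted c = entry.getValue();
            if (c == null)
                return null;                // no such key
            if (c.reads + 1 >= 3)
                entry.remove();             // third read: drop the entry
            else
                entry.setValue(new Counted(c.value, c.reads + 1)); // bump the counter atomically
            return c.value;                 // still hand the value back to the caller
        }
    }

    static String trackedGet(IgniteCache<String, Counted> cache, String key) {
        return cache.invoke(key, new CountingGet());
    }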

Regards,
-- 
Ilya Kasnacheev


Wed, Apr 22, 2020 at 19:35, John Smith:

> Hi akorensh, understood, but then I would need another cache to keep track
> of those counts.
>
> Ilya, would an EntryProcessor allow for that with invoke? Because when
> creating a wrapper I still need to track the counts.
>
> On Wed, 22 Apr 2020 at 12:10, Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I actually think that the optimal way is to have your own wrapper API
>> which is the only source of cache gets and which does this accounting under the
>> hood.
>>
>> Then it can invoke the same cache entry to keep track of number of reads.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Tue, Apr 21, 2020 at 22:00, John Smith:
>>
> Hi, I want to store a key/value and, if that key has been accessed more
> than 3 times for example, remove it. What is the best way to do this?
>>>
>>


Re: General error: "java.lang.ArrayIndexOutOfBoundsException: 32768"

2020-04-22 Thread Ilya Kasnacheev
Hello!

If you have sufficient backups you can remove persistent data from that
node, restart it and re-add it to topology.

Regards,
-- 
Ilya Kasnacheev


Wed, Apr 22, 2020 at 19:11, 张立鑫:

> Thank you for your reply; I use the latest version, but it is from the
> GridGain Maven nexus.
> I don't have any other nodes and haven't configured affinity; just one
> server-mode node and one client node.
> I have a lot of queries, deletes and updates, which should make the B+Tree
> change frequently. And I use SQL as well.
> I'm not sure what happened; please give me some advice.
> Thanks again.
>
>
> Ilya Kasnacheev wrote on Wed, Apr 22, 2020 at 11:51 PM:
>
>> Hello!
>>
>> What is the version used? I think there is some weirdness in affinity,
>> such as Ignite trying to access a non-existent partition.
>>
>> Do you have any specific affinity configuration of your caches?
>>
>> You also get the "B+Tree is corrupted" error, it may mean that your
>> persistent store is corrupted, but that's not certain.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, Apr 20, 2020 at 20:45, LixinZhang:
>>
>>> Hi guys!
>>>
>>> Help me, please.
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: Best way to track if key was read more than X times?

2020-04-22 Thread Ilya Kasnacheev
Hello!

I actually think that the optimal way is to have your own wrapper API which
is the only source of cache gets and which does this accounting under the hood.

Then it can invoke the same cache entry to keep track of number of reads.

Regards,
-- 
Ilya Kasnacheev


вт, 21 апр. 2020 г. в 22:00, John Smith :

> Hi, I want to store a key/value and, if that key has been accessed more than
> 3 times for example, remove it. What is the best way to do this?
>


Re: Is strong consistency supported in SQL mode?

2020-04-22 Thread Ilya Kasnacheev
Hello!

Our SQL is strongly consistent, but it is not transactional.

Regards,
-- 
Ilya Kasnacheev


вт, 21 апр. 2020 г. в 10:59, priyank :

> Hi,
> I see according to this article:
>
> https://www.gridgain.com/resources/blog/apache-cassandra-vs-apache-ignite-strong-consistency-and-transactions
> that Apache Ignite has support for strong consistency. The code example
> listed by them uses key-values.
>
> Is this true even when running Ignite in SQL mode?
>
> Thanks for your time!
> Regards,
> Priyank
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Automatically generate Code using java reflection

2020-04-22 Thread Ilya Kasnacheev
Hello!

This code is unrelated to your issue. It is needed for platform
interoperability.

Regards,
-- 
Ilya Kasnacheev


вт, 14 апр. 2020 г. в 19:13, Anthony :

> Evgenii,
> It also occurred to me to ask whether there is similar code in C++ for the
> following java code, as I did not find it. If not, does it mean I need to
> configure it in the xml file?
>
>  IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
> BinaryConfiguration bCfg = new BinaryConfiguration();
> bCfg.setCompactFooter(false);
> bCfg.setNameMapper(new BinaryBasicNameMapper(true));
> bCfg.setIdMapper(new BinaryBasicIdMapper(true));
>
> bCfg.setClassNames(Collections.singleton("org.apache.ignite.examples.datagrid.CrossClass"));
> igniteConfiguration.setBinaryConfiguration(bCfg);
>
> Thank you !
>
> On Mon, Apr 13, 2020 at 1:57 PM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Anthony,
>>
>> No, I don't think so. If you plan to use it from C++, then you will need
>> to configure QueryEntity.
>>
>> Evgenii
>>
>> пн, 13 апр. 2020 г. в 13:02, Anthony :
>>
>>> Thank you Evgenii! BTW, is there the same thing in C++?
>>>
>>> On Mon, Apr 13, 2020 at 9:30 AM Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> There is no need to create a Query Entity if you already have
>>>> annotations. You can add
>>>> CacheConfiguration.setIndexedTypes(PersonKey.class, Person.class) and it
>>>> will be generated automatically based on your annotations.
>>>>
>>>> Evgenii
>>>>
>>>> пн, 13 апр. 2020 г. в 09:11, Anthony :
>>>>
>>>>> Hello,
>>>>> If I have the following java class:
>>>>>
>>>>> public class Person implements Serializable {
>>>>> /** */
>>>>> private static final AtomicLong ID_GEN = new AtomicLong();
>>>>>
>>>>> /** Person ID (indexed). */
>>>>> @QuerySqlField(index = true)
>>>>> public Long id;
>>>>>
>>>>> /** Organization ID (indexed). */
>>>>> @QuerySqlField(index = true)
>>>>> public Long orgId;
>>>>>
>>>>> /** First name (not-indexed). */
>>>>> @QuerySqlField
>>>>> public String firstName;
>>>>>
>>>>> /** Last name (not indexed). */
>>>>> @QuerySqlField
>>>>> public String lastName;
>>>>>
>>>>> /** Resume text (create LUCENE-based TEXT index for this field). */
>>>>> @QueryTextField
>>>>> public String resume;
>>>>>
>>>>> /** Salary (indexed). */
>>>>> @QuerySqlField(index = true)
>>>>> public double salary;
>>>>>
>>>>> /** Custom cache key to guarantee that person is always collocated
>>>>> with its organization. */
>>>>> private transient AffinityKey key;
>>>>>
>>>>>
>>>>> And if I want to create a table, I need to write the following code
>>>>> in java or put it in the config file. Is it possible to generate it
>>>>> automatically? Java reflection seems able to handle this.
>>>>>
>>>>> CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("Person");
>>>>> QueryEntity entity = new QueryEntity();
>>>>> entity.setKeyType("java.lang.Long");
>>>>> entity.setValueType("Person");
>>>>> LinkedHashMap<String, String> map = new LinkedHashMap<>();
>>>>> map.put("orgId", "java.lang.Long");
>>>>> map.put("firstName", "java.lang.String");
>>>>> map.put("lastName", "java.lang.String");
>>>>> map.put("resume", "java.lang.String");
>>>>> map.put("salary", "java.lang.Double");
>>>>> entity.setFields(map);
>>>>> entity.setIndexes(Collections.singletonList(new QueryIndex("orgId")));
>>>>> List<QueryEntity> queryEntities = new ArrayList<>();
>>>>> queryEntities.add(entity);
>>>>> cacheCfg.setQueryEntities(queryEntities);
>>>>> igniteConfiguration.setCacheConfiguration(cacheCfg);
>>>>> Ignite ignite = Ignition.start(igniteConfiguration);
>>>>> IgniteCache<Long, Person> cache = ignite.getOrCreateCache("Person");
>>>>>
>>>>>
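For reference, a sketch of the annotation-driven setup Evgenii describes
(assuming a Long key, matching entity.setKeyType above; treat as illustrative):

CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("Person");
cacheCfg.setIndexedTypes(Long.class, Person.class); // QueryEntity is derived from the @QuerySqlField annotations
igniteConfiguration.setCacheConfiguration(cacheCfg);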


Re: General error: "java.lang.ArrayIndexOutOfBoundsException: 32768"

2020-04-22 Thread Ilya Kasnacheev
Hello!

What is the version used? I think there is some weirdness in affinity, such
as, Ignite tries to access non-existent partition.

Do you have any specific affinity configuration of your caches?

You also get the "B+Tree is corrupted" error, it may mean that your
persistent store is corrupted, but that's not certain.

Regards,
-- 
Ilya Kasnacheev


пн, 20 апр. 2020 г. в 20:45, LixinZhang :

> Hi guys!
>
> Help me, please.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Oracle BLOB, CLOB data type mapping with Ignite cache

2020-04-22 Thread Ilya Kasnacheev
Hello!

I assume you are talking about CacheJdbcPojoStore. Please keep in mind that
it's just an implementation of CacheStore and you can have your own
implementation (which will then support any types you want).

For CacheJdbcPojoStore you can supply your own transformer via
setTransformer(), which will convert fields to BLOB/CLOB/BINARY columns and
vice versa.
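
For illustration, a minimal sketch of such a transformer (an untested sketch:
it assumes POJO fields declared as byte[] are backed by BLOB columns, and the
class name is made up):

import java.sql.Blob;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.ignite.cache.store.jdbc.JdbcTypesDefaultTransformer;

// Hypothetical transformer: materializes BLOB columns into byte[] fields
// and delegates all other columns to the default transformer.
public class LobAwareTransformer extends JdbcTypesDefaultTransformer {
    @Override public Object getColumnValue(ResultSet rs, int colIdx, Class<?> type) throws SQLException {
        if (type == byte[].class) {
            Blob blob = rs.getBlob(colIdx);
            return blob == null ? null : blob.getBytes(1, (int) blob.length());
        }
        return super.getColumnValue(rs, colIdx, type);
    }
}

// usage: new CacheJdbcPojoStoreFactory<>().setTransformer(new LobAwareTransformer());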

We also have CacheJdbcBlobStore but I think it's not that you want, since
it will store key/value in a single field in opaque form.

I don't see any existing support for BLOB or CLOB in CacheJdbcPojoStore or
default transformer. BINARY may map to byte[], although I'm not a hundred
percent sure - it's in our JDBC, not our CacheStore implementation.

OTHER just means that this column holds a complex object (such as nested
POJO) which will likely not be accessible via our own JDBC. I don't think
it is meaningful for cache store.

I hope some of these pointers will be useful.

Regards,
-- 
Ilya Kasnacheev


ср, 15 апр. 2020 г. в 10:46, Harshvardhan Kadam :

> Hello,
>
> I am using Ignite 2.8.0 as cache layer with Oracle 11g as 3rd party
> persistence.
>
> I have couple of questions:
> 1) Does Ignite support Oracle's BLOB data type? If yes, what should the
> attribute type be in the generated POJO (Object or something else)?
> 2) As per the official Ignite documentation, it supports a Binary data type.
> Which Oracle data type matches Binary in Ignite? And again, what should the
> attribute type be in the generated POJO?
> 3) Does Ignite support Oracle's CLOB data type? If yes, what should the
> attribute type be in the generated POJO (String or something else)?
> 4) What does TYPE_NAME 'Other' signify in the cache description obtained
> using the sqlline '!describe' command?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to start Ignite.NET, check inner exception for details

2020-04-08 Thread Ilya Kasnacheev
Hello!

I think it is possible that you're using java jar files from a version other
than 2.8.0. I recommend explicitly setting the IGNITE_HOME environment variable
to a path pointing to the unzipped binary release.

Regards,
-- 
Ilya Kasnacheev


ср, 8 апр. 2020 г. в 13:09, siva :

> Hi,
>
> While starting the ignite server using version 2.8 in .NET Core I get this
> exception:
>
> * Failed to start Ignite.NET, check inner exception for details Invalid
> header on deserialization. Expected: 9 but was: 33*
>
> Configuration setting through  spring xml file.
>
> this is the config file  serverconfig.xml
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1379/serverconfig.xml>
>
>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1379/Screenshot_1.png>
>
>
>
> This is the Sample code
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1379/Code-Screenshot_2.png>
>
>
>
>
> *Note: the exception only occurs when the configuration object is set through
> the spring config file.*
>
>
> Versions:
> Ignite : 2.8.0
> .NetCore : 3.1.101
>
>
>
> Thanks
>
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: CPP: Query timeout

2020-04-08 Thread Ilya Kasnacheev
Hello!

Unfortunately, this is not trivial, you should search your nodes' logs for
locNodeId=022a7f53 or localNodeId=022a7f53
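
If the node is still part of the topology, its addresses can also be looked up
programmatically from any node; a sketch (assuming an Ignite instance named
ignite is at hand):

import java.util.UUID;
import org.apache.ignite.cluster.ClusterNode;

// Resolve a node by the ID from the log and print its addresses.
UUID nodeId = UUID.fromString("022a7f53-8628-409a-851a-e8ef62d050f7");
ClusterNode node = ignite.cluster().node(nodeId);

if (node != null)
    System.out.println(node.addresses() + " / " + node.hostNames());
else
    System.out.println("Node already left the topology; search the logs instead.");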

Regards,
-- 
Ilya Kasnacheev


ср, 8 апр. 2020 г. в 11:51, nidhinms :

> Hi
>
> Does ignite provide a method to find the IP address from this ID:
> 022a7f53-8628-409a-851a-e8ef62d050f7?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: CPP: Query timeout

2020-04-08 Thread Ilya Kasnacheev
Hello!

I can see the following messages repeating:
[10:17:09,530][WARNING][grid-timeout-worker-#23][diagnostic] Found long
running cache future [startTime=10:15:10.302, curTime=10:17:09.523,
fut=GridNearAtomicSingleUpdateFuture [reqState=Primary
[id=936138d2-e6e5-44f8-b731-7576c83e9334, opRes=false, expCnt=1, rcvdCnt=0,
primaryRes=false, done=false, waitFor=[
*022a7f53-8628-409a-851a-e8ef62d050f7*], rcvd=null],
super=GridNearAtomicAbstractUpdateFuture [remapCnt=100,
topVer=AffinityTopologyVersion [topVer=2697, minorTopVer=0],
remapTopVer=null, err=null, futId=1, super=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1168304564
[10:17:09,530][WARNING][grid-timeout-worker-#23][diagnostic] Found long
running cache future [startTime=10:15:13.911, curTime=10:17:09.523,
fut=GridNearAtomicSingleUpdateFuture [reqState=Primary
[id=936138d2-e6e5-44f8-b731-7576c83e9334, opRes=false, expCnt=1, rcvdCnt=0,
primaryRes=false, done=false, waitFor=[
*022a7f53-8628-409a-851a-e8ef62d050f7*], rcvd=null],
super=GridNearAtomicAbstractUpdateFuture [remapCnt=100,
topVer=AffinityTopologyVersion [topVer=2698, minorTopVer=0],
remapTopVer=null, err=null, futId=16385, super=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1764819750

Do you happen to have logs for *022a7f53-8628-409a-851a-e8ef62d050f7* node?
Ideally, thread dumps also.

Regards,
-- 
Ilya Kasnacheev


ср, 8 апр. 2020 г. в 10:58, nidhinms :

> The issue happened because one of the ignite server nodes was down. Queries
> were successful after the servers restarted. Attaching logs from the client.
>
> ignite-1835519d.log
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2730/ignite-1835519d.log>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Unable to perform handshake within timeout..

2020-04-07 Thread Ilya Kasnacheev
Hello!

I think that maybe your balancer from earlier posts tries to talk HTTP (or
some other protocol) to our client port, confusing it.

Regards,
-- 
Ilya Kasnacheev


вт, 7 апр. 2020 г. в 05:23, kay :

> I have two nodes on a remote server.
> I found a log:
>
> [WARN] [grid-timeout-worker-#39][ClientListenerNioListener] Unable to
> perform handshake within timeout
>
> Why does this log appear,
> and how can I fix it?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: C++ ODBC Example Question

2020-04-07 Thread Ilya Kasnacheev
Hello!

Please take a look at this example, it will store Organization from C++:
https://github.com/apache/ignite/blob/56975c266e7019f307bb9da42333a6db4e47365e/modules/platforms/cpp/examples/put-get-example/src/put_get_example.cpp

Some additional configuration will be needed to access data using SQL:
https://apacheignite-cpp.readme.io/docs/cross-platform-interoperability
https://www.gridgain.com/docs/latest/developers-guide/SQL/sql-key-value-storage

Regards,
-- 
Ilya Kasnacheev


вт, 7 апр. 2020 г. в 02:08, Anthony :

> Hello,
>
> For the following example, instead of storing the "Person" using ODBC, is
> it possible to build the "Person" in c++  and store to in the server?
>
> I still want to use ODBC to retrieve the data.
>
>
> https://github.com/apache/ignite/blob/56975c266e7019f307bb9da42333a6db4e47365e/modules/platforms/cpp/examples/odbc-example/src/odbc_example.cpp
>
>
> Thanks,
>
> Anthony
>


Re: CPP: Query timeout

2020-04-07 Thread Ilya Kasnacheev
Hello!

Can you please provide logs and thread dumps (collectible with jstack <pid>)
of all your Apache Ignite nodes?

Hard to pinpoint it otherwise.

Thanks,
-- 
Ilya Kasnacheev


пн, 6 апр. 2020 г. в 13:22, nidhinms :

> I tried to execute an INSERT query against my ignite cluster. It was working
> fine.
> After the ignite server instance was updated with some new jars, the query
> does not return. Can someone guide me on how to rectify this issue?
>
> 1. Does peerClassLoadingEnabled=true have something to do with this issue? Does
> peerClassLoading work with a mixed CPP and Java ignite setup?
> 2. Can I set a timeout on my query with CPP?
> 3. Where can I find info about why a query is getting stalled?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite transactions stuck between PREPARED and COMMITTING

2020-04-03 Thread Ilya Kasnacheev
Hello!

I doubt that there's much expertise here on the behavior of 2.4.0.

Regards,
-- 
Ilya Kasnacheev


чт, 2 апр. 2020 г. в 20:34, rc :

> Hello experts,
>
> Cluster details: Running Ignite 2.4.0 with 3 servers and 4 clients.
>
> Observed transaction timeouts after a server node restarted. Upon further
> investigation, it was due to PME failures and also saw that there were some
> long running transactions on a cache for the same key for more than 9 days
> (duration=840832760ms). 2 client nodes had created the write (delete the
> key-value) transaction at around the same time with timeout set to 50ms.
>
> One of the transactions was stuck in state=PREPARED and the other in
> state=COMMITTING. Is there any known issue in 2.4.0 which causes this
> deadlock? Why did Ignite not honor the timeout value of 50ms and bail out?
>
> Regards,
> rc
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Error and Question about communication between java node and c++ node

2020-04-03 Thread Ilya Kasnacheev
Hello!

Local node's binary configuration is not equal to remote node's binary
configuration [locNodeId=155424bd-1c8e-48a2-83ff-26aa6cf9e7af,
rmtNodeId=db4efc6e-44c7-46ce-b7c0-154e387c0448,
locBinaryCfg={globIdMapper=org.apache.ignite.binary.BinaryBasicIdMapper,
compactFooter=false, globSerializer=null}, rmtBinaryCfg=null]

As explained in the docs, C++ nodes require compactFooter to be false, so
it is necessary to include a matching BinaryConfiguration in your Java
node's IgniteConfiguration. Tune it until the error goes away.
Please check
https://apacheignite-cpp.readme.io/docs/cross-platform-interoperability
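
For example, a sketch of the Java-side configuration (mirroring the
locBinaryCfg values in the error above):

import org.apache.ignite.binary.BinaryBasicIdMapper;
import org.apache.ignite.binary.BinaryBasicNameMapper;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

BinaryConfiguration binCfg = new BinaryConfiguration();
binCfg.setCompactFooter(false);                        // required for C++ nodes
binCfg.setNameMapper(new BinaryBasicNameMapper(true)); // simple type names
binCfg.setIdMapper(new BinaryBasicIdMapper(true));     // lower-case type ids

cfg.setBinaryConfiguration(binCfg);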

Regards,
-- 
Ilya Kasnacheev


пт, 3 апр. 2020 г. в 17:24, Anthony :

> Attached is the log file from C++Side:
> [07:20:41,216][WARNING][main][G] Ignite work directory is not provided,
> automatically resolved to:
> C:\Users\harte\Desktop\work\apache-ignite-2.8.0-bin-try\apache-ignite-2.8.0-bin\work
> [07:20:41,344][INFO][main][IgniteKernal%myGrid]
>
> >>>__  
> >>>   /  _/ ___/ |/ /  _/_  __/ __/
> >>>  _/ // (7 7// /  / / / _/
> >>> /___/\___/_/|_/___/ /_/ /___/
> >>>
> >>> ver. 2.8.0#20200226-sha1:341b01df
> >>> 2020 Copyright(C) Apache Software Foundation
> >>>
> >>> Ignite documentation: http://ignite.apache.org
>
> [07:20:41,353][INFO][main][IgniteKernal%myGrid] Config URL: n/a
> [07:20:41,367][INFO][main][IgniteKernal%myGrid] IgniteConfiguration
> [igniteInstanceName=myGrid, pubPoolSize=8, svcPoolSize=8,
> callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4,
> igfsPoolSize=8, dataStreamerPoolSize=8, utilityCachePoolSize=8,
> utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=8,
> sqlQryHistSize=1000, dfltQryTimeout=0,
> igniteHome=C:\Users\harte\Desktop\work\apache-ignite-2.8.0-bin-try\apache-ignite-2.8.0-bin,
> igniteWorkDir=C:\Users\harte\Desktop\work\apache-ignite-2.8.0-bin-try\apache-ignite-2.8.0-bin\work,
> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@5f0fd5a0,
> nodeId=155424bd-1c8e-48a2-83ff-26aa6cf9e7af, marsh=BinaryMarshaller [],
> marshLocJobs=false, daemon=false, p2pEnabled=true, netTimeout=5000,
> netCompressionLevel=1, sndRetryDelay=1000, sndRetryCnt=3,
> metricsHistSize=1, metricsUpdateFreq=2000,
> metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi
> [addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10,
> reconDelay=2000, maxAckTimeout=60, soLinger=5, forceSrvMode=false,
> clientReconnectDisabled=false, internalLsnr=null,
> skipAddrsRandomization=false], segPlc=STOP, segResolveAttempts=2,
> waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1,
> commSpi=TcpCommunicationSpi [connectGate=null,
> connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@604f2bd2,
> chConnPlc=null, enableForcibleNodeKill=false,
> enableTroubleshootingLog=false, locAddr=null, locHost=null, locPort=47100,
> locPortRange=100, shmemPort=-1, directBuf=true, directSndBuf=false,
> idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
> reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0,
> slowClientQueueLimit=0, nioSrvr=null, shmemSrv=null,
> usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true,
> filterReachableAddresses=false, ackSndThreshold=32, unackedMsgsBufSize=0,
> sockWriteTimeout=2000, boundTcpPort=-1, boundTcpShmemPort=-1,
> selectorsCnt=4, selectorSpins=0, addrRslvr=null,
> ctxInitLatch=java.util.concurrent.CountDownLatch@1d3ac898[Count = 1],
> stopping=false, metricsLsnr=null],
> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@1b73be9f,
> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [],
> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@366ac49b,
> addrRslvr=null,
> encryptionSpi=org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi@6ad59d92,
> clientMode=false, rebalanceThreadPoolSize=4, rebalanceTimeout=1,
> rebalanceBatchesPrefetchCnt=3, rebalanceThrottle=0,
> rebalanceBatchSize=524288, txCfg=TransactionConfiguration
> [txSerEnabled=false, dfltIsolation=REPEATABLE_READ,
> dfltConcurrency=PESSIMISTIC, dfltTxTimeout=0,
> txTimeoutOnPartitionMapExchange=0, deadlockTimeout=1,
> pessimisticTxLogSize=0, pessimisticTxLogLinger=1, tmLookupClsName=null,
> txManagerFactory=null, useJtaSync=false], cacheSanityCheckEnabled=true,
> discoStartupDelay=6, deployMode=SHARED, p2pMissedCacheSize=100,
> locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100,
> failureDetectionTimeout=1, sysWorkerBlockedTimeout=null,
> clientFailureDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null,
> connectorCfg=ConnectorConfiguration [jettyPath=null, host=null, port=11211,
>

Re: remote ignite server and L4(for loadbalacing) with java thin client

2020-04-03 Thread Ilya Kasnacheev
Hello!

It's hard to say, what kind of load balancer it is? Can you dump network
packets on that port when you try to connect? Can you provide complete
stack trace?

Regards,
-- 
Ilya Kasnacheev


пт, 3 апр. 2020 г. в 11:55, kay :

> Hello,
> I have two nodes on each remote server and am using the java thin client for
> put/get/remove caching in my application.
> I'd like to put an L4 or webserver between the application server and the
> Ignite server for load balancing.
>
> Here is the port information for my setup; the IP is the same everywhere.
>
> 1) node1
>   - client connect port 12000
>   - server port 12001
>
> 2) node2
>   - client connect port 12002
>   - server port 12003
>
> 3) L4(Webserver)
>   - port number : 13000
>
> This is what I used before in my source:
> ClientConfiguration cfg = new ClientConfiguration().setAddresses("IP:12000",
> "IP:12002");
>
> I already changed it like this:
>
> ClientConfiguration cfg = new ClientConfiguration().setAddresses("IP:13000");
>
> but I got a ClientConnectionException: Ignite cluster is unavailable
> [sock=Socket[addr[. and so on...
>
> Is it possible to use it like this? Or is there another way?
>
> Thank you
>
>
>
>
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Spelling and grammar check of subtitles for Ignite videos

2020-04-03 Thread Ilya Kasnacheev
Hello!

I could proofread them if nobody with native English steps in.

Regards,
-- 
Ilya Kasnacheev


пт, 3 апр. 2020 г. в 13:58, Maksim Stepachev :

> Hi, everyone!
>
> I'm going to translate some excellent Russian videos about Apache Ignite
> into English and dub them after that. They will be uploaded to my YouTube
> channel. I need help with grammar checking.
>
>  If somebody wants to take part, please let me know.
>
>
>


Re: Ignite.cache.loadcache.Does this method do Increamental Load?

2020-04-03 Thread Ilya Kasnacheev
Hello!

Yes, LoadCache will not overwrite keys with existing values.

Regards,
-- 
Ilya Kasnacheev


пн, 23 мар. 2020 г. в 21:35, nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com>:

> Hi
>
> I am trying to load the data into ignite cache using JDBC Pojo Store method
> ignite.cache("cacheName").loadCache(null). I have used this method and got
> the following results for the following scenarios.
>
> *Scenario-1:Trying to load the same key which is available in cache*
>
> In this case, the value part corresponding to the key is not updated in the
> cache based on the latest available record.
>
> *Scenario -2:When loading a key which is not present in cache.*
>
> in this case, it is appending the new key and value pair to the cache and
> preserving the old data.
>
>
> But my doubt is why, in scenario 1, it is not updating the value
> corresponding to the key when I am trying to load the same key.
>
> Does this method do an incremental load?
> Is this the expected behavior, or do I need to set any additional property
> in the bean file? Attaching the bean configuration of the cache.
>
> cache.xml
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2737/cache.xml>
>
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: DataRegionConfiguration update

2020-04-03 Thread Ilya Kasnacheev
Hello!

I'm not sure, but I think every persistent configuration works as you have
described. It will store unlimited data on disk but `maxSize' in off-heap,
and discard data from off-heap automatically.

Regards,
-- 
Ilya Kasnacheev


пт, 3 апр. 2020 г. в 13:20, Andrey Davydov :

> Hello, I expect the second configuration to store all data on disk and no
> more than config.node.memory.max bytes off-heap. My tests show that this
> configuration works.
> What is the correct config to achieve this?
>
> On Wed, Apr 1, 2020 at 12:59 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> It should not be possible to change configuration of node when data is
>> already present.
>>
>> Moreover, page eviction settings are supposed to be ignored when
>> persistence is enabled:
>> https://apacheignite.readme.io/docs/evictions
>> Page replacement is performed instead.
>>
>> I recommend filing an issue about this problem in Apache Ignite JIRA
>> because it is obviously unexpected, but I think that your expectations are
>> also off.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пт, 27 мар. 2020 г. в 17:06, Andrey Davydov :
>>
>>>
>>>
>>> Hello,
>>>
>>>
>>>
>>> We have Ignite data directory from system with following data region
>>> configuration:
>>>
>>>
>>>
>>> <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>>>     <property name="name" value="myPersistDataRegion"/>
>>>     <property name="persistenceEnabled" value="true"/>
>>>     <property name="initialSize" value="${config.node.memory.initial}"/>
>>>     <property name="maxSize" value="${config.node.memory.max}"/>
>>>     <property name="pageEvictionMode" value="DISABLED"/>
>>>     <property name="metricsEnabled" value="true"/>
>>> </bean>
>>>
>>>
>>>
>>> When we update configuration to:
>>>
>>>
>>>
>>> <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>>>     <property name="name" value="myPersistDataRegion"/>
>>>     <property name="persistenceEnabled" value="true"/>
>>>     <property name="initialSize" value="${config.node.memory.initial}"/>
>>>     <property name="maxSize" value="${config.node.memory.max}"/>
>>>     <property name="evictionThreshold" value="${config.node.memory.evict.threshold}"/>
>>>     <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
>>>     <property name="metricsEnabled" value="true"/>
>>> </bean>
>>>
>>>
>>>
>>> And on restart (exactly the same system; the difference is only in the data
>>> region config) we got the following exception. When we change the
>>> configuration back, everything works OK and all data is present.
>>>
>>> Is there any way to access data from the old files with the new settings?
>>>
>>>
>>>
>>> org.apache.ignite.IgniteException: Runtime failure on bounds:
>>> [lower=SearchRow [key=null, hash=0, cacheId=2077719173], upper=SearchRow
>>> [key=null, hash=0, cacheId=2077719173]]
>>>
>>> at
>>> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:48)
>>> ~[ignite-core-2.7.6.jar:2.7.6]
>>>
>>> at
>>> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:2996)
>>> ~[ignite-core-2.7.6.jar:2.7.6]
>>>
>>> at
>>> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2965)
>>> ~[ignite-core-2.7.6.jar:2.7.6]
>>>
>>> at
>>> org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
>>> ~[ignite-core-2.7.6.jar:2.7.6]
>>>
>>> at
>>> org.apache.ignite.internal.util.lang.GridIteratorAdapte

Re: Multiple heap support finally landing into Dragonwell8 JDK

2020-04-03 Thread Ilya Kasnacheev
Hello!

I'm not sure since I don't see that much traction for onheap/near cache or
any reported problems.

Regards,
-- 
Ilya Kasnacheev


пт, 3 апр. 2020 г. в 13:20, kimec.ethome.sk :

> Hi guys,
>
> FYI https://github.com/alibaba/dragonwell8/issues/90
>
> Do you see any use case within Apache Ignite? I can naively imagine a
> dedicated heap for each heap-based near cache or something along those
> lines.
>
> Kamil Mišúth
>


Re: Comma in field is not supported by COPY command?

2020-04-02 Thread Ilya Kasnacheev
Hello!

I see that you have already filed
https://issues.apache.org/jira/browse/IGNITE-12852

Regards,
-- 
Ilya Kasnacheev


ср, 1 апр. 2020 г. в 14:58, 18624049226 <18624049...@163.com>:

> Hi community,
>
> CREATE TABLE test(a int,b varchar(100),c int,PRIMARY key(a));
>
> a.csv:
> 1,"a,b",2
>
> COPY FROM '/data/a.csv' INTO test (a,b,c) FORMAT CSV;
>
> The copy command fails because there is a comma in the second field,but
> this is a fully legal and compliant CSV format,how can I avoid this
> problem?or it is a bug?
>
>


Re: ClusterTopologyServerNotFoundException

2020-04-02 Thread Ilya Kasnacheev
Hello!

Do you have a reproducer for this behavior?

I have not seen persistent clusters without baseline topology for some
time, there may be issues.

Regards,
-- 
Ilya Kasnacheev


пт, 20 мар. 2020 г. в 18:53, prudhvibiruda :

> Hi ,
>
> Please don't be confused.
> Our plan is to use more server nodes later, but as of now we have only
> one server node.
> Since we are planning for more server nodes, we are using
> CacheMode.REPLICATED.
> We don't want our nodes to fail to start just because other server nodes
> are not working; that's why we didn't define a baseline topology.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Using SQL cache with javax cache api

2020-04-02 Thread Ilya Kasnacheev
Hello!

Ignite cannot store multi-column tables as maps in cache. Those columns
will still be stored as BinaryObject/POJO, and you only confuse Ignite
internals by pretending that it is a HashMap.

I recommend getting rid of HashMap in the table declaration and using
ignite.binary().builder("typeName") instead - use the same type name in
value_type and in builder().

Also you should not put the ID into the value, it makes no sense.
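
A sketch of that approach (assuming the table is recreated with, say,
value_type=Person6Val instead of java.util.HashMap; the type name is
illustrative):

BinaryObject val = ignite.binary().builder("Person6Val")
    .setField("CITY_ID", 1)
    .setField("NAME", "TEST")
    .setField("AGE", 20)
    .setField("COMPANY", "Bla")
    .build();

ignite.cache("Person6").withKeepBinary().put("1", val);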

Regards,
-- 
Ilya Kasnacheev


ср, 1 апр. 2020 г. в 09:10, Dominik Przybysz :

> Hi,
> I created SQL cache, but sometimes I want to use it via javax.cache.api
> and it doesn't work.
>
> I created cache with SQL:
>
> CREATE TABLE IF NOT EXISTS Person6 (
>   id varchar primary key ,
>   city_id int,
>   name varchar,
>   age int,
>   company varchar
> ) WITH
> "template=replicated,backups=1,wrap_key=false,value_type=java.util.HashMap,cache_name=Person6";
>
> insert into PERSON6(ID, CITY_ID, NAME, AGE, COMPANY) values (
> '1', 1, 'TEST', 20, 'Bla'
> );
>
> insert into PERSON6(ID, CITY_ID, NAME, AGE, COMPANY) values (
> '2', 1, 'TEST2', 20, 'Bla 1'
> );
>
> Next I created client and fetching data with SqlFieldQuery works without
> problems:
>
> SqlFieldsQuery query = new SqlFieldsQuery("SELECT * from Person6");
> FieldsQueryCursor<List<?>> cursor = cache.query(query);
>
> but when I tried to query the cache with get:
>
> IgniteCache cache = ignite.cache("Person6");
> System.out.println("CacheSize: " + cache.size(CachePeekMode.PRIMARY));
> System.out.println("1: " + cache.get("1"));
>
> I received:
>
> CacheSize: 2
> 1: null
>
> To solve it I added withKeepBinary():
>
> IgniteCache<String, BinaryObject> cache =
> ignite.cache("Person6").withKeepBinary();
>
> and then I received valid data:
>
> CacheSize: 2
> 1: java.util.HashMap [idHash=660595570, hash=-1179353910, CITY_ID=1,
> ID=null, NAME=TEST, AGE=20, COMPANY=Bla]
>
> but now I cannot add HashMap to the cache:
>
> Map<String, Object> value = new HashMap<>();
> value.put("ID", uuid);
> value.put("CITY_ID", 1);
> value.put("NAME", "c");
> value.put("AGE", 90);
> value.put("COMPANY", "A");
> cache. put(uuid, value);
>
> throws:
>
> Exception in thread "main" javax.cache.CacheException: class
> org.apache.ignite.IgniteCheckedException: Unexpected binary object class
> [type=class
> org.apache.ignite.internal.processors.cacheobject.UserCacheObjectImpl]
>
> and adding works only with objects created with BinaryObjectBuilder.
>
> Is it expected behaviour for caches String->HashMap?
>
> --
> Pozdrawiam / Regards,
> Dominik Przybysz
>


Re: QueryParallelism

2020-04-02 Thread Ilya Kasnacheev
Hello!

I think that you need a custom cache template (as per
https://apacheignite-sql.readme.io/docs/create-table ) with queryParallelism
set on it (e.g. <property name="queryParallelism" value="24"/>), and then
create all caches with TEMPLATE=templateName instead of PARTITIONED.

You may also need to create indexes with PARALLEL 24.
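
A sketch of registering such a template from Java (the template name is
illustrative):

CacheConfiguration<Object, Object> tpl = new CacheConfiguration<>("parallel24");
tpl.setQueryParallelism(24);
ignite.addCacheConfiguration(tpl); // registers the template, creates no cache

Then, in SQL:

CREATE TABLE my_table (...) WITH "TEMPLATE=parallel24";
CREATE INDEX my_idx ON my_table (col) PARALLEL 24;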

Regards,
-- 
Ilya Kasnacheev


ср, 1 апр. 2020 г. в 14:11, Alessandro Fogli :

> Hello,
>
> Thanks for your answer! I would like to run a TPC-H benchmark but none of
> the queries uses more than a single core. I used these indices:
>
> CREATE INDEX i_n_regionkey ON nation (n_regionkey);
> CREATE INDEX i_s_nationkey ON supplier (s_nationkey);
> CREATE INDEX i_c_nationkey ON customer (c_nationkey);
> CREATE INDEX i_ps_suppkey ON partsupp (ps_suppkey);
> CREATE INDEX i_ps_partkey ON partsupp (ps_partkey);
> CREATE INDEX i_o_custkey ON orders (o_custkey);
> CREATE INDEX i_l_orderkey ON lineitem (l_orderkey);
> CREATE INDEX i_l_suppkey_partkey ON lineitem (l_partkey, l_suppkey);
>
> Even when I changed the query thread pool size I was unable to use more
> than one thread for a single query.
> I'm attaching you the file with which I created the tables. Thanks
>
> Best,
> Alessandro
>
>
> Il giorno 1 apr 2020, alle ore 09:13, Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> ha scritto:
>
> Hello!
>
> Can you please specify what is the operation that you want to parallelize?
> Are you sure it uses index?
>
> Please note that you may also need to increase query thread pool size
> since its threads are used up by parallelization.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> ср, 1 апр. 2020 г. в 05:22, Alessandro Fogli  >:
>
>> Hello, I would like to know if there is a way to increase the level of
>> parallelism within a single node. The default value appears to be one. I
>> already tried adding the property "<property name="queryParallelism" value="32"/>"
>> in the xml file but it didn't work. Thanks
>>
>> Best regards,
>> Alessandro
>>
>
>


Re: Failed to bind to any [host:port] from the range portFrom=10900 , portTo=11000

2020-04-02 Thread Ilya Kasnacheev
Hello!

ClientConnectorConfiguration is needed to allow other clients to connect to
local node, and not for connecting to remote nodes.

ClientConfiguration/startClient() should be used instead.
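
For reference, a sketch of the thin-client way (the address and cache name
are illustrative; the thin client connects to the client connector port,
10800 by default, and starts no local node):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

ClientConfiguration cfg = new ClientConfiguration().setAddresses("xx.xx.xx.xx:10800");

try (IgniteClient client = Ignition.startClient(cfg)) {
    ClientCache<String, Address> cache = client.getOrCreateCache("put-get-example");
    cache.put("i1", new Address("1545 Sample 1", 94612)); // Address is the class from the question
}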

Regards,
-- 
Ilya Kasnacheev


пт, 20 мар. 2020 г. в 07:23, AravindJP :

> Hi Stephen,
> I was using IgniteConfiguration to connect; please find the full code below:
>
> Ignition.setClientMode(true);
>
> ClientConnectorConfiguration cfg = new ClientConnectorConfiguration()
>     .setHost("xx.xx.xx.xx")
>     .setPort(47500);
>
> IgniteConfiguration ignitecfg = new IgniteConfiguration();
> ignitecfg.setClientConnectorConfiguration(cfg);
>
> Ignite ignite = Ignition.start(ignitecfg);
>
> final String CACHE_NAME = "put-get-example";
>
> IgniteDataStreamer<String, Address> dataStreamer = ignite.dataStreamer("put-get-example");
> Address val = new Address("1545 Sample 1", 94612);
> Integer key = 1;
> for (int i = 0; i < 100; i++) {
>     val = new Address("1545 Jackson Street " + i, 94612);
>     dataStreamer.addData("i" + i, val);
> }
>
>
>
>
> --
> Sent from the Apache Ignite Users mailing list archive
> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>


Re: DataRegionConfiguration update

2020-04-01 Thread Ilya Kasnacheev
Hello!

It should not be possible to change configuration of node when data is
already present.

Moreover, page eviction settings are supposed to be ignored when
persistence is enabled:
https://apacheignite.readme.io/docs/evictions
Page replacement is performed instead.

I recommend filing an issue about this problem in Apache Ignite JIRA
because it is obviously unexpected, but I think that your expectations are
also off.

Regards,
-- 
Ilya Kasnacheev


пт, 27 мар. 2020 г. в 17:06, Andrey Davydov :

>
>
> Hello,
>
>
>
> We have Ignite data directory from system with following data region
> configuration:
>
>
>
>  <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>      <property name="name" value="myPersistDataRegion"/>
>      <property name="persistenceEnabled" value="true"/>
>      <property name="initialSize" value="${config.node.memory.initial}"/>
>      <property name="maxSize" value="${config.node.memory.max}"/>
>      <property name="pageEvictionMode" value="DISABLED"/>
>      <property name="metricsEnabled" value="true"/>
>  </bean>
>
>
>
> When we update configuration to:
>
>
>
>  <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>      <property name="name" value="myPersistDataRegion"/>
>      <property name="persistenceEnabled" value="true"/>
>      <property name="initialSize" value="${config.node.memory.initial}"/>
>      <property name="maxSize" value="${config.node.memory.max}"/>
>      <property name="evictionThreshold" value="${config.node.memory.evict.threshold}"/>
>      <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
>      <property name="metricsEnabled" value="true"/>
>  </bean>
>
>
>
> And on restart (exactly the same system; the difference is only in the data
> region config) we got the following exception. When we change the
> configuration back, everything works OK and all data is present.
>
> Is there any way to access data from the old files with the new settings?
>
>
>
> org.apache.ignite.IgniteException: Runtime failure on bounds:
> [lower=SearchRow [key=null, hash=0, cacheId=2077719173], upper=SearchRow
> [key=null, hash=0, cacheId=2077719173]]
>
> at
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:48)
> ~[ignite-core-2.7.6.jar:2.7.6]
>
> at
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:2996)
> ~[ignite-core-2.7.6.jar:2.7.6]
>
> at
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2965)
> ~[ignite-core-2.7.6.jar:2.7.6]
>
> at
> org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
> ~[ignite-core-2.7.6.jar:2.7.6]
>
> at
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
> ~[ignite-core-2.7.6.jar:2.7.6]
>
> at
> ru.exampl.data.appl.service.business.ModelService.findRunningModels(ModelService.java:945)
> ~[appl.jar:?]
>
> at
> ru.exampl.data.appl.service.business.LocalEnvironmentService.initializeEnvironment(LocalEnvironmentService.java:192)
> ~[appl.jar:?]
>
> at
> ru.exampl.data.appl.service.business.LocalEnvironmentService.afterIgniteSet(LocalEnvironmentService.java:125)
> ~[appl.jar:?]
>
> at
> ru.exampl.data.appl.service.AppServiceNew.lambda$execInner$2(AppServiceNew.java:228)
> ~[appl.jar:?]
>
> at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
> ~[?:1.8.0_242]
>
> at
> ru.exampl.data.appl.service.AppServiceNew.execInner(AppServiceNew.java:228)
> ~[appl.jar:?]
>
> at
> ru.exampl.data.appl.service.AppServiceNew.execInLock(AppServiceNew.java:181)
> ~[appl.jar:?]
>
> at
> ru.exampl.data.appl.service.AppServiceNew.execute(AppServiceNew.java:139)
> [appl.jar:?]
>
> at
> org.apache.ignite.internal.processors.service.GridServiceProcessor$3.run(GridServiceProcessor.java:1394)
> [ignite-core-2.7.6.jar:2.7.6]
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [?:1.8.0_242]
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [?:1.8.0_242]
>
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
>
> Caused by:
> org.apache.ignite.internal.processors.cache.persi

Re: QueryParallelism

2020-04-01 Thread Ilya Kasnacheev
Hello!

Can you please specify what is the operation that you want to parallelize?
Are you sure it uses index?

Please note that you may also need to increase query thread pool size since
its threads are used up by parallelization.

Regards,
-- 
Ilya Kasnacheev


ср, 1 апр. 2020 г. в 05:22, Alessandro Fogli :

> Hello, I would like to know if there is a way to increase the level of
> parallelism within a single node. The default value appears to be one. I
> already tried adding the property "<property name="queryParallelism" value="32"/>"
> in the xml file but it didn't work. Thanks
>
> Best regards,
> Alessandro
>


Re: Is there any implementating way to use eclipselink, jpql (em.createQuery ) with ignite?

2020-03-31 Thread Ilya Kasnacheev
Hello!

Please refer to Cache Store examples like this one:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcStoreExample.java

Write through to RDBMS is not directly related to eclipselink, but there is
no reason why they won't work together.
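
For example, wiring an arbitrary CacheStore implementation into a cache with
read/write-through (EclipseLinkPersonStore is a hypothetical CacheStore
implementation, not an existing class):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(EclipseLinkPersonStore.class));
ccfg.setReadThrough(true);  // cache misses are loaded from the store
ccfg.setWriteThrough(true); // puts/removes are propagated to the RDBMS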

Regards,
-- 
Ilya Kasnacheev


пн, 30 мар. 2020 г. в 11:34, usrsu :

> Hi,
>
> I'm using eclipselink. The given example is working and I am able to use the
> ignite cache from the application. How do I push/pull cache data into an
> RDBMS table? Any github link
> will be helpful.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to manual rebalance

2020-03-31 Thread Ilya Kasnacheev
Hello!

Actually, we have this feature implemented thoroughly in the form of
Baseline Topology:
https://apacheignite.readme.io/docs/baseline-topology

If a node goes away, rebalancing will only start when baseline topology is
updated. In 2.8.0, auto-adjust after timeout may be configured. It is of
limited use with non-persistent clusters, I'm not sure what is its current
status.
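
A sketch of both options (method names as in the 2.8 public API; treat as
illustrative):

// Manually reset the baseline to the current topology, which starts rebalancing:
ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());

// Or, on 2.8+, let the baseline auto-adjust after a quiet period:
ignite.cluster().baselineAutoAdjustEnabled(true);
ignite.cluster().baselineAutoAdjustTimeout(60_000); // milliseconds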

Regards,
-- 
Ilya Kasnacheev


вт, 31 мар. 2020 г. в 15:26, krkumar24061...@gmail.com <
krkumar24061...@gmail.com>:

> Hi guys - how do I control the rebalancing programmatically? I.e. I don't
> want the rebalancing to happen immediately when a node leaves the cluster
> (most of the time a graceful shutdown for updates). I will have a REST
> endpoint or some command through which I will initiate the rebalance, if at
> all required, in case of a genuine crash.
>
> May be this is a basic question but your expert advise will help
>
> Thanx and Regards,
> KR Kumar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there a limit to run multiple Ignite clients on a single host ?

2020-03-26 Thread Ilya Kasnacheev
Hello!

There is no such limitation and our unit tests routinely start dozens of
nodes on the same physical (and virtual) machine.

I suggest checking your firewall, discovery settings, and logs for possible
mistakes, such as insufficient port ranges or closed ports.

Regards,
-- 
Ilya Kasnacheev


чт, 26 мар. 2020 г. в 13:57, userx :

> Hi all,
>
> I haven't seen it in the documentation, but is there a limit to the number
> of Ignite clients (all different JVMs) running on a single physical
> machine? The ignite servers run on a different physical machine.
>
> Also, if I intend to use VmIpFinder, should the client physical machine
> be mentioned in the addresses property of the class?
>
> The problem is that I have 6 clients running on the same machine; except for
> one, none of the clients are able to interact with the servers in the data
> grid. Also, the client on the physical machine was discovered automatically,
> and I haven't provided the physical machine's reference in the addresses
> property.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite memory leaks in 2.8.0

2020-03-26 Thread Ilya Kasnacheev
Hello!

I have filed an issue https://issues.apache.org/jira/browse/IGNITE-12840

Please add any relevant details if you have them, fixes are also welcome.

Regards,

On 2020/03/23 20:20:05, Andrey Davydov  wrote: 
> Sorry, it was to the
> <http://apache-ignite-users.70518.x6.nabble.com/Ignite-2-8-0-Heap-mem-issue-td31755.html>
> thread.
>
> Andrey.
>
> From: Andrey Davydov
> Sent: 23 March 2020, 23:00
> To: user@ignite.apache.org
> Subject: Re: Ignite memory leaks in 2.8.0
>
> It seems a detached connection NEVER becomes attached to a thread other than
> the one it was born in, because the borrow method always returns the object
> related to the caller thread. I.e. all detached connections born in a joined
> thread are not collectable forever.
>
> So a possible reproduce scenario: start a separate thread. Run in this
> thread some logic that creates a detached connection, finish and join the
> thread. Remove the link to the thread. Repeat.
>
> пн, 23 мар. 2020 г., 15:49 Taras Ledkov :
>
> > Hi,
> >
> > Thanks for your investigation.
> > Root cause is clear. What use case is causing the leak?
> >
> > I've created the issue to remove the messy ThreadLocal logic from
> > ConnectionManager. [1]
> > We've done it in GG Community Edition and it works OK.
> >
> > [1].
> >
> > On 21.03.2020 22:50, Andrey Davydov wrote:
> >
> >> A simple diagnostic utility I use to detect these problems:
> >>
> >> import java.lang.ref.WeakReference;
> >> import java.util.ArrayList;
> >> import java.util.LinkedList;
> >> import java.util.List;
> >> import org.apache.ignite.Ignite;
> >> import org.apache.ignite.internal.GridComponent;
> >> import org.apache.ignite.internal.IgniteKernal;
> >> import org.apache.logging.log4j.LogManager;
> >> import org.apache.logging.log4j.Logger;
> >>
> >> public class IgniteWeakRefTracker {
> >>
> >>     private static final Logger LOGGER = LogManager.getLogger(IgniteWeakRefTracker.class);
> >>
> >>     private final String clazz;
> >>     private final String testName;
> >>     private final String name;
> >>     private final WeakReference<Ignite> innerRef;
> >>     private final List<WeakReference<GridComponent>> componentRefs = new ArrayList<>(128);
> >>
> >>     private static final LinkedList<IgniteWeakRefTracker> refs = new LinkedList<>();
> >>
> >>     private IgniteWeakRefTracker(String testName, Ignite ignite) {
> >>         this.clazz = ignite.getClass().getCanonicalName();
> >>         this.innerRef = new WeakReference<>(ignite);
> >>         this.name = ignite.name();
> >>         this.testName = testName;
> >>
> >>         if (ignite instanceof IgniteKernal) {
> >>             IgniteKernal ik = (IgniteKernal) ignite;
> >>             List<GridComponent> components = ik.context().components();
> >>             for (GridComponent c : components) {
> >>                 componentRefs.add(new WeakReference<>(c));
> >>             }
> >>         }
> >>     }
> >>
> >>     public static void register(String testName, Ignite ignite) {
> >>         refs.add(new IgniteWeakRefTracker(testName, ignite));
> >>     }
> >>
> >>     public static void trimCollectedRefs() {
> >>         List<IgniteWeakRefTracker> toRemove = new ArrayList<>();
> >>
> >>         for (IgniteWeakRefTracker ref : refs) {
> >>             if (ref.isIgniteCollected()) {
> >>                 LOGGER.info("Collected ignite: ignite {} from test {}", ref.getIgniteName(), ref.getTestName());
> >>                 toRemove.add(ref);
> >>                 if (ref.igniteComponentsNonCollectedCount() != 0) {
> >>                     throw new IllegalStateException("Non collected components for collected ignite.");
> >>                 }
> >>             } else {
> >>                 LOGGER.warn("Leaked ignite: ignite {} from test {}", ref.getIgniteName(), ref.getTestName());
> >>             }
> >>         }
> >>
> >>         refs.removeAll(toRemove);
> >>
> >>         LOGGER.info("Leaked ignites count: {}", refs.size());
> >>     }
> >>
> >>     public static int getLeakedSize() {
> >>         return refs.size();
> >>     }
> >>
> >>     public boolean isIgniteCollected() {
> >>         return innerRef.get() == null;
> >>     }
> >>
> >>     public int igniteComponentsNonCollectedCount() {
> >>         int res = 0;
> >>
> >>         for (WeakReference<GridComponent> cr : componentRefs) {
> >>             GridComponent gridComponent = cr.get();
> >>             if (gridComponent != null) {
> >>                 LOGGER.warn("Uncollected component: {}", gridComponent.getClass().getSimpleName());
> >>                 res++;
> >>             }
> >>         }
> >>
> >>         return res;
> >>     }
> >>
> >>     public String getClazz() {
> >>         return clazz;

Re: No ignitevisorcmd.sh in Ignite 2.8

2020-03-25 Thread Ilya Kasnacheev
Hello!

We hope to ship a fix for it in 2.8.1:
https://issues.apache.org/jira/browse/IGNITE-12757

Until then, you can apply the recommendation from the linked thread.

Regards,
-- 
Ilya Kasnacheev


вт, 24 мар. 2020 г. в 14:10, joaogoncalves :

> Hi again
>
> It happened to be the same as
> <http://apache-ignite-users.70518.x6.nabble.com/Unable-to-connect-to-Ignite-Visor-Console-in-Ignite-2-8-0-td31628.html>.
> Thank you for your help.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite memory leaks in 2.8.0

2020-03-25 Thread Ilya Kasnacheev
Hello!

I have filed an issue about this: 
https://issues.apache.org/jira/browse/IGNITE-12837

Please feel free to contribute or draw attention to it.

Regards,

On 2020/03/18 15:37:26, Andrey Davydov  wrote: 
> Hello,
> 
> 
> 
> There are at least two ways a link to IgniteKernal can leak to a GC root and
> become unavailable for GC.
> 
> 
> 
>   1. The first one:
> 
> 
> 
> this - value: org.apache.ignite.internal.IgniteKernal #1
> 
> <- grid - class: org.apache.ignite.internal.GridKernalContextImpl, value:
> org.apache.ignite.internal.IgniteKernal #1
> 
> <- ctx - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing, value:
> org.apache.ignite.internal.GridKernalContextImpl #2
> 
> <- this$0 - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing #2
> 
> <- serializer - class: org.h2.util.JdbcUtils, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10 #1
> 
> <- [5395] - class: java.lang.Object[], value: org.h2.util.JdbcUtils class
> JdbcUtils
> 
> <- elementData - class: java.util.Vector, value: java.lang.Object[] #37309
> 
> <- classes - class: sun.misc.Launcher$AppClassLoader, value: java.util.Vector
> #31
> 
> <- contextClassLoader (thread object) - class: java.lang.Thread, value:
> sun.misc.Launcher$AppClassLoader #1
> 
> 
> 
> org.h2.util.JdbcUtils has a static field JavaObjectSerializer serializer, which
> sees IgniteKernal via IgniteH2Indexing. It makes a closed and stopped
> IgniteKernal non-collectable by GC.
> 
> If several Ignites run in the same JVM, JdbcUtils will always use only one,
> and this can cause races.
> 
> 
> 
>   2. The second way:
> 
> 
> 
> this - value: org.apache.ignite.internal.IgniteKernal #2
> 
> <- grid - class: org.apache.ignite.internal.GridKernalContextImpl, value:
> org.apache.ignite.internal.IgniteKernal #2
> 
> <- ctx - class: org.apache.ignite.internal.processors.cache.GridCacheContext,
> value: org.apache.ignite.internal.GridKernalContextImpl #1
> 
> <- cctx - class:
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry,
> value: org.apache.ignite.internal.processors.cache.GridCacheContext #24
> 
> <- parent - class:
> org.apache.ignite.internal.processors.cache.GridCacheMvccCandidate, value:
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry
> #4
> 
> <- [0] - class: java.lang.Object[], value:
> org.apache.ignite.internal.processors.cache.GridCacheMvccCandidate #1
> 
> <- elements - class: java.util.ArrayDeque, value: java.lang.Object[] #43259
> 
> <- value - class: java.lang.ThreadLocal$ThreadLocalMap$Entry, value:
> java.util.ArrayDeque #816
> 
> <- [119] - class: java.lang.ThreadLocal$ThreadLocalMap$Entry[], value:
> java.lang.ThreadLocal$ThreadLocalMap$Entry #51
> 
> <- table - class: java.lang.ThreadLocal$ThreadLocalMap, value:
> java.lang.ThreadLocal$ThreadLocalMap$Entry[] #21
> 
> <- threadLocals (thread object) - class: java.lang.Thread, value:
> java.lang.ThreadLocal$ThreadLocalMap #2
> 
> 
> 
> A link to IgniteKernal leaks into a ThreadLocal variable, so when we start/stop
> many instances of Ignite in the same JVM during testing, we get many stopped
> "zombie" ignites in the ThreadLocal context of the main test thread, and it
> causes OutOfMemory after some dozens of tests.
> 
> 
> 
> Andrey.
> 
> 
> 
> 


Re: Re: RE: Re: Unsafe usage of org.h2.util.JdbcUtils in Ignite

2020-03-25 Thread Ilya Kasnacheev
Hello!

I have never seen a stack trace like this one. Can you provide a reproducer
for this behavior?

Regards,
-- 
Ilya Kasnacheev


пт, 20 мар. 2020 г. в 20:20, Andrey Davydov :

> Hello,
>
> Current implementation is really unsafe for multiple Ignite in same JVM.
> In tests for our system when we stop/start nodes in different order we get
> following error:
>
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
> initialize system DB connection:
> jdbc:h2:mem:fd191fac-c2f1-4398-bf8a-0dcddf651830;LOCK_MODE=3;MULTI_THREADED=1;DB_CLOSE_ON_EXIT=FALSE;DEFAULT_LOCK_TIMEOUT=1;FUNCTIONS_IN_SCHEMA=true;OPTIMIZE_REUSE_RESULTS=0;QUERY_CACHE_SIZE=0;MAX_OPERATION_MEMORY=0;BATCH_JOINS=1;ROW_FACTORY="org.apache.ignite.internal.processors.query.h2.opt.H2PlainRowFactory";DEFAULT_TABLE_ENGINE=org.apache.ignite.internal.processors.query.h2.opt.GridH2DefaultTableEngine
> at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1402)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1035)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:921)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
> at com.example.testutils.TestNode.start(TestNode.java:75)
> ... 38 more
> Caused by: class
> org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
> initialize system DB connection:
> jdbc:h2:mem:fd191fac-c2f1-4398-bf8a-0dcddf651830;LOCK_MODE=3;MULTI_THREADED=1;DB_CLOSE_ON_EXIT=FALSE;DEFAULT_LOCK_TIMEOUT=1;FUNCTIONS_IN_SCHEMA=true;OPTIMIZE_REUSE_RESULTS=0;QUERY_CACHE_SIZE=0;MAX_OPERATION_MEMORY=0;BATCH_JOINS=1;ROW_FACTORY="org.apache.ignite.internal.processors.query.h2.opt.H2PlainRowFactory";DEFAULT_TABLE_ENGINE=org.apache.ignite.internal.processors.query.h2.opt.GridH2DefaultTableEngine
> at
> org.apache.ignite.internal.processors.query.h2.ConnectionManager.connectionNoCache(ConnectionManager.java:213)
> at
> org.apache.ignite.internal.processors.query.h2.ConnectionManager.(ConnectionManager.java:152)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(IgniteH2Indexing.java:2070)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.start(GridQueryProcessor.java:256)
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1978)
> at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1212)
> ... 46 more
> Caused by: java.sql.SQLException: No suitable driver found for
> jdbc:h2:mem:fd191fac-c2f1-4398-bf8a-0dcddf651830;LOCK_MODE=3;MULTI_THREADED=1;DB_CLOSE_ON_EXIT=FALSE;DEFAULT_LOCK_TIMEOUT=1;FUNCTIONS_IN_SCHEMA=true;OPTIMIZE_REUSE_RESULTS=0;QUERY_CACHE_SIZE=0;MAX_OPERATION_MEMORY=0;BATCH_JOINS=1;ROW_FACTORY="org.apache.ignite.internal.processors.query.h2.opt.H2PlainRowFactory";DEFAULT_TABLE_ENGINE=org.apache.ignite.internal.processors.query.h2.opt.GridH2DefaultTableEngine
> at java.sql.DriverManager.getConnection(DriverManager.java:689)
> at java.sql.DriverManager.getConnection(DriverManager.java:270)
> at
> org.apache.ignite.internal.processors.query.h2.ConnectionManager.connectionNoCache(ConnectionManager.java:206)
> ... 51 more
>
>
> On Thu, Mar 19, 2020 at 6:52 PM Andrey Davydov 
> wrote:
>
>> It seems like it's moving in the right way =) Let's wait for the release.
>>
>>
>>
>> Andrey.
>>
>>
>>
>> *От: *Andrey Mashenkov 
>> *Отправлено: *19 марта 2020 г. в 16:28
>> *Кому: *user@ignite.apache.org
>> *Тема: *Re: RE: Re: Unsafe usage of org.h2.util.JdbcUtils in Ignite
>>
>>
>>
>> Hi,
>>
>>
>>
>> In Apache Ignite master branch I see a separate class
>> H2JavaObjectSerializer that implements JavaObjectSerializer.
>>
>> Seems, this won't be released in 2.8
>>
>> https://issues.apache.org/jira/browse/IGNITE-12609
>>
>>
>>
>> On Thu, Mar 19, 2020 at 4:03 PM Andrey Davydov 
>> wrote:
>>
>> It seems that refactoring the h2Serializer method in the following manner
>> will be safe for marshallers which do not depend on the ignite instance, and
>> will be faster anyway due to single clsLdr resolving. For the binary
>> marshaller the solution is still unsafe =(((
>>
>>
>>
>> private JavaObjectSerializer h2Serializer() {
>>
>> ClassLoader clsLdr = ctx != null ?
>> U.resolveClassLoader(ctx.config()) : n

Re: HOW TO CHANGE IGNITE JAVA THIN CLIENT PASSWORD

2020-03-23 Thread Ilya Kasnacheev
Hello!

I have just rechecked this case:
1) start persistent node.
2) try changing password - failure (cluster not active)
3) activate cluster (control.sh --activate, ignite/ignite)
4) try changing password - success.
5) check with sqlline - new password (test) accepted.
6) try connecting with thin client and ignite/ignite again - failure.

Are you sure you're not losing your cluster together with all data between
runs?

igniteClient.query(new SqlFieldsQuery("ALTER USER \"ignite\" WITH
PASSWORD 'test'")).getAll();


Regards,
-- 
Ilya Kasnacheev


ср, 18 мар. 2020 г. в 11:47, DS :

>
>
> igniteClient.query(new SqlFieldsQuery(" ALTER USER 'ignite' WITH PASSWORD
> 'password' "));
>
> igniteClient.query(new SqlFieldsQuery(" ALTER USER "ignite'' WITH PASSWORD
> 'password' "));
>
> 1) Both give the same result, i.e. the query runs without throwing any
> error/exception.
>
> 2) Unable to connect with the new password.
>
> 3) Again tried with 'ignite' as password, it connects back.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Re: Unsafe usage of org.h2.util.JdbcUtils in Ignite

2020-03-19 Thread Ilya Kasnacheev
Hello!

I suggest raising these issues on developer list and/or filing tickets
against IGNITE.

Regards,
-- 
Ilya Kasnacheev


чт, 19 мар. 2020 г. в 15:43, Andrey Davydov :

> I have done some R&D with Apache Felix this week to find a workaround for
> multi-tenancy of H2.
>
>
>
> But there is problem with some Ignites in same JVM.
>
>
>
> As I see in org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing,
> the latest started Ignite will be visible via JdbcUtils.serializer, and it
> can already be closed and its workdir can be deleted.
>
>
>
> Line 2105:
>
>
>
>
>
> if (JdbcUtils.serializer != null)
>
> U.warn(log, "Custom H2 serialization is already configured,
> will override.");
>
>
>
> JdbcUtils.serializer = h2Serializer();
>
>
>
> Line 2268:
>
>
>
> private JavaObjectSerializer h2Serializer() {
>     return new JavaObjectSerializer() { // nested class has link to
>                                         // parent IgniteH2Indexing and to ignite instance transitively
>         @Override public byte[] serialize(Object obj) throws Exception {
>             return U.marshal(marshaller, obj); // in the common case, binary
>                                                // marshaller logic depends on the work dir
>         }
>
>         @Override public Object deserialize(byte[] bytes) throws Exception {
>             ClassLoader clsLdr = ctx != null ? U.resolveClassLoader(
>                 ctx.config()) : null; // only the configuration is needed, but all of ctx leaks
>
>             return U.unmarshal(marshaller, bytes, clsLdr);
>         }
>     };
> }
>
>
>
>
>
> Andrey.
>
>
>
> *From: *Ilya Kasnacheev 
> *Sent: *March 19, 2020 at 14:37
> *To: *user@ignite.apache.org
> *Subject: *Re: Unsafe usage of org.h2.util.JdbcUtils in Ignite
>
>
>
> Hello!
>
>
>
> As far as my understanding goes:
>
>
>
> 1) It is H2's decision to expose JdbcUtils.serializer as their public API;
> they have a public system property to override it:
>
> /**
>  * System property h2.javaObjectSerializer
>  * (default: null).
>  * The JavaObjectSerializer class name for java objects being stored in
>  * column of type OTHER. It must be the same on client and server to work
>  * correctly.
>  */
> public static final String JAVA_OBJECT_SERIALIZER =
>     Utils.getProperty("h2.javaObjectSerializer", null);
>
>
>
> Obviously, this was not designed with multi-tenancy of H2 in mind.
>
>
>
> If you really need multi-tenancy, I recommend starting H2 in a separate
> class loader inherited from root class loader and isolated from any Ignite
> classes.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Wed, Mar 18, 2020 at 18:54, Andrey Davydov :
>
> Hello,
>
>
>
> org.h2.util.JdbcUtils is a utility class with all static methods, configured
> via system properties. It is therefore a system-wide resource, and it is
> incorrect to inject Ignite-specific settings into it.
>
>
>
> this - value: org.apache.ignite.internal.IgniteKernal #1
>
> <- grid - class: org.apache.ignite.internal.GridKernalContextImpl,
> value: org.apache.ignite.internal.IgniteKernal #1
>
>   <- ctx - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing, value:
> org.apache.ignite.internal.GridKernalContextImpl #2
>
><- this$0 - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing #2
>
> <- serializer - class: org.h2.util.JdbcUtils, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10 #1
>
>  <- [5395] - class: java.lang.Object[], value:
> org.h2.util.JdbcUtils class JdbcUtils
>
>   <- elementData - class: java.util.Vector, value:
> java.lang.Object[] #37309
>
><- classes - class: sun.misc.Launcher$AppClassLoader, value:
> java.util.Vector #31
>
> <- contextClassLoader (thread object) - class:
> java.lang.Thread, value: sun.misc.Launcher$AppClassLoader #1
>
>
>
>1. It causes problems if one needs to work with H2 databases from the same
>JVM where Ignite runs.
>2. It causes problems when several Ignite instances run in the same JVM.
>3. It makes a closed IgniteKernal reachable from a GC root.
>
>
>
> I think it is a bad architectural decision to use this class, or H2-related
> system properties, at all.
>
>
>
> Andrey.
>
>
>
>
>


Re: ClusterTopologyServerNotFoundException

2020-03-19 Thread Ilya Kasnacheev
Hello!

I no longer understand your architecture. First you say you want a
replicated highly-available cache, then you say you only have one server
node. Can you please elaborate?

Regards.
-- 
Ilya Kasnacheev


Tue, Mar 17, 2020 at 10:45, prudhvibiruda :

> Hi,
> We didn't explicitly define a baseline topology, because we don't want any of
> our nodes (all our nodes should be server nodes) waiting for other nodes.
> So at present, within our cluster, we have only one server node.
> Are you saying that we should customize this baseline topology? But why
> does this error occur when we have only one node in our cluster?
>
> Thanks for the quick reply,
> Prudhvi
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite memory leaks in 2.8.0

2020-03-19 Thread Ilya Kasnacheev
Hello!

Our test suites start tens of thousands of nodes during every suite run.

If there were any leaks in the start-stop scenario, we would surely notice.
I recommend checking why this is a problem in your scenario.

The problem you have mentioned may cause problems with class de-loading,
however. Do you bring a new class loader for each test?

Can you file an issue about this so that we can implement proper de-allocation?

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 18, 2020 at 18:37, Andrey Davydov :

> Hello,
>
>
>
> There are at least two ways a link to IgniteKernal leaks to a GC root and
> makes it uncollectable by the GC.
>
>
>
>1. The first one:
>
>
>
> this - value: org.apache.ignite.internal.IgniteKernal #1
>
> <- grid - class: org.apache.ignite.internal.GridKernalContextImpl,
> value: org.apache.ignite.internal.IgniteKernal #1
>
>   <- ctx - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing, value:
> org.apache.ignite.internal.GridKernalContextImpl #2
>
><- this$0 - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing #2
>
> <- serializer - class: org.h2.util.JdbcUtils, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10 #1
>
>  <- [5395] - class: java.lang.Object[], value:
> org.h2.util.JdbcUtils class JdbcUtils
>
>   <- elementData - class: java.util.Vector, value:
> java.lang.Object[] #37309
>
><- classes - class: sun.misc.Launcher$AppClassLoader, value:
> java.util.Vector #31
>
> <- contextClassLoader (thread object) - class:
> java.lang.Thread, value: sun.misc.Launcher$AppClassLoader #1
>
>
>
> org.h2.util.JdbcUtils has a static field, JavaObjectSerializer serializer, which
> sees IgniteKernal via IgniteH2Indexing. It makes a closed and stopped
> IgniteKernal non-collectable by the GC.
>
> If several Ignite instances run in the same JVM, JdbcUtils will always use only
> one of them, and this can cause races.
>
>
>
>2. The second way:
>
>
>
> this - value: org.apache.ignite.internal.IgniteKernal #2
>
> <- grid - class: org.apache.ignite.internal.GridKernalContextImpl,
> value: org.apache.ignite.internal.IgniteKernal #2
>
>   <- ctx - class:
> org.apache.ignite.internal.processors.cache.GridCacheContext, value:
> org.apache.ignite.internal.GridKernalContextImpl #1
>
><- cctx - class:
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry,
> value: org.apache.ignite.internal.processors.cache.GridCacheContext #24
>
> <- parent - class:
> org.apache.ignite.internal.processors.cache.GridCacheMvccCandidate, value:
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry
> #4
>
>  <- [0] - class: java.lang.Object[], value:
> org.apache.ignite.internal.processors.cache.GridCacheMvccCandidate #1
>
>   <- elements - class: java.util.ArrayDeque, value:
> java.lang.Object[] #43259
>
><- value - class: java.lang.ThreadLocal$ThreadLocalMap$Entry,
> value: java.util.ArrayDeque #816
>
> <- [119] - class:
> java.lang.ThreadLocal$ThreadLocalMap$Entry[], value:
> java.lang.ThreadLocal$ThreadLocalMap$Entry #51
>
>  <- table - class: java.lang.ThreadLocal$ThreadLocalMap,
> value: java.lang.ThreadLocal$ThreadLocalMap$Entry[] #21
>
>   <- threadLocals (thread object) - class: java.lang.Thread,
> value: java.lang.ThreadLocal$ThreadLocalMap #2
>
>
>
> A link to IgniteKernal leaks into a ThreadLocal variable, so when we start/stop
> many instances of Ignite in the same JVM during testing, we get many stopped
> "zombie" Ignites in the ThreadLocal context of the main test thread, and it
> causes OutOfMemoryError after a few dozen tests.
>
>
>
> Andrey.
>
>
>


Re: Unsafe usage of org.h2.util.JdbcUtils in Ignite

2020-03-19 Thread Ilya Kasnacheev
Hello!

As far as my understanding goes:

1) It is H2's decision to expose JdbcUtils.serializer as their public API;
they have a public system property to override it:

/**
 * System property h2.javaObjectSerializer
 * (default: null).
 * The JavaObjectSerializer class name for java objects being stored in
 * column of type OTHER. It must be the same on client and server to work
 * correctly.
 */
public static final String JAVA_OBJECT_SERIALIZER =
Utils.getProperty("h2.javaObjectSerializer", null);


Obviously, this was not designed with multi-tenancy of H2 in mind.

If you really need multi-tenancy, I recommend starting H2 in a separate
class loader inherited from root class loader and isolated from any Ignite
classes.
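
A minimal sketch of that isolation (the jar path is illustrative):

// Load a private copy of H2 in a loader whose parent is the bootstrap loader,
// so it sees neither Ignite's classes nor Ignite's JdbcUtils settings.
URLClassLoader h2Ldr = new URLClassLoader(
    new URL[] { new File("/opt/h2/h2-1.4.197.jar").toURI().toURL() },
    null); // null parent == bootstrap class loader

Class<?> h2Driver = h2Ldr.loadClass("org.h2.Driver");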

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 18, 2020 at 18:54, Andrey Davydov :

> Hello,
>
>
> org.h2.util.JdbcUtils is a utility class with all static methods, configured
> via system properties. It is therefore a system-wide resource, and it is
> incorrect to inject Ignite-specific settings into it.
>
>
>
> this - value: org.apache.ignite.internal.IgniteKernal #1
>
> <- grid - class: org.apache.ignite.internal.GridKernalContextImpl,
> value: org.apache.ignite.internal.IgniteKernal #1
>
>   <- ctx - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing, value:
> org.apache.ignite.internal.GridKernalContextImpl #2
>
><- this$0 - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing #2
>
> <- serializer - class: org.h2.util.JdbcUtils, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10 #1
>
>  <- [5395] - class: java.lang.Object[], value:
> org.h2.util.JdbcUtils class JdbcUtils
>
>   <- elementData - class: java.util.Vector, value:
> java.lang.Object[] #37309
>
><- classes - class: sun.misc.Launcher$AppClassLoader, value:
> java.util.Vector #31
>
> <- contextClassLoader (thread object) - class:
> java.lang.Thread, value: sun.misc.Launcher$AppClassLoader #1
>
>
>
>1. It causes problems if one needs to work with H2 databases from the same
>JVM where Ignite runs.
>2. It causes problems when several Ignite instances run in the same JVM.
>3. It makes a closed IgniteKernal reachable from a GC root.
>
>
>
> I think it is a bad architectural decision to use this class, or H2-related
> system properties, at all.
>
>
>
> Andrey.
>
>
>


Re: Failed to bind to any [host:port] from the range portFrom=10900 , portTo=11000

2020-03-19 Thread Ilya Kasnacheev
Hello!

Is it possible that you have specified an incorrect address in your
ClientConnectorConfiguration? If the host is not local, then it won't be able
to bind to any ports.
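
For example, a configuration sketch that binds on the server side (the
addresses are illustrative; the host must be local to the node):

IgniteConfiguration igniteCfg = new IgniteConfiguration();

igniteCfg.setClientConnectorConfiguration(
    new ClientConnectorConfiguration()
        .setHost("0.0.0.0") // bind on all local interfaces
        .setPort(10800));

Ignite ignite = Ignition.start(igniteCfg);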

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 18, 2020 at 15:23, AravindJP :

> Hi, I have set up an Ignite cluster in Google Cloud following this document:
> https://apacheignite.readme.io/docs/google-cloud-deployment. My Ignite
> client code which connects to 10800 works fine, i.e.:
>
> // THIS WORKS !!
> ClientConfiguration ccfg = new ClientConfiguration().setAddresses("xx.xx.xx.xx:10800");
> IgniteClient igniteClient = Ignition.startClient(ccfg);
> ClientCache clientcache = igniteClient.getOrCreateCache(CACHE_NAME);
> clientcache.put(key, address);
> clientcache.get(key);
>
> But when I tried to connect as a client node, it doesn't work at all! Can
> someone tell me what could be the reason?
>
> // tried 10800 and 10900
> ClientConnectorConfiguration cfg = new ClientConnectorConfiguration()
>     .setHost("xx.xx.xx.xx")
>     .setPort(10900);
> IgniteConfiguration ignitecfg = new IgniteConfiguration();
> ignitecfg.setClientConnectorConfiguration(cfg);
> IgniteDataStreamer dataSteamer = ignite.dataStreamer("put-get-example");
> Address val = new Address("sample address", 94612);
> Integer key = 1;
> dataSteamer.addData(key, val);
>
> It fails to run with the below exception:
> org.apache.ignite.IgniteCheckedException: Failed to start processor:
> GridProcessorAdapter [] at
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1981)
> ~[ignite-core-2.8.0.jar:2.8.0] at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1213)
> ~[ignite-core-2.8.0.jar:2.8.0] at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
> [ignite-core-2.8.0.jar:2.8.0] at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
> [ignite-core-2.8.0.jar:2.8.0] at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
> [ignite-core-2.8.0.jar:2.8.0] at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637)
> [ignite-core-2.8.0.jar:2.8.0] at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:563)
> [ignite-core-2.8.0.jar:2.8.0] at
> org.apache.ignite.Ignition.start(Ignition.java:321)
> [ignite-core-2.8.0.jar:2.8.0] at
> com.mainad.userlistignitesetup.UserListIgniteSetupApplication.main(UserListIgniteSetupApplication.java:50)
> [classes/:na] Caused by: org.apache.ignite.IgniteCheckedException: Failed
> to start client connector processor. at
> org.apache.ignite.internal.processors.odbc.ClientListenerProcessor.start(ClientListenerProcessor.java:209)
> ~[ignite-core-2.8.0.jar:2.8.0] at
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1978)
> ~[ignite-core-2.8.0.jar:2.8.0] ... 8 common frames omitted Caused by:
> org.apache.ignite.IgniteCheckedException: Failed to bind to any [host:port]
> from the range [host=xx.xx.xx.xx, portFrom=10900, portTo=11000,
> lastErr=class org.apache.ignite.IgniteCheckedException: Failed to
> initialize NIO selector.] at
> org.apache.ignite.internal.processors.odbc.ClientListenerProcessor.start(ClientListenerProcessor.java:197)
> ~[ignite-core-2.8.0.jar:2.8.0] ... 9 common frames omitted
> --
> Sent from the Apache Ignite Users mailing list archive
> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>


Re: HOW TO CHANGE IGNITE JAVA THIN CLIENT PASSWORD

2020-03-17 Thread Ilya Kasnacheev
Hello!

ALTER USER "ignite" WITH PASSWORD 'new password';

Yep!

Regards,
-- 
Ilya Kasnacheev


Tue, Mar 17, 2020 at 15:00, dbutkovic :

> try with ALTER USER 'ignite' WITH PASSWORD 'test'
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to terminate long running transactions in 2.4.0?

2020-03-17 Thread Ilya Kasnacheev
Hello!

VM-level deadlock is a deadlock on synchronized blocks in Java.

It's hard to say what happens in your case. Do you have a reproducer for
this behavior?

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 16, 2020 at 20:49, rc :

> Hi Ilya,
>
> Thanks for responding. Killing the originator nodes did not terminate the
> transactions. I would like to understand more about the VM-level deadlock.
> How does one go about determining that?
>
> Thanks,
> rc
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: java.lang.IllegalMonitorStateException: attempt to unlock read lock, not locked by current thread

2020-03-16 Thread Ilya Kasnacheev
Hello!

Are you sure that you are not using the same connection from two threads
concurrently? Ignite thin connections are not thread-safe.
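
For example, giving each thread its own connection (a sketch; the URL is
illustrative):

ThreadLocal<Connection> conn = ThreadLocal.withInitial(() -> {
    try {
        return DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
    }
    catch (SQLException e) {
        throw new RuntimeException(e);
    }
});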

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 12, 2020 at 11:47, yangjiajun <1371549...@qq.com>:

> Hello.
>
> I get a strange exception while testing Ignite 2.8 in our test env. I get the
> following exception when I set lazy = true to run our tasks. The tasks do
> JDBC selects and insert the results into other Ignite tables. The tasks are
> async. The exception is thrown when my code fetches 1024 rows. Such tasks work
> well in Ignite 2.7, and they also work well when I set lazy = false in Ignite
> 2.8.
>
> This is the exception in client:
> java.sql.SQLException: General error:
> "java.lang.IllegalMonitorStateException: attempt to unlock read lock, not
> locked by current thread" [5-197]
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:901)
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinResultSet.next(JdbcThinResultSet.java:206)
> at
> com.zaxxer.hikari.pool.HikariProxyResultSet.next(HikariProxyResultSet.java)
> .
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
> The log in server:
> ignite-369ae417.rar
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2059/ignite-369ae417.rar>
>
>
> Unfortunately, I still can't make a simple reproducer in my local env. Do you
> have any ideas about such an exception?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Warning in the logs - Writes are very slow

2020-03-16 Thread Ilya Kasnacheev
Hello!

Let me add to the previous answers:

This message means that checkpoint page buffer is almost exhausted. It is
recommended to increase its size (dataRegionCfg.checkpointPageBufferSize)
if you have a lot of writes.

Please see
https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 13, 2020 at 09:39, krkumar24061...@gmail.com <krkumar24061...@gmail.com>:

> Hi All - I see this message in the logs
>
> [2020-03-12 15:23:29,896][INFO
> ][comcastprod-1-StoreFlushWorker-7][PageMemoryImpl] Throttling is applied
> to
> page modifications [*percentOfPartTime=0.54*, markDirty=8185 pages/sec,
> checkpointWrite=14893 pages/sec, estIdealMarkDirty=8182 pages/sec,
> curDirty=0.35, maxDirty=0.28, avgParkTime=1654529 ns, pages:
> (total=5700803,
> evicted=13463, written=3967472, synced=0, cpBufUsed=307302,
> cpBufTotal=518215)]
>
> From the docs, I understand that this thread is being parked for almost 54% of
> the time as part of throttling. I wanted to know how to solve this issue and
> what can be done to avoid it.
>
> Thanx and Regards,
> KR Kumar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ClusterTopologyServerNotFoundException

2020-03-16 Thread Ilya Kasnacheev
Hello!

Do you have baseline topology? What does it contain?

I'm pretty sure Ignite will define baseline topology for persistent
cluster, and you will have to adjust it to match your actual nodes.

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 16, 2020 at 09:27, prudhvibiruda :

> Hi ,
> Sorry for the late reply.
> Please find these attached screenshots of my ignite configuration and also
> the error we are getting.
> We are using spring boot in our project.
> Our requirements are:
> 1.) We need Ignite to store data on disk.
> 2.) When a node is down, the other node should continue persisting
> to the disk.
> Please help us find what we are missing. Why are we getting that exception?
> What are your suggestions?
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2790/Capture1.png>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2790/Capture2.png>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2790/Capture3.png>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: 2.8.0 : JDBC Thin Client : Unable to load the tables via DBeaver

2020-03-12 Thread Ilya Kasnacheev
Hello!

I can see a lot of activity from your side, and ask you to participate in
debugging and development.

Please file tickets about the issues that you encounter and propose fixes.
I think you have as much expertise around writing security plugin as
anybody else.

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 12, 2020 at 16:19, VeenaMithare :

> This fails when the security plugin is enabled. When I remove the security
> plugin, the query goes through.
>
> With the security plugin enabled, it hangs at
> GridReduceQueryExecutor, in the awaitAllReplies method. Attached is a
> screenshot of the thread when it hangs.
> Steps I take :
> 1. Start my server with 2.8.0 and the security plugin enabled.
>
> Also the logs are :
>
>
> 
>
> Server 1 :
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: New next node [newNext=TcpDiscoveryNode
> [id=c003c5d9-08c9-404e-91c8-0a2b3ddbbb5a,
> consistentId=0:0:0:0:0:0:0:1,x.x.x.y,127.0.0.1:47501, addrs=ArrayList
> [0:0:0:0:0:0:0:1, x.x.x.y, 127.0.0.1], sockAddrs=HashSet
> [/0:0:0:0:0:0:0:1:47501, /127.0.0.1:47501,
> machinename.companyname.LOCAL/x.x.x.y:47501], discPort=47501, order=0,
> intOrder=2, lastExchangeTime=1584018173124, loc=false,
> ver=2.8.0#20200226-sha1:341b01df, isClient=false]]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: TCP discovery accepted incoming connection [rmtAddr=/0:0:0:0:0:0:0:1,
> rmtPort=63143]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: TCP discovery spawning a new thread for connection
> [rmtAddr=/0:0:0:0:0:0:0:1, rmtPort=63143]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Started serving remote node connection
> [rmtAddr=/0:0:0:0:0:0:0:1:63143, rmtPort=63143]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Initialized connection with remote server node
> [nodeId=c003c5d9-08c9-404e-91c8-0a2b3ddbbb5a,
> rmtAddr=/0:0:0:0:0:0:0:1:63143]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Received activate request with BaselineTopology[id=0]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Started state transition: true
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Received state change finish message: true
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Added new node to topology: TcpDiscoveryNode
> [id=c003c5d9-08c9-404e-91c8-0a2b3ddbbb5a,
> consistentId=0:0:0:0:0:0:0:1,x.x.x.y,127.0.0.1:47501, addrs=ArrayList
> [0:0:0:0:0:0:0:1, x.x.x.y, 127.0.0.1], sockAddrs=HashSet
> [/0:0:0:0:0:0:0:1:47501, /127.0.0.1:47501,
> machinename.companyname.LOCAL/x.x.x.y:47501], discPort=47501, order=2,
> intOrder=2, lastExchangeTime=1584018173124, loc=false,
> ver=2.8.0#20200226-sha1:341b01df, isClient=false]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Topology snapshot [ver=2, locNode=96b07658, servers=2, clients=0,
> state=ACTIVE, CPUs=32, offheap=13.0GB, heap=14.0GB]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO:   ^-- Baseline [id=0, size=2, online=2, offline=0]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Started exchange init [topVer=AffinityTopologyVersion [topVer=2,
> minorTopVer=0], crd=true, evt=NODE_JOINED,
> evtNode=c003c5d9-08c9-404e-91c8-0a2b3ddbbb5a, customEvt=null,
> allowMerge=true, exchangeFreeSwitch=false]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Finished waiting for partition release future
> [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], waitTime=0ms,
> futInfo=NA, mode=DISTRIBUTED]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Finished waiting for partitions release latch: ServerLatch
> [permits=0,
> pendingAcks=HashSet [], super=CompletableLatch [id=CompletableLatchUid
> [id=exchange, topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Finished waiting for partition release future
> [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], waitTime=0ms,
> futInfo=NA, mode=LOCAL]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Finished exchange init [topVer=AffinityTopologyVersion [topVer=2,
> minorTopVer=0], crd=true]
> Mar 12, 2020 1:02:53 PM org.apache.ignite.logger.java.JavaLogger info
> INFO: Accepted incoming communication connection [locAddr=/127.0.0.1:47100
> ,
> rmtAddr=/127.0.0.1:63144]
> Mar 12, 202

Re: cache/table metadata

2020-03-12 Thread Ilya Kasnacheev
Hello!

I don't think there is any utility, but you can use
ignite.binary().type(string) to clarify field types.
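
For example (a sketch; assumes you know the cache's value type name):

BinaryType type = ignite.binary().type("Person");

for (String fld : type.fieldNames())
    System.out.println(fld + ": " + type.fieldTypeName(fld));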

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 12, 2020 at 18:05, narges saleh :

> Thanks Ilya.
>
>  Is there any utility that returns the (Java) field types for query-entity-
> defined caches? I don't want to have to map JDBC types to Java types.
>
> On Thu, Mar 12, 2020 at 6:41 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> You can use JDBC's DatabaseMetaData for that.
>>
>> Please see
>> https://docs.oracle.com/javase/8/docs/api/java/sql/DatabaseMetaData.html
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, Mar 12, 2020 at 05:49, narges saleh :
>>
>>> Hi All,
>>>
>>> How would one extract, programmatically, the metadata for the
>>> fields/columns for a cache/table? I want to get the field names along with
>>> the data type for each field, say, for example, if the table is defined via
>>> queryentity or SQL/JDBC.
>>>
>>> thanks.
>>>
>>


Re: cache/table metadata

2020-03-12 Thread Ilya Kasnacheev
Hello!

You can use JDBC's DatabaseMetaData for that.

Please see
https://docs.oracle.com/javase/8/docs/api/java/sql/DatabaseMetaData.html
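
For example (a sketch; the JDBC URL is illustrative):

try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) {
    DatabaseMetaData md = conn.getMetaData();

    // List every column in the PUBLIC schema together with its SQL type name.
    try (ResultSet rs = md.getColumns(null, "PUBLIC", "%", "%")) {
        while (rs.next())
            System.out.println(rs.getString("TABLE_NAME") + "." +
                rs.getString("COLUMN_NAME") + ": " + rs.getString("TYPE_NAME"));
    }
}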

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 12, 2020 at 05:49, narges saleh :

> Hi All,
>
> How would one extract, programmatically, the metadata for the
> fields/columns for a cache/table? I want to get the field names along with
> the data type for each field, say, for example, if the table is defined via
> queryentity or SQL/JDBC.
>
> thanks.
>


Re: Issue in Distributed joins

2020-03-12 Thread Ilya Kasnacheev
Hello!

Apache Ignite has known limitations on the use of distributed SQL, namely
that there is just one reduce phase.

I'm not actually sure whether you have hit this exact one using this query.
Let's wait on feedback from SQL people (who are probably busy with their
Calcite affair), but of course you are free to file an IGNITE ticket.

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 11, 2020 at 19:39, DS :

> Hello again!
>
> Having one table as replicated and other as partitioned works already.
> Can you please file a defect for why nested joins are not working in the
> distributed mode as claimed.
>
> Regards
> Deepika Singh
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: 2.8.0 : JDBC Thin Client : Unable to load the tables via DBeaver

2020-03-11 Thread Ilya Kasnacheev
Hello!

I have just tried it with DBeaver 4.2.2 and it runs OK.

Consider also adding ignite indexing jar to class path.

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 11, 2020 at 18:39, VeenaMithare :

> Hi ,
>
> Yes, I have put the 2.8.0 ignite-core jar on the path for DBeaver to pick up
> the latest JDBC jars.
>
> Steps to reproduce :
> 1. Create a table on dbeaver :  CREATE TABLE TEST (
> USERNAME VARCHAR,
> APPLICATIONNAME VARCHAR,
> MACHINENAME VARCHAR,
> PRIMARY KEY ( USERNAME)
> )
>
>
> 2. Try and do SELECT * FROM PUBLIC.TEST
>
> The query runs for ever .
>
> regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite 2.7.6: Memory Leak with Direct buffers in TCPCommunication SPI

2020-03-11 Thread Ilya Kasnacheev
Hello!

I can see that you have 19G of Object[] in heap. I'm not sure if it has any
relation to direct buffers.

Can you debug where these Object[]'s are referenced from?

As a side note, our test suites bring tens of thousands of nodes up and
down in one JVM and do not face this issue.

Regards,
-- 
Ilya Kasnacheev


Sun, Mar 1, 2020 at 08:12, Mahesh Renduchintala <mahesh.renduchint...@aline-consulting.com>:

> Please see the attached jhist.
> In this condition one of the node consumed about 18 GB.
>


Re: cache.containsKey returns false

2020-03-11 Thread Ilya Kasnacheev
Hello!

As far as my understanding goes, this is how readThrough works. It may
update only a subset of REPLICATED cache partitions across the cluster.

Regards,
-- 
Ilya Kasnacheev


Fri, Feb 28, 2020 at 21:09, Prasad Bhalerao :

> Hi,
>
> We have not set any expiration/ eviction policy.
> We were getting false for key "key1", so we checked the key/value by querying
> the cache using the web console, and the value is present in the cache. But we
> still get false for key "key1" on subsequent containsKey executions.
>
> But for key "key2" we were getting true.
>
> Thanks,
> Prasad
>
> On Fri 28 Feb, 2020, 10:58 PM Denis Magda 
>> Hi Akash,
>>
>> Do you execute the cache.contains() method after reading-through the
>> record with cache.get()? Do you have any expiration/eviction policies set
>> that may purge the record from memory after being loaded from disk?
>>
>> -
>> Denis
>>
>>
>> On Fri, Feb 28, 2020 at 9:11 AM Akash Shinde 
>> wrote:
>>
>>> Hi,
>>> I am using Ignite 2.6 version.
>>>
>>> I have partitioned cache, read-through and write-through is enabled.
>>> Back-up count is 1 and total number of server nodes in cluster are  3.
>>>
>>> When I try to get the data from a cache for a key using cache.get(key)
>>> method, ignite reads the value from database using provided cache loader
>>> and returns the value by read-through approach.
>>>
>>> But when I execute cache().containsKey(key) on client node, I get false.
>>>
>>> But the strange this is this behavior is not same for all keys of the
>>> same cache.
>>> For key1 I get false but for key2 I get true. But both the keys are
>>> present in cache.
>>>
>>> I executed the SQL on every node (one node at a time) using the web console;
>>> the data is present only on one node out of three. That node seems to be the
>>> primary for this particular key.
>>>
>>> Can someone please advise why this is happening? Is it a bug in ignite?
>>> This seems to be a very basic case.
>>>
>>>
>>>
>>> Thanks,
>>> Akash
>>>
>>


Re: Ignite Transaction vs Lock

2020-03-11 Thread Ilya Kasnacheev
Hello!

I'm not sure if that is what you want; have you tried a transaction isolation
level of SERIALIZABLE?
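
One combination to try, as a minimal sketch (assumes a TRANSACTIONAL cache named
cache and a key named key; with PESSIMISTIC concurrency the first read locks the
entry, so other threads wait without an explicit Lock):

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE)) {
    Integer v = cache.get(key); // acquires the entry lock; concurrent transactions block here
    cache.put(key, v == null ? 1 : v + 1);
    tx.commit(); // the lock is released on commit (or rollback)
}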

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 2, 2020 at 10:43, Rout, Biswajeet :

> Hi,
>
> I use transactions in my project when I interact with the cache. The
> combinations of transaction isolation and concurrency modes I tried are not
> synchronous.
>
> I want other threads to wait until the currently running thread is done with
> the same cache.
>
> For better understanding, I am attaching two code samples, one using a
> transaction and one using a Lock.
>
> I want the transaction to behave the way the Lock does; basically,
> I want to avoid explicit locking.
> And below here are the two results snapshot of these codes.
>
> *Results using Lock:*
>
> [image: image.png]
> *Results using Transaction:*
>
> [image: image.png]
>
>
> Regards,
>
>
> Biswajeet *Rout*
>
>
> Dtix-DELPHI
>
>
> O +1 813 617 7739
> M 970 334 9977
>
>
>


Re: Issue in Distributed joins

2020-03-11 Thread Ilya Kasnacheev
Hello!

Bad news is that I can't say what went wrong here. Maybe people more
familiar with our SQL engine will chime in.

Good news is, I have a work-around. It is obvious in your case that
blood_group_info is a reference table (i.e. a small constant-size table rather
than a data table). I recommend switching its cache mode to REPLICATED
instead of PARTITIONED, and it should join correctly. Can you try that?
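
For example, if the table is created via SQL, a sketch of the change (the
column list is illustrative):

CREATE TABLE blood_group_info (
  id INT PRIMARY KEY,
  blood_group VARCHAR
) WITH "TEMPLATE=REPLICATED";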

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 11, 2020 at 13:05, DS :

> Hello Ilya,
> Please find the script in the attachment.
> The file also contains the query to run in comments.
>
> Also,
> *we have enabled non-collocated joins check. *
>
> forDistributedJoins.sql
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2210/forDistributedJoins.sql>
>
>
> Regards
> Deepika Singh
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: 2.8.0 : JDBC Thin Client : Unable to load the tables via DBeaver

2020-03-11 Thread Ilya Kasnacheev
Hello!

Do you have any steps to reproduce? Are you sure you are using latest
version of JDBC driver with DBeaver?

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 11, 2020 at 14:26, VeenaMithare :

> Hi Team,
>
> First of all, thank you for the SYS schema that is now available to view
> via
> Dbeaver. Looks interesting.
>
> The issue I am facing is that I am unable to query any of my tables in the
> public schema when I switch over to 2.8.0. These tables currently do not
> have any data in them. I get this message in the server logs:
> 2020-03-11 11:17:27,070 [grid-timeout-worker-#71] WARN
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener [] -
> Unable to perform handshake within timeout [timeout=6,
> remoteAddr=/127.0.0.1:49640]
>
> Please note the default ClientConnectorConfiguration handshake timeout is 10 seconds.
> I tried to increase it to 60 seconds, but it has not helped.
>
> Can someone guide me if I need to do any configurations etc. to get past
> this or if this is a bug ?
>
> regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite 2.8 get exception while use batch insert in streaming mode

2020-03-10 Thread Ilya Kasnacheev
Hello!

Unfortunately this looks like a regression in 2.8 :(

I have filed an issue: https://issues.apache.org/jira/browse/IGNITE-12764

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 5, 2020 at 10:45, yangjiajun <1371549...@qq.com>:

> Hello.
>
> The following test code also throws the same exception:
> ps = conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE ON");
>  ps.execute();
>  ps.close();
>
> String sql = "insert INTO  city1(id,name,name1)
> VALUES(?,?,RANDOM_UUID())";
> ps = conn.prepareStatement(sql);
> for (int i = 0; i < 1600; i++) {
> String s1 = String.valueOf(Math.random());
> ps.setInt(1, i);
> ps.setString(2, s1);
> ps.execute();
> }
>
> ps.close();
>
> ps = conn.prepareStatement("set streaming off");
> ps.execute();
> ps.close();
>
> conn.close();
>
> So we can't use the batch execute method and RANDOM_UUID() within streaming
> mode in Ignite 2.8?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: tcp-comm system-critical thread blocked

2020-03-10 Thread Ilya Kasnacheev
Hello!

Please make sure to read
https://apacheignite.readme.io/docs/critical-failures-handling about
configurability, etc.
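
For example, the blocked-worker threshold and the reaction to a critical
failure are configurable (a sketch; values are illustrative):

IgniteConfiguration cfg = new IgniteConfiguration();

// Raise the threshold (in ms) after which a blocked system worker is reported.
cfg.setSystemWorkerBlockedTimeout(30_000L);

// Choose the reaction to critical failures.
cfg.setFailureHandler(new StopNodeFailureHandler());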

Unfortunately, there is too little context to say more.

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 9, 2020 at 18:33, Mitchell Rathbun (BLOOMBERG/ 731 LEX) <mrathb...@bloomberg.net>:

> We have seen the following happen a couple of time recently during periods
> of high load/gc pauses in our system:
>
> 2020-03-02 11:38:56,803 ERROR STDIO
> [tcp-disco-msg-worker-#2%ignite_wingman_2931%] {} Mar 02, 2020 11:38:56 AM
> org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Blocked system-critical thread has been detected. This can lead to
> cluster-wide undefined behaviour [threadName=grid-nio-worker-tcp-comm-3,
> blockedFor=12s]
>
> I know it says this leads to undefined behavior, but I am wondering what
> this thread is for/what the effect of it being blocked is. Also, is this
> timeout something that is configurable?
>


Re: Unable to connect to Ignite Visor Console in Ignite 2.8.0

2020-03-10 Thread Ilya Kasnacheev
Hello!

Actually, this screenshot does contain the relevant error message:
rebalance thread pool size should be < system thread pool size.

This is enforced, and the node will not start (even Visor's daemon node).

Please fix your config to satisfy this constraint:
https://apacheignite.readme.io/docs/thread-pools
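
For example (a sketch; the exact sizes are illustrative):

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setSystemThreadPoolSize(16);
cfg.setRebalanceThreadPoolSize(4); // must be strictly less than the system pool size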

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 6, 2020 at 17:37, Николай Кулагин :

> Joshi,
>
> A ticket has been filed for this issue. You can track its status here
> [1].
>
> [1] https://issues.apache.org/jira/browse/IGNITE-12757
>
Thu, Mar 5, 2020 at 14:48, Kamlesh Joshi :
>
>> Hi Team,
>>
>>
>>
>> I have updated the Ignite cluster to the latest version, 2.8.0, and the update
>> was successful. However, I am unable to connect to the Ignite Visor console.
>> Below is the command I used (the same command was used for earlier versions
>> and worked fine). I am not sure whether I am missing something or there is a
>> defect around it. The Ignite 2.8.0 docs are not available on the site!
>>
>>
>>
>> *$IGNITE_HOME/bin/ignitevisorcmd.sh -cfg="/app/Ignite/visorconfig.xml"*
>>
>>
>>
>>
>>
>>
>>
>> *Thanks and Regards,*
>>
>> *Kamlesh Joshi*
>>
>>
>>
>>
>> "*Confidentiality Warning*: This message and any attachments are
>> intended only for the use of the intended recipient(s), are confidential
>> and may be privileged. If you are not the intended recipient, you are
>> hereby notified that any review, re-transmission, conversion to hard copy,
>> copying, circulation or other use of this message and any attachments is
>> strictly prohibited. If you are not the intended recipient, please notify
>> the sender immediately by return email and delete this message and any
>> attachments from your system.
>>
>> *Virus Warning:* Although the company has taken reasonable precautions
>> to ensure no viruses are present in this email. The company cannot accept
>> responsibility for any loss or damage arising from the use of this email or
>> attachment."
>>
>


Re: page is broken,cannot restore it from wal (2.8.0)

2020-03-10 Thread Ilya Kasnacheev
Hello!

Maybe the PDS is corrupted. If you have sufficient backups, I recommend
deleting the persistence data from this node and re-adding it to the cluster.

Regards,
-- 
Ilya Kasnacheev


Sun, Mar 8, 2020 at 04:20, 18624049226 <18624049...@163.com>:

> hi community,
>
> What are the reasons for the following problem? How can it be avoided?
>
>


Re: Issue in Distributed joins

2020-03-10 Thread Ilya Kasnacheev
Hello again!

You can also frame it in the form of an SQL script (DDL - INSERTs - SELECTs)
if you wish.

Regards,
-- 
Ilya Kasnacheev


Tue, Mar 10, 2020 at 14:30, Ilya Kasnacheev :

> Hello!
>
> Can you please produce a small reproducer project which will, when
> started, bring up a node, populate it with some data and then run those
> queries?
>
> We will surely check.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Mar 9, 2020 at 09:01, DS :
>
>> Hello,
>> I'd appreciate it if you could find time to look into the issue.
>>
>> Regards
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Issue in Distributed joins

2020-03-10 Thread Ilya Kasnacheev
Hello!

Can you please produce a small reproducer project which will, when started,
bring up a node, populate it with some data and then run those queries?

We will surely check.

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 9, 2020 at 09:01, DS :

> Hello,
> I'd appreciate it if you could find time to look into the issue.
>
> Regards
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Spring JDBCTemplate with Ignite's DataStreamer

2020-03-06 Thread Ilya Kasnacheev
Hello!

What did you try? Any errors so far?

I'm not sure that it will work properly if Spring tries to execute any
queries of its own, and you must make sure to terminate the streaming
connection normally.
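
If you do try it, here is a minimal, untested sketch (the URL is the one from
your mail; the table columns are illustrative, and destroying the data source
closes the single streaming connection so buffered rows are flushed):

SingleConnectionDataSource ds = new SingleConnectionDataSource(
    "jdbc:ignite:cfg://cache=PERSON:streaming=true:streamingFlushFrequency=2000"
        + "@file:///opt/ignite/examples/config/myconfig.xml",
    false);
JdbcTemplate jdbc = new JdbcTemplate(ds);

for (int i = 0; i < 100_000; i++)
    jdbc.update("INSERT INTO PERSON (ID, NAME) VALUES (?, ?)", i, "name-" + i);

ds.destroy(); // close the underlying connection to flush and end streaming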

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 5, 2020 at 02:39, narges saleh :

> Hi All,
>
> Is it possible to set up a Spring JdbcTemplate using an Ignite JDBC URL
> connection with the Ignite data streamer enabled (for persistence to cache),
> something similar to the following?
>
> jdbc:ignite:cfg://cache=PERSON:streaming=true:streamingFlushFrequency=2000
> @file:///opt/ignite/examples/config/myconfig.xml
>
> If yes, can you provide an example?
>
> thanks.
>


Re: How to get client username in GridSecurityProcessor implementation

2020-03-06 Thread Ilya Kasnacheev
Hello!

I think you should do securityCtx.subject().login().

Is it available to you?

Regards,
-- 
Ilya Kasnacheev


Tue, Mar 3, 2020 at 05:24, Devin Bost :

> I'm trying to figure out how to obtain the client username from inside the
> GridSecurityProcessor's authorize method, but the only place where I can
> find the correct username is here:
>
> ((UserSecurityContextImpl)
> securityCtx).authContext.credentials().getLogin()
>
> The problem, however, is that authContext is a private field, so I'd
> either need to modify the field permission in a fork of Ignite, or I'd need
> to use reflection to change the accessibility, which will result in very
> bad performance.
>
> Is there another way to obtain the actual username?
>
> Thanks,
> Devin G. Bost
>


Re: Accessing cache from Ignite plugin

2020-03-06 Thread Ilya Kasnacheev
Hello!

This really depends on where exactly you are trying to access it from. Is it
possible that you try to access the cache before the cluster is fully up? Why
not use the Ignite instance instead?

Regards,
-- 
Ilya Kasnacheev


Tue, Mar 3, 2020 at 00:35, devinbost :

> Hi,
>
> I have an Ignite plugin that needs to check one of the Ignite caches with
> every operation.
> The plugin was working fine until I tried using the thin client to access
> one of the caches... Now, I'm just getting "Ignite cluster is unavailable"
> when this line gets run:
>
> ClientCache cache =
> igniteClient.getOrCreateCache("operations-cache");
>
> Is there a better way to access an Ignite cache from within an Ignite
> plugin?
>
> Thanks,
>
> Devin
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite Upgrades

2020-03-06 Thread Ilya Kasnacheev
Hello!

1) No, we currently do not.
2) You can create and destroy caches at runtime; the rest of the configuration
options are usually not modifiable.
3) You can start and stop services at runtime.
4) We maintain native persistence compatibility between minor versions.

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 5, 2020 at 14:09, narges saleh :

> Hi All,
>  I would appreciate your replies to the following questions regarding various
> Ignite upgrade scenarios.
>
> 1) How do I upgrade Ignite without an outage? Does it allow for rolling
> upgrades?
> 2) How do I submit a new configuration file, say as a result of adding
> or modifying some new/existing caches, without an outage?
> 3) How do I replace or add services without an outage?
> 4) How does an upgrade work with native persistence, if there are cache
> changes?
>
> Any documentation?
>
> thanks.
>
>


Re: Yarnclient killApplication method throwing java.nio.channels.ClosedByInterruptException

2020-03-06 Thread Ilya Kasnacheev
Hello!

I'm afraid you will have to figure this out; we don't see a lot of
real-world YARN expertise around.

Regards,
-- 
Ilya Kasnacheev


Fri, Feb 28, 2020 at 17:09, ChandanS :

> While I am trying to kill my Ignite YARN application, the killApplication
> method throws the exception below, and I am not able to kill my YARN job from
> the application.
>
> java.io.IOException: Failed on local exception:
> java.nio.channels.ClosedByInterruptException; Host Details : local host is:
> "xxx.intqa.bigdata.int.thomsonreuters.com/xx.1xx.xx8.2xx"; destination
> host is: "xx.int.westgroup.com":8032;
>
> Code snippet:
> try {
>   logInfo(s">>> Killing existing ignite yarn job with APP ID:
> ${applicationID.toString()}")
>   val yarnClient = getYarnClient(conf)
>   if (yarnClient != null) {
> yarnClient.killApplication(applicationID)
>   }
> } catch {
>   case exp: Exception => {
> logError(s">>> Failed to stop ignite yarn APP: \n$exp")
>   }
> }
>
>
> Below is part of my application log:
>
> 20/02/28 05:05:00 INFO api.StartStandalone: >>> Killing existing ignite
> yarn
> job with APP ID: 1564355610025_539606
> 20/02/28 05:05:00 ERROR api.StartStandalone: >>> Failed to stop ignite yarn
> APP:
> java.io.IOException: Failed on local exception:
> java.nio.channels.ClosedByInterruptException; Host Details : local host is:
> "xx.intqa.bigdata.int.thomsonreuters.com/xx.1xx.xx8.2xx"; destination
> host is: "cx.int.westgroup.com":8032;
>
>
> Need help on how to resolve this issue?
>
>
> Thanks,
> Chandan
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: When trying to activate cluster to use ignite native persistence getting topology update failed error .

2020-03-04 Thread Ilya Kasnacheev
Hello!

By default, information is logged to console (standard out). Please start
your node with -DIGNITE_QUIET=false (or bin/ignite.sh -v) and copy-paste
its output as you try to activate the cluster.

Regards,
-- 
Ilya Kasnacheev


Thu, Feb 27, 2020 at 08:39, Preet :

> I am new to Ignite and don't know how to check the log. When I searched, I
> found that it is under /work/logs, but I think those are static files. I
> don't know why activating the cluster node is failing. I want to use the
> persistence-enabled feature for my IGFS application.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: org.apache.ignite.IgniteException: Invalid message type: -4692

2020-03-04 Thread Ilya Kasnacheev
Hello!

I have trouble understanding your configuration around lines such as



Regards,
-- 
Ilya Kasnacheev


Fri, Feb 28, 2020 at 08:30, Aditya Gupta :

> Hi,
>
> Yes, the server node's Ignite caches are getting updated with feeds all the
> time. We are using the server node in fully in-memory mode with no native
> persistence enabled.
>
> below is the config for server node
>  class="com.rbsfm.fi.risk.aggregation.utility.IgniteStarterBean"
> init-method="startIgnite">
> 
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
>  value="${runtime.dir}/work" />
>  value="${serverName}" />
> 
>  class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>  value="+20" />
>  value="0" />
> 
> 
>
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>  name="addresses" ref="ClusterNodes" />
> 
> 
> 
> 
> 
>  class="org.apache.ignite.configuration.DataStorageConfiguration">
>  value="NONE" />
>  name="defaultDataRegionConfiguration">
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
>  name="name" value="Default_Region" />
>  name="maxSize" value="#{10L * 1024 * 1024 * 1024}" />
>  name="persistenceEnabled" value="false" />
>  name="metricsEnabled" value="true"/>
> 
> 
>  name="systemRegionMaxSize" value="#{512L * 1024 * 1024}"/>
> 
> 
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>  value="+21}" />
>  value="0" />
>  value="1" />
>  name="socketWriteTimeout" value="3" />
> 
> 
> 
>  class="org.apache.ignite.configuration.ClientConnectorConfiguration">
>  />
>  value="0" />
> 
> 
> 
>  class="org.apache.ignite.configuration.ConnectorConfiguration">
>  />
>  value="0" />
> 
> 
> 
> 
> 
> we are using data streamers to load data in caches
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Queue read bug

2020-03-04 Thread Ilya Kasnacheev
Hello!

Can you please create a small stand-alone reproducer project for this
issue, upload it on e.g. github?

Thanks,
-- 
Ilya Kasnacheev


Fri, Feb 28, 2020 at 13:26, Narsi Reddy Nallamilli <narsi.nallami...@gmail.com>:

> Hi,
>
> When a queue is created with a combination of nodeFilter and groupName, as in
> the first block below, Ignite returns null upon reading the same queue, as in
> the second block.
>
> CollectionConfiguration colCfg = new CollectionConfiguration();
> colCfg.setCollocated(true);
> colCfg.setGroupName("queues");
> colCfg.setNodeFilter(igniteSpringBean.cluster().forServers().predicate());
> IgniteQueue queue = igniteSpringBean.queue("queue",10,colCfg);
>
> System.out.println("Queue created..."+igniteSpringBean.queue("queue",0,null));
>
>


Re: Message are not going through

2020-02-27 Thread Ilya Kasnacheev
Hello!

I recommend avoiding any images or rich formatting in your mails.

You can reference logs as an external resource, such as on Pastebin or in a gist.

Regards,
-- 
Ilya Kasnacheev


Thu, Feb 27, 2020 at 10:23, Prasad Bhalerao :

> Hi,
> I am trying to send an email regarding a transaction failure.
> Whenever I type a text message with logs and an exception, my mails do not
> show up in the Ignite user list or the dev list. I must have resent
> the email at least 10 times by now, but no luck. Is there any way to check
> what's going on?
>
>
>
> Thanks,
> Prasad
>


Re: NodeOrder in GridCacheVersion

2020-02-26 Thread Ilya Kasnacheev
Hello!

I don't think this is a user-list discussion; this logging is not aimed at the
end user and you are not supposed to act on it.

Do you have any context for us, such as a reproducer project or complete logs?

Regards,
-- 
Ilya Kasnacheev


Wed, Feb 26, 2020 at 19:13, Prasad Bhalerao :

> Can someone please advise?
>
> On Wed 26 Feb, 2020, 12:23 AM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com wrote:
>
>> Hi,
>>
>>> Ignite Version: 2.6
>>> No of nodes: 4
>>>
>>> I am getting the following exception while committing a transaction.
>>>
>>> Although I am just reading the value from this cache inside the transaction,
>>> and I am sure that the cache and the "cache entry" read are not modified
>>> outside this transaction on any other node.
>>>
>>> So I debugged the code and found out that it fails in the following code on
>>> 2 nodes out of 4.
>>>
>>> GridDhtTxPrepareFuture#checkReadConflict -
>>> GridCacheEntryEx#checkSerializableReadVersion
>>>
>>> The GridCacheVersion values failing the equals check are given below for 2
>>> different caches. I can see that it is failing because of a change in the
>>> nodeOrder of the cache.
>>>
>>> 1) Can someone please explain the significance of the nodeOrder w.r.t. the
>>> grid and cache? When does it change?
>>> 2) How to solve this problem?
>>>
>>> Cache : Addons (Node 2)
>>> serReadVer of entry read inside Transaction: GridCacheVersion
>>> [topVer=194120123, order=4, nodeOrder=2]
>>> version on node3: GridCacheVersion [topVer=194120123, order=4,
>>> nodeOrder=1]
>>>
>>> Cache : Subscription  (Node 3)
>>> serReadVer of entry read inside Transaction:  GridCacheVersion
>>> [topVer=194120123, order=1, nodeOrder=2]
>>> version on node2:  GridCacheVersion [topVer=194120123, order=1,
>>> nodeOrder=10]
>>>
>>>
>>> *EXCEPTION:*
>>>
>>> Caused by:
>>> org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException:
>>> Failed to prepare transaction, read/write conflict
>>>
>>
>>
>>>
>>> Thanks,
>>> Prasad
>>>
>>


Re: When trying to activate cluster to use ignite native persistence getting topology update failed error .

2020-02-26 Thread Ilya Kasnacheev
Hello!

Do you see anything in server nodes' logs?

Regards,
-- 
Ilya Kasnacheev


Wed, Feb 26, 2020 at 14:31, Preet :

> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2787/Screenshot_2020-02-26_at_4.png
> >
> This is the error I received. Please help me resolve it. I want
> to use the native persistence feature.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Re: Sequence with ODBC

2020-02-26 Thread Ilya Kasnacheev
Hello!

I think you can implement e.g. Hi/Lo algorithm in software:

https://en.wikipedia.org/wiki/Hi/Lo_algorithm
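
A minimal sketch of the idea (illustrative; assumes the application can
reserve block numbers from some shared counter, e.g. a row in an Ignite table
updated via the thin client):

import java.util.function.LongSupplier;

// Hands out IDs locally from a block reserved on the shared counter:
// one round trip to the cluster per BLOCK_SIZE keys.
public final class HiLoIdGenerator {
    private static final int BLOCK_SIZE = 1000;

    private final LongSupplier hiSource; // returns the next block number
    private long hi;
    private int lo = BLOCK_SIZE; // forces a block fetch on first use

    public HiLoIdGenerator(LongSupplier hiSource) {
        this.hiSource = hiSource;
    }

    public synchronized long nextId() {
        if (lo >= BLOCK_SIZE) { // current block exhausted: reserve a new one
            hi = hiSource.getAsLong();
            lo = 0;
        }
        return hi * BLOCK_SIZE + lo++;
    }
}

Unlike a UUID, the resulting IDs fit in a 64-bit column.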

Regards,
-- 
Ilya Kasnacheev


Tue, Feb 25, 2020 at 15:36, Abhay Gupta :

> Can you please suggest some alternate way of generating the same in the
> application that is good for a primary key, unlike a UUID, where the issue is
> that it is 128-bit.
>
>
>
> Regards,
>
>
>
> Abhay Gupta
>
>
>
> *From: *Igor Sapego 
> *Sent: *25 February 2020 17:37
> *To: *user 
> *Subject: *Re: Sequence with ODBC
>
>
>
> There is no such option currently, AFAIK
>
>
> Best Regards,
>
> Igor
>
>
>
>
>
> On Tue, Feb 25, 2020 at 3:02 PM Abhay Gupta  wrote:
>
> Hi ,
>
>
>
> Do we have a way to have an auto-increment field in the database for use with
> the thin client / unixODBC? The atomic sequence facility is available in Java
> when the Java Ignite API is used, but the docs do not say whether the same is
> available through the thin client protocol or not.
>
>
>
> Regards,
>
>
>
> Abhay Gupta
>
>
>
>
>


Re: org.apache.ignite.IgniteException: Invalid message type: -4692

2020-02-26 Thread Ilya Kasnacheev
Hello!

Anything else happening in the meantime?

This looks like something other than Ignite has connected to the Communication
port.

Regards,
-- 
Ilya Kasnacheev


Wed, Feb 26, 2020 at 12:50, adiagarwal29 :

> Hi guys, I am new to the forum and I am getting the below exception
> intermittently on the server node while launching the client node.
> I have one server node and one client node, and I am getting this exception
> from the server node.
> I am using version 2.7.5.
>
> 2020-02-26 06:37:24.045 +
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi INFO
> [grid-nio-worker-tcp-comm-1-#25%rates-marple-pnlswaps%] : Accepted incoming
> communication connection [locAddr=/28.97.136.58:20121,
> rmtAddr=/28.97.136.58:57160]
> 2020-02-26 06:37:24,045 ERROR GridDirectParser Failed to read message
> [msg=null, buf=java.nio.DirectByteBuffer[pos=6 lim=420 cap=32768],
> reader=null, ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker
> [super=AbstractNioClientWorker [idx=1, bytesRcvd=1260, bytesSent=0,
> bytesRcvd0=420, bytesSent0=0, select=true, super=GridWorker
> [name=grid-nio-worker-tcp-comm-1, igniteInstanceName=rates-marple-pnlswaps,
> finished=false, heartbeatTs=1582699044040, hashCode=1259807992,
> interrupted=false,
> runner=grid-nio-worker-tcp-comm-1-#25%rates-marple-pnlswaps%]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=6 lim=420 cap=32768],
> inRecovery=null,
> outRecovery=null, super=GridNioSessionImpl [locAddr=/28.97.136.58:20121,
> rmtAddr=/28.97.136.58:57160, createTime=1582699044040, closeTime=0,
> bytesSent=0, bytesRcvd=420, bytesSent0=0, bytesRcvd0=420,
> sndSchedTime=1582699044040, lastSndTime=1582699044040,
> lastRcvTime=1582699044040, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=o.a.i.i.util.nio.GridDirectParser@5ddc3911, directMode=true],
> GridConnectionBytesVerifyFilter], accepted=true, markedForClose=false]]]
> class org.apache.ignite.IgniteException: Invalid message type: -4692
> at
>
> org.apache.ignite.internal.managers.communication.GridIoMessageFactory.create(GridIoMessageFactory.java:1140)
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$6.create(TcpCommunicationSpi.java:2305)
> at
>
> org.apache.ignite.internal.util.nio.GridDirectParser.decode(GridDirectParser.java:81)
> at
>
> org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at
>
> org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:123)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3550)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:1310)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2386)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2153)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1794)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ClusterTopologyServerNotFoundException

2020-02-26 Thread Ilya Kasnacheev
Hello!

Unfortunately, there is not enough information. What is the partition mapping
of this cache? What is the state of the cluster?

Regards.
-- 
Ilya Kasnacheev


Tue, Feb 25, 2020 at 11:09, prudhvibiruda :

> Hi,
> I am also getting the same exception with CacheMode.REPLICATED.
> But my requirement is that my Ignite node shouldn't wait for other nodes in
> the cluster.
> In our case, even when one node is down, the other should keep working;
> that's why we didn't define the baseline topology.
> Can you suggest an alternate solution for this?
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2790/Capture.png>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ERROR: h2 Unsupported connection setting "MULTI_THREADED"

2020-02-25 Thread Ilya Kasnacheev
Hello!

I recommend tweaking your dependency imports to make sure you exclude all
H2 versions but the one needed by Apache Ignite.

With Maven, mvn dependency:tree to the rescue.

H2 is currently the heart of Ignite's SQL and, as such, it is not
negotiable.

Regards,
-- 
Ilya Kasnacheev


Tue, Feb 25, 2020 at 16:19, Andrew Munn :

> Yes, I'm using Spring Boot. Can Ignite be updated to work with the latest
> H2?
>
> On Fri, Feb 21, 2020, 6:25 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I've heard about issues with e.g. Spring Boot overriding h2 database
>> version and breaking our runtime. I'm not sure who else does that.
>>
>> Regards,
>>
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, Feb 20, 2020 at 19:24, Andrew Munn :
>>
>>> Thanks.  Adding this runtime dependency to build.gradle fixed it:
>>>
>>> dependencies {
>>> runtime("com.h2database:h2:1.4.197")
>>> ...
>>> compile group: 'org.apache.ignite', name: 'ignite-spring', version:
>>> '2.7.6'
>>> compile group: 'org.apache.ignite', name: 'ignite-core', version:
>>> '2.7.6'
>>> }
>>>
>>> But I suspect this should be getting enforced automatically if using h2
>>> ver1.4.200 breaks something.  Am I specifying the Ignite dependency
>>> incorrectly?
>>>
>>>
>>> On Thu, Feb 20, 2020 at 4:08 AM Taras Ledkov 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Ignite uses H2 version 1.4.197 (see [1]).
>>>>
>>>>
>>>> [1]. https://github.com/apache/ignite/blob/master/parent/pom.xml#L74
>>>>
>>>> On 20.02.2020 4:36, Andrew Munn wrote:
>>>> > I'm building/running my client app with Gradle and I'm seeing this
>>>> > error. Am I overriding the Ignite H2 fork with the real H2 or
>>>> > something? It appears I have the latest H2:
>>>> >
>>>> > [.gradle]$ find ./ -name *h2*
>>>> > ./caches/modules-2/metadata-2.82/descriptors/com.h2database
>>>> > ./caches/modules-2/metadata-2.82/descriptors/com.h2database/h2
>>>> > ./caches/modules-2/files-2.1/com.h2database
>>>> > ./caches/modules-2/files-2.1/com.h2database/h2
>>>> >
>>>> ./caches/modules-2/files-2.1/com.h2database/h2/1.4.200/6178ecda6e9fea8739a3708729efbffd88be43e3/h2-1.4.200.pom
>>>> >
>>>> ./caches/modules-2/files-2.1/com.h2database/h2/1.4.200/f7533fe7cb8e99c87a43d325a77b4b678ad9031a/h2-1.4.200.jar
>>>> >
>>>> >
>>>> >
>>>> > 2020-02-19 19:59:28.229 ERROR 102356 --- [   main] o.a.i.internal.IgniteKernal%dev-cluster : Exception during start processors, node will be stopped and close connections
>>>> > org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to initialize system DB connection: jdbc:h2:mem:b52dce26-ba01-4051-9130-e087e19fab4f;LOCK_MODE=3;MULTI_THREADED=1;DB_CLOSE_ON_EXIT=FALSE;DEFAULT_LOCK_TIMEOUT=1;FUNCTIONS_IN_SCHEMA=true;OPTIMIZE_REUSE_RESULTS=0;QUERY_CACHE_SIZE=0;MAX_OPERATION_MEMORY=0;BATCH_JOINS=1;ROW_FACTORY="org.apache.ignite.internal.processors.query.h2.opt.GridH2PlainRowFactory";DEFAULT_TABLE_ENGINE=org.apache.ignite.internal.processors.query.h2.opt.GridH2DefaultTableEngine
>>>> > at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.systemConnection(IgniteH2Indexing.java:434) ~[ignite-indexing-2.7.6.jar:2.7.6]
>>>> > at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSystemStatement(IgniteH2Indexing.java:699) ~[ignite-indexing-2.7.6.jar:2.7.6]
>>>> > at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.createSchema0(IgniteH2Indexing.java:646) ~[ignite-indexing-2.7.6.jar:2.7.6]
>>>> > at org.a

Re: Read through not working as expected in case of Replicated cache

2020-02-25 Thread Ilya Kasnacheev
Hello!

I think this is by design. You may suggest edits to the documentation on
readme.io.

Regards,
-- 
Ilya Kasnacheev


Mon, 24 Feb 2020 at 17:28, Prasad Bhalerao :

> Hi,
>
> Is this a bug, or is the cache designed to work this way?
>
> If it is by design, can this behavior be documented in the Ignite
> documentation?
>
> Thanks,
> Prasad
>
> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I have discussed this with fellow Ignite developers, and they say read-
>> through for a replicated cache works only when either:
>>
>> - writeThrough is enabled and all changes go through it, or
>> - the database contents do not change for already-read keys.
>>
>> I can see that neither is met in your case, so the behavior you are seeing
>> is expected.
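
A minimal sketch of the first option (assuming the store factory below also
produces a store that implements the CacheWriter side, which the quoted code
does not show):

    CacheConfiguration<DefaultDataAffinityKey, NetworkData> cfg =
        new CacheConfiguration<>("networkCache"); // placeholder name
    cfg.setCacheMode(CacheMode.REPLICATED);
    cfg.setReadThrough(true);
    cfg.setWriteThrough(true); // removes/updates now reach the store too
    cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(NetworkDataCacheLoader.class));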
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Tue, 29 Oct 2019 at 18:18, Akash Shinde :
>>
>>> I am using Ignite 2.6 version.
>>>
>>> I am starting 3 server nodes with a replicated cache and 1 client node.
>>> The cache configuration is as follows: read-through is enabled but
>>> write-through is false. Load-by-key is implemented in the cache loader as
>>> given below.
>>>
>>> Steps to reproduce the issue (condensed into a sketch after this list):
>>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
>>> (The entry is removed from the cache only; it is still present in the DB
>>> because write-through is false.)
>>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
>>> 3) Now query the cache from the client node. Every invocation returns
>>> different results: sometimes the reloaded entry is there, sometimes it is
>>> not.
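
A condensed sketch of those steps (hedged: 'cache' and 'key' are placeholders
for the REPLICATED cache and a key that exists in the DB):

    cache.remove(key);              // step 1: gone from cache, still in DB
    NetworkData v = cache.get(key); // step 2: read-through reloads from DB
    // step 3: repeat get()/SQL queries from the client node and compare
    // results across invocations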
>>>
>>> It looks like read-through does not replicate the reloaded entry to all
>>> nodes in the case of a REPLICATED cache.
>>>
>>> To investigate further, I changed the cache mode to PARTITIONED and set
>>> the backup count to 3, i.e. the total number of nodes in the cluster (to
>>> mimic REPLICATED behavior). This time it worked as expected: every
>>> invocation returned the same result, with the reloaded entry.
>>>
>>> private CacheConfiguration networkCacheCfg() {
>>>     CacheConfiguration networkCacheCfg =
>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>>     networkCacheCfg.setWriteThrough(false);
>>>     networkCacheCfg.setReadThrough(true);
>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>>     networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>>>     //networkCacheCfg.setBackups(3);
>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
>>>     Factory storeFactory = FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, NetworkData.class);
>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
>>>     RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
>>>     affinityFunction.setExcludeNeighbors(false);
>>>     networkCacheCfg.setAffinity(affinityFunction);
>>>     networkCacheCfg.setStatisticsEnabled(true);
>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
>>>     return networkCacheCfg;
>>> }
>>>
>>> @Override
>>> public V load(K k) throws CacheLoaderException {
>>> V value = null;
>>> DataSource dataSource = springCtx.getBean(DataSource.class);
>>> try (Connection connection = dataSource.getConnection();
>>>  PreparedStatement statement = 
>>> connection.prepareStatement(loadByKeySql)) {
>>> //statement.setObject(1, k.getId());
>>> setPreparedStatement(statement,k);
>>> try (ResultSet rs = statement.executeQuery()) {
>>> if (rs.next()) {
>>> value = rowMapper.mapRow(rs, 0);
>>> }
>>> }
>>> } catch (SQLException e) {
>>>
>>> throw new CacheLoaderException(e.getMessage(), e);
>>> }
>>>
>>> return value;
>>> }
>>>
>>>
>>> Thanks,
>>>
>>> Akash
>>>
>>>


Re: Connection reset by peer

2020-02-21 Thread Ilya Kasnacheev
Hello!

[01:07:25,966][WARNING][tcp-comm-worker-#1%SubScriptionCluster%][TcpCommunicationSpi]
Connect timed out (consider increasing 'failureDetectionTimeout'
configuration property) [addr=/172.16.99.27:47100,
failureDetectionTimeout=10]
Failed to send message (node
may have left the grid or TCP connection cannot be established due to
firewall issues)

Looks like massive network problems to me, or very long GC.

Maybe your firewall is too restrictive? Please note that we expect that all
client and server nodes can open connections to each other.
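
The warning quoted above suggests increasing 'failureDetectionTimeout'; a
minimal sketch of doing that on every node (the value here is an assumption,
not a recommendation):

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setFailureDetectionTimeout(30_000L); // default is 10 000 ms
    Ignite ignite = Ignition.start(cfg);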

Regards,
-- 
Ilya Kasnacheev


Fri, 21 Feb 2020 at 06:16, hulitao198758 :

> Detailed error message:
>
> Established outgoing communication connection [locAddr=/10.122.64.129:40870, rmtAddr=/10.122.64.128:48339]
>
> [01:01:59,775][SEVERE][grid-nio-worker-tcp-comm-1-#48%SubScriptionCluster%][TcpCommunicationSpi]
> Failed to process selector key [ses=GridSelectorNioSessionImpl
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=1,
> bytesRcvd=1151627, bytesSent=2289496, bytesRcvd0=64, bytesSent0=537,
> select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-1,
> igniteInstanceName=SubScriptionCluster, finished=false, hashCode=440471613,
> interrupted=false,
> runner=grid-nio-worker-tcp-comm-1-#48%SubScriptionCluster%]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> inRecovery=GridNioRecoveryDescriptor [acked=2651, resendCnt=0, rcvCnt=2752,
> sentCnt=2652, reserved=true, lastAck=2752, nodeLeft=false,
> node=TcpDiscoveryNode [id=09e38b03-ea63-47bf-a27b-661a0f0ab8d8,
> addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.16.99.27],
> sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /172.16.99.27:0],
> discPort=0, order=44, intOrder=26, lastExchangeTime=1581939704719,
> loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=true],
> connected=true,
> connectCnt=0, queueLimit=4096, reserveCnt=225, pairedConnections=false],
> outRecovery=GridNioRecoveryDescriptor [acked=2651, resendCnt=0,
> rcvCnt=2752,
> sentCnt=2652, reserved=true, lastAck=2752, nodeLeft=false,
> node=TcpDiscoveryNode [id=09e38b03-ea63-47bf-a27b-661a0f0ab8d8,
> addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.16.99.27],
> sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /172.16.99.27:0],
> discPort=0, order=44, intOrder=26, lastExchangeTime=1581939704719,
> loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=true],
> connected=true,
> connectCnt=0, queueLimit=4096, reserveCnt=225, pairedConnections=false],
> super=GridNioSessionImpl [locAddr=/10.122.64.129:48339,
> rmtAddr=/10.122.23.179:33052, createTime=1582183927690, closeTime=0,
> bytesSent=33096, bytesRcvd=29254, bytesSent0=211, bytesRcvd0=0,
> sndSchedTime=1582246566604, lastSndTime=1582246918763,
> lastRcvTime=1582246566604, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=o.a.i.i.util.nio.GridDirectParser@7419c93, directMode=true],
> GridConnectionBytesVerifyFilter], accepted=true]]]
> java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:1250)
> at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
> at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
> at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:748)
>
> [01:01:59,776][WARNING][grid-nio-worker-tcp-comm-1-#48%SubScriptionCluster%][TcpCommunicationSpi]
> Closing NIO session because of unhandled exception [cls=class
> o.a.i.i.util.nio.GridNioException, msg=Connection reset by peer]
>
> [01:02:14,897][WARNING][tcp-comm-worker-#1%SubScriptionCluster%][TcpCommunicationSpi]
> Connect timed out (consider increasing 'failureDetectionTimeout'
> configuration property) [addr=/172.16.99.27:47100,
> failureDetectionTimeout=10]
>
> [01:02:14,898][WARNING][tcp-comm-worker-#1%SubScriptionCluster%][TcpCommunicationSpi]
> Connect timed out (consider increasing 'failureDetectionTimeout'
> configuration prop
