Re: OOM

2023-07-12 Thread Ilya Korol
What kind of OOM have you faced? Lack of free memory in the data region or 
in the Java heap?


Consider sharing some logs.

On 12.07.2023 18:19, Arunima Barik wrote:

I have enabled Page eviction to Random-2-LRU for default region

Still, whenever I write a big Spark DataFrame to Ignite,
like spark_df.write.format("ignite"),
it throws a Java OOM error.

Why? Can anyone please tell me?

Regards
Arunima Barik
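
As a side note, page eviction frees pages in the off-heap data region only, so it will not prevent a Java heap OOM. For reference, a Random-2-LRU default region is typically configured along these lines (a Spring XML sketch; the size shown is illustrative):

```xml
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="defaultDataRegionConfiguration">
      <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
        <!-- Illustrative cap; eviction kicks in as the region fills up -->
        <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
        <!-- Evicts data pages from this off-heap region, not the JVM heap -->
        <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
      </bean>
    </property>
  </bean>
</property>
```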


Re: How to set -DIGNITE_QUIET=false in service.sh?

2023-02-01 Thread Ilya Korol
As a workaround, you can try adding the *-v* option to ignite.sh; this 
should enable verbose (i.e. non-quiet) logging.
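
For illustration (paths and config file name assumed), the verbose flag is passed directly to ignite.sh:

```shell
# Sketch: -v asks ignite.sh for verbose (non-quiet) logging, overriding
# the -DIGNITE_QUIET=true default added by parseargs.sh.
/usr/share/apache-ignite/bin/ignite.sh -v /etc/apache-ignite/default-config.xml
```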


01.02.2023 17:38, Jeremy McMillan wrote:
This seems to be at the level of a high quality bug report, and has 
enough detail that a fix could probably be implemented and submitted 
as a PR fairly easily. Are you familiar with the contributor process?


On Wed, Feb 1, 2023, 04:56 Айсина Роза Мунеровна 
 wrote:


Hola!
We run Ignite via service from DEB package and use service.sh to
start it.

In the service file we set env *JVM_OPTS* like this:

*Environment="JVM_OPTS=-server -Xms10g -Xmx10g … …
 -DIGNITE_QUIET=false”*

The problem is that *parseargs.sh* (look here) 
has the default option *-DIGNITE_QUIET=true*, 
*which is not propagated in service.sh*:

/usr/share/apache-ignite/bin/ignite.sh /etc/apache-ignite/$2 &
echo $! >> /var/run/apache-ignite/$2.pid

In service.sh we can pass only the configuration, so the resulting 
options look like this:

*[2023-02-01T09:42:43,105][INFO ][main][IgniteKernal] VM
arguments: […,  -Xms10g, ..., -DIGNITE_QUIET=false,
-Dfile.encoding=UTF-8, -DIGNITE_QUIET=true,
-DIGNITE_SUCCESS_FILE=..., -DIGNITE_HOME=..., -DIGNITE_PROG_NAME=...]*

Is there any way to set -DIGNITE_QUIET=false for service.sh 
without manually patching it?
Maybe there is a higher-priority option in the configuration?

Thanks!

*--*

*Roza Aysina*

Senior Software Developer

*SberMarket* | Delivery from your favorite stores

Email: roza.ays...@sbermarket.ru 

Web: sbermarket.ru 

*CONFIDENTIALITY NOTICE:* This email and any files attached to it
are confidential. If you are not the intended recipient you are
notified that using, copying, distributing or taking any action in
reliance on the contents of this information is strictly
prohibited. If you have received this email in error please notify
the sender and delete this email.


Re: How to Create a Case-Insensitive Index

2023-01-13 Thread Ilya Korol
What kind of exception? Do you have a stacktrace? Do you observe any 
errors/messages in the server logs? Anyway, Ignite might not support this 
feature yet.

Feel free to submit a Jira with a feature request.

13.01.2023 14:34, Biraj Deb wrote:

I have tried
CREATE INDEX title_idx ON books (lower(title));
but I am getting an SQLException.
Basically I want to configure the index in my Ignite XML Configuration.





referenceId
customerId






Above I have defined one index, and now I need to create one more 
index on the name field, which should be case-insensitive.


On Thu, Jan 12, 2023 at 9:46 PM Ilya Korol  wrote:

Hi, usually this can be achieved by using the lower() function in the
index definition. I guess it should be something like:

CREATE INDEX title_idx ON books (lower(title));

I'm not sure whether Ignite supports such a feature, but you can
give it a try.
Btw, to exploit this index you would also have to use the lower()
function in your queries.

Useful link:

https://use-the-index-luke.com/sql/where-clause/functions/case-insensitive-search

12.01.2023 17:56, Biraj Deb wrote:
> Hello,
> I am writing to inquire about creating a case-insensitive index in
> Ignite's XML Configuration. I have been trying to implement this
> feature in my current project, but I am having some difficulty.
> I understand that ignite supports case-sensitive indexing by
default,
> but I would like to know if there is a way to configure it for
> case-insensitive indexing. If anyone has experience with this or
can
> point me in the right direction, it would be greatly appreciated.
> Thank you for your time and expertise.
> Best regards
> Biraj Deb


Re: How to Create a Case-Insensitive Index

2023-01-12 Thread Ilya Korol
Hi, usually this can be achieved by using the lower() function in the 
index definition. I guess it should be something like:


CREATE INDEX title_idx ON books (lower(title));

I'm not sure whether Ignite supports such a feature, but you can give it a 
try.
Btw, to exploit this index you would also have to use the lower() function 
in your queries.


Useful link: 
https://use-the-index-luke.com/sql/where-clause/functions/case-insensitive-search
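
Put together, the functional index and a query shaped to use it would be (a sketch using H2-style syntax, which Ignite may or may not accept):

```sql
-- Functional index on the lowercased title (if the engine supports it)
CREATE INDEX title_idx ON books (lower(title));

-- The query must apply the same function for the index to be usable
SELECT id, title FROM books WHERE lower(title) = 'moby dick';
```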


12.01.2023 17:56, Biraj Deb wrote:

Hello,
I am writing to inquire about creating a case-insensitive index in 
Ignite's XML Configuration. I have been trying to implement this 
feature in my current project, but I am having some difficulty.
I understand that ignite supports case-sensitive indexing by default, 
but I would like to know if there is a way to configure it for 
case-insensitive indexing. If anyone has experience with this or can 
point me in the right direction, it would be greatly appreciated.

Thank you for your time and expertise.
Best regards
Biraj Deb


Re: Multiple key mapping to same bean

2022-10-19 Thread Ilya Korol
In this case you can also try Ignite Services 
(https://ignite.apache.org/docs/latest/services/services). In a service 
you would implement and hide all the logic for resolving the required value.
The service itself can be executed on data nodes, thus avoiding round 
trips between client and server nodes.


19.10.2022 08:05, Aravind J wrote:

Hi ,

Thank you very much for your response.

Do you use a multi-node cluster? Yes, we are using a multi-node cluster.

When I said beans, I meant simple POJOs.

Thank you for confirming that it creates duplicate entries; in fact, a 
test also confirmed it. The problem with the two-cache approach you 
suggested is that it requires 2 GET requests: if the average GET latency 
is 10 ms, two gets double it to 20 ms. We will either require some sort 
of server-side function or a mapping like 




On Tue, 18 Oct 2022 at 21:49, Ilya Korol  wrote:

Hi, do you use a multi-node cluster? What do you mean by 'beans': a
simple POJO or a Spring bean?

I don't think it is a good idea to store Spring beans in an Ignite
cache, because all data that goes to the cache is transformed to
so-called BinaryObjects. (However, you can give it a try with on-heap
caching:
https://ignite.apache.org/docs/latest/configuring-caches/on-heap-caching.)

If we're talking about storing simple POJOs, I doubt that Ignite
caches will realize that you want to avoid duplication, so internally
in the cache this single value object (that was shared in the Map) will
be transformed to multiple BinaryObjects (that will have exactly the
same content). You can try to use two caches C1 and C2, where C1
actually stores the intermediate key to a unique value instance, and C2
stores the key -> intermediate_key mapping.

On 2022/10/14 06:31:55 Aravind J wrote:
 > Hi ,
 >
 > We had a requirement to make multiple keys mapping to the same
bean in
 > Ignite cache , is it possible ? As SQL query performance is not
matching
 > with get by key , we are looking for possible alternatives . If
we use
 > , putAll(Map map) with different keys
and same
 > bean , how will bean get persisted , will it be duplicated in
memory ?
 >


RE: Multiple key mapping to same bean

2022-10-18 Thread Ilya Korol
Hi, do you use a multi-node cluster? What do you mean by 'beans': a 
simple POJO or a Spring bean?


I don't think it is a good idea to store Spring beans in an Ignite 
cache, because all data that goes to the cache is transformed to 
so-called BinaryObjects. (However, you can give it a try with on-heap caching: 
https://ignite.apache.org/docs/latest/configuring-caches/on-heap-caching.)


If we're talking about storing simple POJOs, I doubt that Ignite caches 
will realize that you want to avoid duplication, so internally in the cache 
this single value object (that was shared in the Map) will be transformed to 
multiple BinaryObjects (that will have exactly the same content). You can 
try to use two caches C1 and C2, where C1 actually stores the intermediate 
key to a unique value instance, and C2 stores the key -> intermediate_key 
mapping.
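
The two-cache indirection can be sketched with plain maps standing in for the Ignite caches (the names here are illustrative, not Ignite APIs):

```java
import java.util.HashMap;
import java.util.Map;

// C2: alias key -> intermediate (canonical) key
Map<String, String> aliasToCanonical = new HashMap<>();
// C1: canonical key -> the single shared value
Map<String, String> canonicalToValue = new HashMap<>();

canonicalToValue.put("canonical-1", "sharedBeanPayload");
aliasToCanonical.put("keyA", "canonical-1");
aliasToCanonical.put("keyB", "canonical-1");

// Each lookup costs two GETs, but the value is stored exactly once.
String viaA = canonicalToValue.get(aliasToCanonical.get("keyA"));
String viaB = canonicalToValue.get(aliasToCanonical.get("keyB"));
```

With real caches, the double GET can be collapsed into a single server-side hop by wrapping the lookup in a compute task or a service executed on the data nodes, as suggested above.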


On 2022/10/14 06:31:55 Aravind J wrote:
> Hi ,
>
> We had a requirement to make multiple keys mapping to the same bean in
> Ignite cache , is it possible ? As SQL query performance is not matching
> with get by key , we are looking for possible alternatives . If we use
> , putAll(Map map) with different keys and same
> bean , how will bean get persisted , will it be duplicated in memory ?
>


RE: Ignite node is down

2022-08-24 Thread Ilya Korol

Hi,

According to logs you have the following:

[17:02:04,774][INFO][grid-nio-worker-tcp-comm-0-#40%MATCHERWORKER%][TcpCommunicationSpi] 
Accepted incoming communication connection 
[locAddr=/xx.xx.xxx.IP2:47100, rmtAddr=/xx.xx.xxx.IP1:52166]
*[18:54:12,125][SEVERE][exchange-worker-#65%MATCHERWORKER%][G] Blocked 
system-critical thread has been detected. This can lead to cluster-wide 
undefined behaviour [workerName=grid-nio-worker-tcp-comm-0, 
threadName=grid-nio-worker-tcp-comm-0-#40%MATCHERWORKER%, blockedFor=10s]*
[18:54:14,059][WARNING][exchange-worker-#65%MATCHERWORKER%][G] Thread 
[name="grid-nio-worker-tcp-comm-0-#40%MATCHERWORKER%", id=63, 
state=RUNNABLE, blockCnt=0, waitCnt=0]


[18:54:14,062][INFO][tcp-disco-sock-reader-[d6d591bd 
xx.xx.xxx.IP4:48633]-#8%MATCHERWORKER%][TcpDiscoverySpi] Finished 
serving remote node connection [rmtAddr=/xx.xx.xxx.IP4:48633, rmtPort=48633
[18:54:14,067][INFO][tcp-disco-srvr-[:47500]-#3%MATCHERWORKER%][TcpDiscoverySpi] 
TCP discovery accepted incoming connection [rmtAddr=/xx.xx.xxx.IP3, 
rmtPort=39939]
[18:54:14,067][INFO][tcp-disco-srvr-[:47500]-#3%MATCHERWORKER%][TcpDiscoverySpi] 
TCP discovery spawning a new thread for connection 
[rmtAddr=/xx.xx.xxx.IP3, rmtPort=39939]
[18:54:14,068][INFO][tcp-disco-sock-reader-[]-#11%MATCHERWORKER%][TcpDiscoverySpi] 
Started serving remote node connection [rmtAddr=/xx.xx.xxx.IP3:39939, 
rmtPort=39939]
[18:54:14,072][INFO][tcp-disco-sock-reader-[439d63a3 
xx.xx.xxx.IP3:39939]-#11%MATCHERWORKER%][TcpDiscoverySpi] Initialized 
connection with remote server node 
[nodeId=439d63a3-4d24-4e53-9507-581ce7e2347e, rmtAddr=/xx.xx.xxx.IP3:39939]
[18:54:14,075][INFO][tcp-disco-sock-reader-[439d63a3 
xx.xx.xxx.IP3:39939]-#11%MATCHERWORKER%][TcpDiscoverySpi] Finished 
serving remote node connection [rmtAddr=/xx.xx.xxx.IP3:39939, rmtPort=39939
[18:54:14,097][WARNING][exchange-worker-#65%MATCHERWORKER%][] Possible 
failure suppressed accordingly to a configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet 
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], 
failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class 
o.a.i.IgniteException: GridWorker [name=grid-nio-worker-tcp-comm-0, 
igniteInstanceName=MATCHERWORKER, finished=false, 
heartbeatTs=1660915454055]]]
class org.apache.ignite.IgniteException: GridWorker 
[name=grid-nio-worker-tcp-comm-0, igniteInstanceName=MATCHERWORKER, 
finished=false, heartbeatTs=1660915454055]
    at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$3.apply(IgnitionEx.java:1810)
    at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$3.apply(IgnitionEx.java:1805)
    at 
org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:234)
    at 
org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
    at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3096)
    at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3063)
    at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)

    at java.base/java.lang.Thread.run(Thread.java:829)

It would be great to have a thread dump for the corresponding time, to see 
what kind of work *grid-nio-worker-tcp-comm-0-#40%MATCHERWORKER%* was 
performing. (Actually, it's better to have logs and thread dumps for the 
corresponding time from the whole cluster.)
You mentioned that this is one of the worker nodes; what does that mean? 
Do you run some specific tasks there, like IgniteCompute tasks?


Meanwhile, you can also try updating your setup to a modern version 
like 2.13.
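
Thread dumps can be captured with the standard JDK tools while the node is blocked (PID placeholder to fill in):

```shell
jps -l                                        # find the Ignite JVM's PID
jstack -l <ignite-pid> > ignite-threads.txt   # full stacks with lock info
```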




On  2022/08/25 05:04:43 BEELA GAYATRI via user wrote:
> TCS Confidential
>
> Dear Team,
>
>
> We have 4 worker nodes in the baseline topology. Every time, one of the 
nodes goes down with the message "Node is out of topology (probably, 
due to short-time network problems)". The Ignite log and configuration are 
attached to the mail.

> Here are a few queries:
>
> 1. Why does one of the nodes go down with that error every time?
>
> 2. If a node goes down, is there any way it can be started 
automatically without manual intervention?
>
> Thanks & Regards,
> Gayatri Beela
>
>
> TCS Confidential
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message.

RE: ignite client can not reconnect to ignite Kubernetes cluster,after pod restart

2022-06-21 Thread Ilya Korol

Hi,

Please take a look at 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/client/ClientAddressFinder.html; 

according to this, ThinClientKubernetesAddressFinder should refresh the 
address list on client connection failure. Alternatively, you can try to set 
*partitionAwareness = true* in *ClientConfiguration*, which should force 
the IP finder to refresh the address list proactively.
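
As a side note, the bounded-retry pattern used in the quoted code below can be sketched in plain Java (no Ignite APIs; the flaky callable simulates a client call that fails until the address list is refreshed):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

int retryTimes = 5;
AtomicInteger attempts = new AtomicInteger();

// Simulated client call: fails twice, then succeeds.
Callable<String> flaky = () -> {
    if (attempts.incrementAndGet() < 3)
        throw new IllegalStateException("Connection timed out");
    return "cache-handle";
};

String result = null;
for (int i = 0; i < retryTimes && result == null; i++) {
    try {
        result = flaky.call();
    } catch (Exception e) {
        // Real code would log the failure and sleep/back off here.
    }
}
```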


On 2022/06/22 01:53:38 f cad wrote:
> below is the client code config:
> KubernetesConnectionConfiguration kcfg = new KubernetesConnectionConfiguration();
> kcfg.setNamespace(igniteK8sNameSpace);
> kcfg.setServiceName(igniteK8sServiceName);
> cfg.setAddressesFinder(new ThinClientKubernetesAddressFinder(kcfg));
> cfg.setRetryPolicy(new ClientRetryAllPolicy());
>
>
> after ignite pod restart
>
> the client throws org.apache.ignite.client.ClientConnectionException:
> Connection timed out
> at 
org.apache.ignite.internal.client.thin.io.gridnioserver.GridNioClientConnectionMultiplexer.open(GridNioClientConnectionMultiplexer.java:144)
> at 
org.apache.ignite.internal.client.thin.TcpClientChannel.(TcpClientChannel.java:178)
> at 
org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.getOrCreateChannel(ReliableChannel.java:917)
> at 
org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.getOrCreateChannel(ReliableChannel.java:898)
> at 
org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.access$200(ReliableChannel.java:847)
> at 
org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:759)
> at 
org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:731)
> at 
org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:167)
> at 
org.apache.ignite.internal.client.thin.ReliableChannel.request(ReliableChannel.java:288)
> at 
org.apache.ignite.internal.client.thin.TcpIgniteClient.getOrCreateCache(TcpIgniteClient.java:185)

>
> and I use retry to reconnect and print
> clientConfiguration.getAddressesFinder().getAddresses(); the address is the
> pod address, but the client does not reconnect
>
> while (retryTimeTmp < retryTimes) {
> try {
> return igniteClient.getOrCreateCache(new
> ClientCacheConfiguration()
> .setName(cacheName)
> .setAtomicityMode(TRANSACTIONAL)
> .setCacheMode(PARTITIONED)
> .setBackups(2)
> .setWriteSynchronizationMode(PRIMARY_SYNC));
> }catch (Exception e) {
> LOGGER.error("get cache [{}] not success", cacheName, e);
> LOGGER.error("get address info [{}], ipfinder [{}]",
> clientConfiguration.getAddresses(),
> clientConfiguration.getAddressesFinder().getAddresses());
>
> retrySleep();
> } finally {
> retryTimeTmp++;
> }
>

RE: Replicated cache in C++

2022-06-21 Thread Ilya Korol

Hi,

The asterisk (*) in a cache name is just a flag telling Ignite not to 
instantiate the cache but instead register it as a template, which for 
example can be used later for SQL table creation:


CREATE TABLE IF NOT EXISTS Person (
  id int,
  city_id int,
  name varchar,
  age int,
  company varchar,
  PRIMARY KEY (id, city_id)
) WITH "template=partitioned,backups=1,affinity_key=city_id,key_type=PersonKey,value_type=MyPerson";


Since you are going to create just a simple REPLICATED cache, there is 
no need to define a cache template first.
If you need to create multiple caches with a similar config, just write 
some kind of factory method in C++ that returns a properly formed cache 
configuration, which can be tuned as required on a per-cache basis.
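
For completeness, a template is registered like an ordinary cache whose name ends with an asterisk (a Spring XML sketch; the template name is illustrative):

```xml
<property name="cacheConfiguration">
  <list>
    <!-- The trailing '*' registers this as a template, not a real cache -->
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
      <property name="name" value="myReplicatedTemplate*"/>
      <property name="cacheMode" value="REPLICATED"/>
    </bean>
  </list>
</property>
```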



On 2022/06/21 11:46:19 "F.D." wrote:
> Hi Igniters,
> I'd like to create a new replicated cache using C++. I suppose I'll 
have to

> use a cache template, but it's not clear, for me, how to set the name of
> the template... for example the meaning of '*'.
>
> Thanks,
> F.D.
>


Re: Disable SIGTERM processing in Ignite

2022-06-21 Thread Ilya Korol

Hi, try setting the Java system property IGNITE_NO_SHUTDOWN_HOOK=true.

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteSystemProperties.html#IGNITE_NO_SHUTDOWN_HOOK
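
The property goes into the JVM options used to launch the node; a minimal sketch (how JVM_OPTS reaches your launcher depends on your setup):

```shell
# Disable Ignite's own shutdown hook so the application controls shutdown order.
JVM_OPTS="${JVM_OPTS:-} -DIGNITE_NO_SHUTDOWN_HOOK=true"
export JVM_OPTS
```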

21.06.2022 17:27, Yuriy Ivkin wrote:


Greetings!

I am using embedded Apache Ignite in my project and synchronize several 
nodes through it. On app start, after a cache is obtained, it writes 
the available node's requisites into the cache. And I would like to 
remove the requisites before a node shuts down. But when I stop the app 
through systemd, Ignite shuts down faster than I can write anything 
to the cache, and I get the exception: 
org.apache.ignite.internal.processors.cache.CacheStoppedException: 
Failed to perform cache operation (cache is stopped).


Is it possible to somehow disable SIGTERM processing in Ignite and let 
the app control Ignite's shutdown?


--
Best regards,
Yuriy Ivkin


Re: Analyze table query failing with syntax error from sqlline

2022-06-01 Thread Ilya Korol
Hi, are you able to run a simple query like SELECT * FROM 
PRODUCT_TABLE; ?


02.06.2022 00:13, Sachin janani wrote:

Hi,
I am trying to run the ANALYZE table command from sqlline to collect the 
statistics of a table, but it's failing with a parsing error. Following is 
the error that I am getting. Can someone please help me with the right 
syntax? I am following this document: 
https://ignite.apache.org/docs/latest/SQL/sql-statistics



0: jdbc:ignite:thin://127.0.0.1/ > ANALYZE PRODUCT_TABLE;
Error: Failed to parse query. Syntax error in SQL statement "ANALYZE 
PRODUCT_TABLE[*]";  SQL statement:
ANALYZE PRODUCT_TABLE [42000-197] (state=42000,code=1001)
java.sql.SQLException: Failed to parse query. Syntax error in SQL statement "ANALYZE 
PRODUCT_TABLE[*]";  SQL statement:
ANALYZE PRODUCT_TABLE [42000-197]
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1009)
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:234)
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:560)
at sqlline.Commands.executeSingleQuery(Commands.java:1054)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
0: jdbc:ignite:thin://127.0.0.1/ >

Thanks and Regards,
--
*/Sachin Janani/*

Re: Checkpointing is taking long time

2022-04-17 Thread Ilya Korol
1. We have DirectIO disabled: does enabling it impact performance?
It should increase performance, but it's always worth doing some 
benchmarks before using such features in production.

2. When we disabled throttling, we saw 5x better performance; a struggling 
load test completed in 1/5th of the time. What are its side effects if we 
keep it disabled?
You can safely disable it. In this case throttling will still be 
present, but it will use a less intelligent strategy (which may also 
work incorrectly from time to time).


3. Does MaxMemoryDirectSize have any relation to the throughput rate?
I don't know anything regarding this.

From my perspective your checkpoint buffer size is enough; take a look at 
your log messages: cpBufUsed=133, cpBufTotal=1554645


Increasing the checkpoint frequency should spread IO pressure more evenly 
over time, but as I mentioned before, if you decide to increase or decrease 
any IO parameter, you'd better benchmark how it impacts your setup.
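
The knobs mentioned in this thread live on DataStorageConfiguration; a Spring XML sketch (values are illustrative, not recommendations):

```xml
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <!-- More frequent checkpoints spread IO pressure over time -->
    <property name="checkpointFrequency" value="#{30L * 1000}"/>
    <property name="checkpointThreads" value="8"/>
    <!-- Disabling leaves only the fallback throttling strategy -->
    <property name="writeThrottlingEnabled" value="false"/>
  </bean>
</property>
```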


On 2022/04/17 05:43:57 Surinder Mehra wrote:
> Hey thanks for replying. we haven't configured the storage path so by
> default it should be in the work directory. work, wal, walarchive all 
three

> are SSDs. I have the following queries.
>
> 1. We have DirectIO disabled : does enabling it impact the performance if
> enabled?
> 2. When we disabled throttling, we saw 5x better performance. Struggling
> load test completed in 1/5th of time. What are its side effects if we 
keep

> it disabled
> 3. Does MaxMemoryDirectSize have any relation to throughput rate.
> 4. Can the current configuration mentioned in the previous thread be 
scaled

> further? like increasing WalSegment size beyond 1GB and related size of
> walArchive, checkpointbufferSize and MaxMemoryDirectSize jvm parameter.
> 5. We see now due to throttling disabled, WalArchive size is going beyond
> 50G(WalSegment size 1G and checkpoint buffer size 6G). would decreasing
> checkpoint frequency and/or increasing checkpoint threads count increase
> throughput or impact application writes inversely. Currently 
checkpointing

> frequency and threads are default
>
>
> On Sun, Apr 17, 2022 at 6:33 AM Ilya Korol  wrote:
>
> > Hi, from my perspective this looks like your physical storage is not
> > fast enough to handle incoming writes. The markDirty speed is 2x
> > faster than checkpointWrite even in the presence of throttling. You've
> > mentioned that the Ignite work folder is stored on SSD, but what about
> > the PDS folder (DataStorageConfiguration.setStoragePath())?
> >
> > Btw, have you tested your setup with DirectIO disabled?
> >
> > On 2022/04/14 10:55:23 Surinder Mehra wrote:
> > > Hi,
> > > We have an application with ignite thick clients which writes to 
ignite

> > > caches on ignite grid deployed separately. Below is the ignite
> > > configuration per node
> > >
> > > With this configuration, we see throttling happening and
> > checkpointing time
> > > is between 20-30 seconds. Did we miss something in configuration 
or any

> > > other settings we can enable. Any suggestions will be of great help.
> > >
> > > * 100-200 concurrent writes to 25 node cluster
> > > * #partitions 512
> > > * cache backups = 2
> > > * cache mode partitioned
> > > * syncronizationMode : primary Sync
> > > * Off Heap caches
> > > * Server nodes : 25
> > > * RAM : 64G
> > > * maxmemoryDirectSize : 4G
> > > * Heap: 25G
> > >
> > > * persistenceEnabled: true
> > > * data region size : 24GB
> > > * checkPointingBufferSize: 6gb
> > > * walSegmentSize: 1G
> > > * walBufferSize : 256MB
> > > * walarchiveSize: 24G
> > > * writeThrotlingEnabled: true
> > > * checkPointingfreq : 60 sec
> > > * checkPointingThreads: 4
> > > * DirectIO enabled: true
> > >
> > > SSDs atatched:
> > > work volume : 20G
> > > wal volume : 15G
> > > Wal archive volume : 26G
> > >
> > >
> > > Checkpointing logs:
> > >
> > > [10:27:13,237][INFO][db-checkpoint-thread-#230][Checkpointer] 
Checkpoint

> > > started [checkpointId=11749dc0-fd0d-4b5f-8b9a-510e774fec38,
> > > startPtr=WALPointer [idx=26, fileOff=385214751, len=16683],
> > > checkpointBeforeLockTime=29ms, checkpointLockWait=0ms,
> > > checkpointListenersExecuteTime=2ms, checkpointLockHoldTime=3ms,
> > > walCpRecordFsyncDuration=11ms, writeCheckpointEntryDuration=3ms,
> > > splitAndSortCpPagesDuration=30ms, pages=40505, reason='timeout']
> > > [10:27:13,242][INFO][sys-stripe-7-#8][PageMemoryImpl] Throttli

RE: Checkpointing is taking long time

2022-04-16 Thread Ilya Korol
Hi, from my perspective this looks like your physical storage is not 
fast enough to handle incoming writes. The markDirty speed is 2x faster 
than checkpointWrite even in the presence of throttling. You've 
mentioned that the Ignite work folder is stored on SSD, but what about the 
PDS folder (DataStorageConfiguration.setStoragePath())?


Btw, have you tested your setup with DirectIO disabled?

On 2022/04/14 10:55:23 Surinder Mehra wrote:
> Hi,
> We have an application with ignite thick clients which writes to ignite
> caches on ignite grid deployed separately. Below is the ignite
> configuration per node
>
> With this configuration, we see throttling happening and 
checkpointing time

> is between 20-30 seconds. Did we miss something in configuration or any
> other settings we can enable. Any suggestions will be of great help.
>
> * 100-200 concurrent writes to 25 node cluster
> * #partitions 512
> * cache backups = 2
> * cache mode partitioned
> * syncronizationMode : primary Sync
> * Off Heap caches
> * Server nodes : 25
> * RAM : 64G
> * maxmemoryDirectSize : 4G
> * Heap: 25G
>
> * persistenceEnabled: true
> * data region size : 24GB
> * checkPointingBufferSize: 6gb
> * walSegmentSize: 1G
> * walBufferSize : 256MB
> * walarchiveSize: 24G
> * writeThrotlingEnabled: true
> * checkPointingfreq : 60 sec
> * checkPointingThreads: 4
> * DirectIO enabled: true
>
> SSDs atatched:
> work volume : 20G
> wal volume : 15G
> Wal archive volume : 26G
>
>
> Checkpointing logs:
>
> [10:27:13,237][INFO][db-checkpoint-thread-#230][Checkpointer] Checkpoint
> started [checkpointId=11749dc0-fd0d-4b5f-8b9a-510e774fec38,
> startPtr=WALPointer [idx=26, fileOff=385214751, len=16683],
> checkpointBeforeLockTime=29ms, checkpointLockWait=0ms,
> checkpointListenersExecuteTime=2ms, checkpointLockHoldTime=3ms,
> walCpRecordFsyncDuration=11ms, writeCheckpointEntryDuration=3ms,
> splitAndSortCpPagesDuration=30ms, pages=40505, reason='timeout']
> [10:27:13,242][INFO][sys-stripe-7-#8][PageMemoryImpl] Throttling is 
applied

> to page modifications [percentOfPartTime=0.88, markDirty=2121 pages/sec,
> checkpointWrite=1219 pages/sec, estIdealMarkDirty=0 pages/sec,
> curDirty=0.00, maxDirty=0.02, avgParkTime=410172 ns, pages: (total=40505,
> evicted=0, written=10, synced=0, cpBufUsed=133, cpBufTotal=1554645)]
> [10:27:29,935][INFO][grid-timeout-worker-#30][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=214f3c2b, uptime=00:45:00.227]
> ^-- Cluster [hosts=45, CPUs=540, servers=25, clients=20, topVer=75,
> minorTopVer=0]
> ^-- Network [addrs=[127.0.0.1, 192.168.98.141], discoPort=47500,
> commPort=47100]
> ^-- CPU [CPUs=12, curLoad=3.67%, avgLoad=0.82%, GC=0%]
> ^-- Heap [used=5330MB, free=79.18%, comm=20480MB]
> ^-- Off-heap memory [used=1019MB, free=95.92%, allocated=24775MB]
> ^-- Page memory [pages=257976]
> ^-- sysMemPlc region [type=internal, persistence=true,
> lazyAlloc=false,
> ... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.99%,
> allocRam=99MB, allocTotal=0MB]
> ^-- default region [type=default, persistence=true, lazyAlloc=true,
> ... initCfg=24576MB, maxCfg=24576MB, usedRam=1018MB, freeRam=95.86%,
> allocRam=24576MB, allocTotal=3820MB]
> ^-- metastoreMemPlc region [type=internal, persistence=true,
> lazyAlloc=false,
> ... initCfg=40MB, maxCfg=100MB, usedRam=1MB, freeRam=98.78%,
> allocRam=0MB, allocTotal=1MB]
> ^-- TxLog region [type=internal, persistence=true, lazyAlloc=false,
> ... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
> allocRam=99MB, allocTotal=0MB]
> ^-- volatileDsMemPlc region [type=user, persistence=false,
> lazyAlloc=true,
> ... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
> allocRam=0MB]
> ^-- Ignite persistence [used=3821MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=7, qSize=0]
> ^-- Striped thread pool [active=0, idle=12, qSize=0]
> [10:27:38,261][INFO][db-checkpoint-thread-#230][Checkpointer] Checkpoint
> finished [cpId=11749dc0-fd0d-4b5f-8b9a-510e774fec38, pages=40505,
> markPos=WALPointer [idx=26, fileOff=385214751, len=16683],
> walSegmentsCovered=[], markDuration=47ms, pagesWrite=25018ms, fsync=6ms,
> total=25100ms]
>


Re: B+Tree exception causing node to crash

2022-04-16 Thread Ilya Korol

Hi,

There is an issue related to B+Tree corruption whose fix will be included 
in the next release, 2.13: https://issues.apache.org/jira/browse/IGNITE-15990


Does this scenario look similar to your case?

If you have a working reproducer, feel free to submit a new Jira ticket.

15.04.2022 22:04, naved.mu...@ril.com wrote:


Hi,

On one of our Ignite cluster's nodes, we got a *B+Tree* exception, 
due to which the node stopped/disconnected from the cluster.


*Ignite Version:* 2.11.0.

Is this a product bug? If yes, what is the workaround? Also, has 
this been fixed in any version?


*Logs:*

*[2022-04-14T14:50:17,989][INFO 
][wal-file-archiver%EDIFMEDIA_PROD-#235%EDIFMEDIA_PROD%][FileWriteAheadLogManager] 
Copied file 
[src=/datastore2/wal/node00-060224e3-53ac-4555-876e-166a7480c142/.wal, 
dst=/datastore2/archive/node00-060224e3-53ac-4555-876e-166a7480c142/00211270.wal]*


*[2022-04-14T14:50:21,441][ERROR][sys-stripe-17-#18%EDIFMEDIA_PROD%][] 
Critical system error detected. Will be handled accordingly to 
configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, 
timeout=0, super=AbstractFailureHandler 
[ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class 
o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: 
B+Tree is corrupted [pages(groupId, pageId)=[IgniteBiTuple 
[val1=-291739550, val2=844420635435161]], cacheId=-291739550, 
cacheName=MapCustOTTCache, indexName=_key_PK, msg=Runtime failure on 
row: Row@5eaffcf8[ key: com.media.digitalapi.edif.model.MapCustOTTKey 
[idHash=1519412213, hash=2147463230, serviceType=Z0114, partyId=p123], 
val: com.media.digitalapi.edif.model.MapCustOTT [idHash=529725123, 
hash=-1059328655, serviceType=null, updatedBy=TB1KGQEC, 
updateddatetime=2022-04-14 14:50:21.437, partyId=null, 
subscriptionIdList=22414145021425514379098800228113357] ][  *


*org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException: 
B+Tree is corrupted [pages(groupId, pageId)=[**IgniteBiTuple 
[val1=-291739550, val2=844420635435161]], cacheId=-291739550, 
cacheName=MapCustOTTCache, indexName=_key_PK, msg=Runtime failure on 
row: Row@5eaffcf8[ key: com.media.digitalapi.edif.model.MapCustOTTKey 
[idHash=1519412213, hash=2147463230, serviceType=Z0114, partyId=p123], 
val: com.media.digitalapi.edif.model.MapCustOTT [idHash=529725123, 
hash=-1059328655, serviceType=null, updatedBy=TB1KGQEC, 
updateddatetime=2022-04-14 14:50:21.437, partyId=null, 
subscriptionIdList=22414145021425514379098800228113357] ][  ]]*


*    at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexTree.corruptedTreeException(InlineIndexTree.java:581) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2464) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2411) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexImpl.putx(InlineIndexImpl.java:269) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexImpl.onUpdate(InlineIndexImpl.java:251) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.cache.query.index.IndexProcessor.updateIndex(IndexProcessor.java:419) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.cache.query.index.IndexProcessor.updateIndexes(IndexProcessor.java:291) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.cache.query.index.IndexProcessor.store(IndexProcessor.java:142) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:2512) 
[ignite-core-2.11.0.jar:2.11.0]*


*   at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:2697) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1773) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1748) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:2794) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:441) 
[ignite-core-2.11.0.jar:2.11.0]*


*    at 
org.apache.ignite.internal.processo

RE: Re: Apache Ignite H2 Vulnerabilities

2022-04-12 Thread Ilya Korol

Hi Lokesh,

Updates for running Ignite on Java 17 are already in master. Please
take a look:
https://github.com/apache/ignite/blob/master/bin/include/jvmdefaults.sh


On 2022/04/12 10:11:57 Lokesh Bandaru wrote:
> You are fast. :) Was just typing a reply on top of the last one and yours
> is already here.
>
> Ignore the last question, found this,
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.13 .
> *Looking forward to this release. *
>
> *One slightly unrelated question, feel free to ignore. *
> *I know there is no support(or certified) for any version of Java greater
> than 11. *
> *What would it take for 2.13 to be able to run on Java17?*
>
> On Tue, Apr 12, 2022 at 3:36 PM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
> > Code freeze was yesterday. The target release date is 22 April.
> >
> > More here: Apache+Ignite+2.13
> > 
> >
> > On 12 Apr 2022, at 11:03, Lokesh Bandaru  wrote:
> >
> > Thanks for getting back, Stephen.
> > I am aware that Calcite is in the plans.
> > Any tentative timeline as to when 2.13(beta/ga) is going to be made
> > available?
> >
> > Regards.
> >
> > On Tue, Apr 12, 2022 at 2:15 PM Stephen Darlington <
> > stephen.darling...@gridgain.com> wrote:
> >
> >> The H2 project removed support for Ignite some time ago (
> >> https://github.com/h2database/h2database/pull/2227) which makes it
> >> difficult to move to newer versions.
> >>
> >> The next version of Ignite (2.13) has an alternative SQL engine (Apache
> >> Calcite) so over time there will be no need for H2.
> >>
> >> On 11 Apr 2022, at 20:34, Lokesh Bandaru  wrote:
> >>
> >> Resending.
> >>
> >> On Mon, Apr 11, 2022 at 6:42 PM Lokesh Bandaru 
> >> wrote:
> >>
> >>> Hello there, hi
> >>>
> >>> Writing to you with regards to the security vulnerabilities (particularly
> >>> the most recent ones, CVE-2022-xxx and CVE-2021-xxx) in the H2 database and
> >>> the Apache Ignite's dependency on the flagged versions of H2.
> >>> There is an open issue tracking this,
> >>> https://issues.apache.org/jira/browse/IGNITE-16542, which doesn't seem
> >>> to have been fully addressed yet.
> >>> Have these problems been overcome already? Can you please advise?
> >>>
> >>> Thanks.
> >>>
> >>
> >>
> >
>


Re: Unable to change maxWalArchiveSize in ignite 2.11.1

2022-03-22 Thread Ilya Korol

Hi Surinder,

I guess there was an Integer overflow in the expression #{2 * 1024 * 1024
* 1024}, so it was evaluated as -2147483648. Try adding 'L' to one of the
multipliers, like:

#{2 * 1024 * 1024 * 1024L}
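The overflow is plain Java integer arithmetic and can be reproduced outside of Ignite; a minimal sketch (the class name is mine):

```java
public class WalSizeOverflow {
    public static void main(String[] args) {
        // All four operands are int, so the product is computed in 32-bit
        // arithmetic: 2^31 does not fit and wraps to Integer.MIN_VALUE.
        int overflowed = 2 * 1024 * 1024 * 1024;

        // Making one operand long promotes the whole expression to 64-bit
        // arithmetic, yielding the intended 2 GB value.
        long correct = 2 * 1024 * 1024 * 1024L;

        System.out.println(overflowed); // -2147483648
        System.out.println(correct);    // 2147483648
    }
}
```

This is why the configuration is rejected: maxWalArchiveSize must be greater than 0 (or exactly -1 for unlimited), and the wrapped value -2147483648 fails that check.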

On 22.03.2022 23:14, Surinder Mehra wrote:

Hi,
We noticed that WalArchive size is going beyond the default max of 
1GB, so we tried to increase it in DataStorageConfiguration. But while 
starting the ignite node, it always throws the below exception. Could 
you please explain why. complete log in file attached.


Reason for change:
Starting to clean WAL archive [highIdx=992, currSize=2.2 GB, 
maxSize=1.0 GB]


change done:

            <property name="dataStorageConfiguration">
                <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                    <property name="maxWalArchiveSize" value="#{2 * 1024 * 1024 * 1024}"/>

                    <property name="defaultDataRegionConfiguration">
                        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                            <!-- ... -->
                        </bean>
                    </property>

                    <property name="walArchivePath" value="/ignite/walarchive"/>
                </bean>
            </property>


Error log while starting:
PropertyAccessException 1: 
org.springframework.beans.MethodInvocationException: Property 
'maxWalArchiveSize' threw exception; nested exception is 
java.lang.IllegalArgumentException: Ouch! Argument is invalid: Max WAL 
archive size can be only greater than 0 or must be equal to -1 (to be 
unlimited)]




Re: interactive shell

2022-01-18 Thread Ilya Korol
Hi. What about sqlline, which is available out of the box? Is it not
enough for you?


On 18.01.2022 17:57, Adriel Peng wrote:

Hi

Can Ignite have an interactive shell (like pyspark) for SQL similar 
operations in the future version?


Thanks


RE: HASH_JOIN: Index "HASH_JOIN_IDX" not found

2022-01-10 Thread Ilya Korol
I've checked the latest master and didn't find any mention of HASH_JOIN_IDX
in it (except documentation sources). Meanwhile, in the GridGain Community
repository you can find the HashJoinIndex class
(https://github.com/gridgain/gridgain/blob/master/modules/h2/src/main/java/org/gridgain/internal/h2/index/HashJoinIndex.java)
and other code like:


// Parser
private IndexHints parseIndexHints(Table table) {
    read(OPEN_PAREN);
    LinkedHashSet<String> indexNames = new LinkedHashSet<>();
    if (!readIf(CLOSE_PAREN)) {
        do {
            String indexName = readIdentifierWithSchema();
            if (HashJoinIndex.HASH_JOIN_IDX.equalsIgnoreCase(indexName)) {
                indexNames.add(HashJoinIndex.HASH_JOIN_IDX);
            }
            else {
                Index index = table.getIndex(indexName);
                indexNames.add(index.getName());
            }
        } while (readIfMore(true));
    }
    return IndexHints.createUseIndexHints(indexNames);
}

So it looks like this feature's implementation is absent in Ignite (or was
removed for some reason).


On 2022/01/06 09:34:41 38797715 wrote:
> Execute the following script and the error will occur:
>
> CREATE TABLE Country (
>   Code CHAR(3) PRIMARY KEY,
>   Name VARCHAR,
>   Continent VARCHAR,
>   Region VARCHAR,
>   SurfaceArea DECIMAL(10,2),
>   IndepYear SMALLINT,
>   Population INT,
>   LifeExpectancy DECIMAL(3,1),
>   GNP DECIMAL(10,2),
>   GNPOld DECIMAL(10,2),
>   LocalName VARCHAR,
>   GovernmentForm VARCHAR,
>   HeadOfState VARCHAR,
>   Capital INT,
>   Code2 CHAR(2)
> ) WITH "template=partitioned, backups=1, CACHE_NAME=Country,
> VALUE_TYPE=demo.model.Country";
>
> CREATE TABLE City (
>   ID INT,
>   Name VARCHAR,
>   CountryCode CHAR(3),
>   District VARCHAR,
>   Population INT,
>   PRIMARY KEY (ID, CountryCode)
> ) WITH "template=partitioned, backups=1, CACHE_NAME=City,
> KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";
>
>
> CREATE INDEX idx_country_code ON city (CountryCode);
>
> INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES
> (1,'Kabul','AFG','Kabol',178);
> INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES
> (2,'Qandahar','AFG','Qandahar',237500);
>
> INSERT INTO Country(Code, Name, Continent, Region, SurfaceArea,
> IndepYear, Population, LifeExpectancy, GNP, GNPOld, LocalName,
> GovernmentForm, HeadOfState, Capital, Code2) VALUES
> ('AFG','Afghanistan','Asia','Southern and Central
> 
Asia',652090.00,1919,2272,45.9,5976.00,NULL,'Afganistan/Afqanestan','Islamic 


> Emirate','Mohammad Omar',1,'AF');
>
> SELECT *
> FROM city,country USE INDEX(HASH_JOIN_IDX)
> WHERE city.CountryCode = country.code;
>
> see the doc:
>
> https://ignite.apache.org/docs/latest/SQL/distributed-joins#hash-joins
>
> why?
>
> is a bug?
>


Re: IgniteBiPredicate in ThinClient ScanQuery:java.lang.ClassNotFoundException

2021-12-10 Thread Ilya Korol
Peer class loading should help to work around this issue,
shouldn't it?
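If peer class loading is the route, it is enabled on the server nodes via IgniteConfiguration — a config sketch (whether it covers filters sent by thin clients depends on the Ignite version, hence the question):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Let classes such as query filters be loaded from peer nodes. -->
    <property name="peerClassLoadingEnabled" value="true"/>
</bean>
```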


On 10.12.2021 20:56, Pavel Tupitsyn wrote:
Yes, the problem is that the predicate class "ThinClient$1" is not 
deployed on the server.


On Fri, Dec 10, 2021 at 1:24 PM 18624049226 <18624049...@163.com 
> wrote:


[16:44:49,031][SEVERE][client-connector-#72][ClientListenerNioListener]
Failed to process client request
[req=o.a.i.i.processors.platform.client.cache.ClientCach
eScanQueryRequest@1c37c4d2]
class org.apache.ignite.binary.BinaryInvalidTypeException:
ThinClient$1
   at

org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:717)
   at

org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
   at

org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1721)
   at

org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:820)
   at

org.apache.ignite.internal.binary.BinaryObjectImpl.deserialize(BinaryObjectImpl.java:661)
   at

org.apache.ignite.internal.processors.platform.client.cache.ClientCacheScanQueryRequest.createFilter(ClientCacheScanQueryRequest.java:117)
   at

org.apache.ignite.internal.processors.platform.client.cache.ClientCacheScanQueryRequest.process(ClientCacheScanQueryRequest.java:83)
   at

org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:94)
   at

org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:204)
   at

org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:55)
   at

org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
   at

org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
   at

org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
   at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
   at

org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
   at

java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   at

java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassNotFoundException: ThinClient$1
   at

java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
   at

java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
   at
java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
   at java.base/java.lang.Class.forName0(Native Method)
   at java.base/java.lang.Class.forName(Class.java:398)
   at
org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:9064)
   at
org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:9002)
   at

org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:376)
   at

org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:693)
   ... 17 more



On 2021/12/10 17:39, Pavel Tupitsyn wrote:

Thin client scan query supports any predicates.
However, make sure that predicate is deployed on all server nodes
so it can be executed.

Please share the exception details.

On Fri, Dec 10, 2021 at 11:53 AM 38797715 <38797...@qq.com
> wrote:

An exception will be thrown when the following code is executed:

Does the ScanQuery in the thin client not support
IgniteBiPredicate?


public class ThinClient {
    public static void main(String[] args) throws ClientException, Exception {
        ClientConfiguration cfg =
            new ClientConfiguration().setAddresses("localhost:10800");
        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, Person> cache2 =
                client.getOrCreateCache("cache2");
            for (int i = 1; i <= 10; i++) {
                cache2.put(i, new Person((long) i, "a", "b"));
            }
            ClientCache<Integer, BinaryObject> cache3 =
                client.getOrCreateCache("cache2").withKeepBinary();
            IgniteBiPredicate<BinaryObject, BinaryObject> filter =
                new IgniteBiPredicate<BinaryObject, BinaryObject>() {
                    @Override public boolean apply(BinaryObject key,
                        BinaryObject person) {
                        return person.field("id") > 6;
                    }
                };
            try (QueryCursor<Cache.Entry<BinaryObject, BinaryObject>> cur3 =
                cache3.query(new ScanQuery<>(filter))) {
                for (Cache.Entr

RE: CVE-2021-2816[3,4,5] vulnerabilities and Ignite 2.8.1

2021-11-03 Thread Ilya Korol
I guess so. If the vulnerability is present in a particular JAR, then by
excluding that JAR from the classpath the CVE should not affect your
environment.



On 2021/10/27 12:58:12 Dana Milan wrote:
> Hi,
> .
> Can anyone provide a clarification of how jetty is being used by Ignite
> 2.8.1 and whether there is another way to avoid its vulnerabilities when
> using Ignite besides upgrading to a newer Ignite version?
> To be more specific, if I don't enable REST API (by not moving
> ignite-rest-http from libs/optional to libs directory), will it eliminate
> these vulnerabilities from my Ignite node?
>
> Thanks a lot,
> Dana
>


RE: frequent "Failed to shutdown socket" warnings in 2.11.0

2021-11-02 Thread Ilya Korol

Hi MJ.

This is a bug; I've created a JIRA issue for it:
https://issues.apache.org/jira/browse/IGNITE-15867


However, this should not affect the Ignite workflow, since it just produces
excessive stack traces and nothing more.



On 2021/11/03 03:27:42 "MJ" wrote:
> Hi,
>
>
>
>
>  
>
> I experienced frequent "Failed to shutdown socket" warnings (see 
below) in testing ignite 2.11.0.

>
> Compared org.apache.ignite.internal.util.IgniteUtils 2.11.0 
(line:4227) vs 2.10.0 ,  the method( public static void 
close(@Nullable Socket sock, @Nullable IgniteLogger log) ) is newly 
added. Not sure if these methods close/closeQuiet ’s related to 
IGNITE_QUIET setting but it’s always throwing below warnings no matter 
IGNITE_QUIET is true or false .

>
>  
>
> And yes those warnings can be suppressed by setting logging level to 
ERROR. I am not sure if that’s the issue in my code or just flaw in 
ignite. But is there any other way to gracefully clean up them by coding 
?  Those exception stacktraces are not good for production usage.

>
>  
>
>  
>
> 2021-11-03 08:33:05,745 [ WARN] 
[grid-nio-worker-client-listener-2-#34%ignitePoc_primary%] 
org.apache.ignite.internal.processors.odbc.ClientListenerProcessor - 
Failed to shutdown socket: null

>
> java.nio.channels.ClosedChannelException
>
>     at 
sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:797)

>
>     at 
sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:407)

>
>     at 
org.apache.ignite.internal.util.IgniteUtils.close(IgniteUtils.java:4231)

>
>     at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.closeKey(GridNioServer.java:2784)

>
>     at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:2835)

>
>     at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:2794)

>
>     at 
org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:1357)

>
>     at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2508)

>
>     at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2273)

>
>     at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1910)

>
>     at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)

>
>     at 
java.lang.Thread.run(Thread.java:748)    

>
>  
>
>  
>
>  
>
> Thanks,
>
> MJ


RE: Not able to set off heap max memory for Ignite queues

2021-11-02 Thread Ilya Korol

Hi Bharath.

I've checked the sources, and it looks like the method
CollectionConfiguration.getOffHeapMaxMemory() is never called, so the
only way to restrict queue capacity is to use the capacity parameter of
ignite.queue().


One can argue about it, but restricting a queue by element count instead
of memory looks quite natural; still, feel free to submit a JIRA ticket
if you consider this a flaw.



On 2021/11/01 22:28:27 Bharath Kumar Suresh wrote:
> Hello,
>
> I'm trying to set a maximum limit on the memory (not in terms of the 
number

> of elements) of an ignite queue using
> CollectionConfiguration::setOffHeapMaxMemory
> 
.

> The snippet below is how I try to achieve the same:
>
> CollectionConfiguration queueConfiguration = new
> CollectionConfiguration();
>
> 
queueConfiguration.setOffHeapMaxMemory(DataSize.ofMegabytes(2L).toBytes());

> ignite.queue("test-queue", 0, queueConfiguration);
>
> As seen above, I set the cap parameter to 0 since I want to define 
the size

> of the queue in terms of actual memory (2MB in the above example) and not
> the number of elements in the queue.
>
> But somehow this setting doesn’t restrict the actual size of the queue.
> What am I missing? Could someone please help?
>
> Thanks,
> Bharath
>


RE: Ignite Cache operations get stuck when multiple thin clients try to perform CRUD operations in parallel with partition map exchange

2021-11-02 Thread Ilya Korol

Hi Sumit,

What Ignite version do you use?
AFAIK, partition map exchange is a kind of "stop the world" activity for
the cluster, so any other actions on the cluster (like cache creation)
would be suspended until the PME ends.
If all of your clients concurrently try to create the same cache, it's OK
that there are some rolled-back transactions, but if your cluster became
unresponsive after that, it looks like a bug, so you can submit a JIRA
ticket with steps to reproduce the issue.


And if you experience a deadlock, that also looks like a bug.

On 2021/11/02 13:45:49 Sumit Deshinge wrote:
> Hi,
>
> I have apache ignite cluster of 3 ignite server and more than 20 ignite
> thin clients (each thin client being on separate VM). These thin clients
> are trying to create caches at approximately the same time parallely and
> also starting with cache CRUD operations after that.
>
> Looks like partition map exchange process and cache CRUD operations in
> parallel are causing deadlock or lock acquire failures.
>
> *What should be the strategy to handle this scenario ?*
>
> Ignite server has below errors:
>
> *Exception stack trace 1:*
>
> WARNING: Dumping the near node thread that started transaction
> [xidVer=GridCacheVersion [topVer=247332659, order=1635852705217,
> nodeOrder=1], nodeId=2735bef0-7404-41e3-843f-7043490c9d84]
> Stack trace of the transaction owner thread:
> Thread [name="client-connector-#56%perf-dn1%", id=93, state=WAITING,
> blockCnt=5023, waitCnt=36165]
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
> at o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
> at 
o.a.i.i.processors.cache.GridCacheAdapter$41.op(GridCacheAdapter.java:3430)
> at 
o.a.i.i.processors.cache.GridCacheAdapter$41.op(GridCacheAdapter.java:3423)
> at 
o.a.i.i.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4480)
> at 
o.a.i.i.processors.cache.GridCacheAdapter.remove0(GridCacheAdapter.java:3423)
> at 
o.a.i.i.processors.cache.GridCacheAdapter.remove(GridCacheAdapter.java:3405)
> at 
o.a.i.i.processors.cache.GridCacheAdapter.remove(GridCacheAdapter.java:3388)
> at 
o.a.i.i.processors.cache.IgniteCacheProxyImpl.remove(IgniteCacheProxyImpl.java:1438)
> at 
o.a.i.i.processors.cache.GatewayProtectedCacheProxy.remove(GatewayProtectedCacheProxy.java:964)
> at 
o.a.i.i.processors.platform.client.cache.ClientCacheRemoveKeyRequest.process(ClientCacheRemoveKeyRequest.java:41)
> at 
o.a.i.i.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:77)
> at 
o.a.i.i.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:204)
> at 
o.a.i.i.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:55)
> at 
o.a.i.i.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at 
o.a.i.i.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at 
o.a.i.i.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)

> at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:120)
> at o.a.i.i.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

> at java.lang.Thread.run(Thread.java:748)
>
> *Exception stack trace 2:*
>
> WARNING: >>> Transaction [startTime=11:39:27.214,
> curTime=11:40:36.277, systemTime=0, userTime=69063, tx=GridNearTxLocal
> [mappings=IgniteTxMappingsImpl [], nearLocallyMapped=false,
> colocatedLocallyMapped=false, needCheckBackup=null,
> hasRemoteLocks=false, trackTimeout=false, systemTime=44700,
> systemStartTime=0, prepareStartTime=0, prepareTime=0,
> commitOrRollbackStartTime=0, commitOrRollbackTime=0, lb=null,
> mvccOp=null, qryId=-1, crdVer=0,
> thread=client-connector-#57%perf-dn1%, mappings=IgniteTxMappingsImpl
> [], super=GridDhtTxLocalAdapter [nearOnOriginatingNode=false,
> span=o.a.i.i.processors.tracing.NoopSpan@4a931268,
> nearNodes=KeySetView [], dhtNodes=KeySetView [], explicitLock=false,
> super=IgniteTxLocalAdapter [completedBase=null,
> sndTransformedVals=false, depEnabled=false, txState=IgniteTxStateImpl
> [activeCacheIds=[], recovery=null, mvccEnabled=null,
> mvccCachingCacheIds=[], txMap=EmptySet []], super=IgniteTxAdapter
> [xidVer=GridCacheVersion [topVer=247332659, order=1635852705226,
> nodeOrder=1], writeVer=null, implicit=false, loc=true, threadId=95,
> startTime=1635853167214, nodeId=2735bef0-7404-41e3-843f-7043490c9d84,
> isolation=REPEATABLE_READ, concurrency=PESSIMISTIC, timeout=0,
> sysInvalidate=false, sys=false, plc=2, commitVer=null,
> finalizing=NONE, invalidParts=null, state=SUSPENDED, timedOut=false,
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0],
> mvccSnapshot=null, skipCompletedVers=false, pare

Re: apache ignite 2.10.0 heap starvation

2021-09-27 Thread Ilya Korol
Actually, the Query interface doesn't define a close() method, but
QueryCursor does.
In your snippets you're using the try-with-resources construction for SELECT
queries, which is good, but when you run a MERGE INTO query you also
get a QueryCursor as the result of


igniteCacheService.getCache(ID, IgniteCacheType.LABEL).query(insertQuery);

so maybe these QueryCursor objects still hold some resources/memory. The
Javadoc for QueryCursor states that you should always close cursors.


To simplify cursor closing there is a cursor.getAll() method that will
do this for you under the hood.
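The closing pattern itself is ordinary Java try-with-resources; the sketch below uses a hypothetical stand-in class instead of the real QueryCursor (which extends both Iterable and AutoCloseable) just to show that the cursor gets closed even when its result is never iterated, as with a MERGE INTO:

```java
// Hypothetical stand-in for Ignite's QueryCursor, only to illustrate
// the closing pattern; it is not part of any Ignite API.
class FakeCursor implements AutoCloseable {
    boolean closed;

    @Override public void close() {
        closed = true;
    }
}

public class CursorClosingDemo {
    public static void main(String[] args) {
        FakeCursor cur = new FakeCursor();

        // try-with-resources closes the cursor even though nothing is
        // read from it -- the situation of a DML query like MERGE INTO.
        try (FakeCursor c = cur) {
            // execute the statement; no iteration needed
        }

        System.out.println(cur.closed); // prints "true"
    }
}
```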



On 2021/09/13 06:17:21, Ibrahim Altun  wrote:
> Hi Ilya,>
>
> since this is production environment i could not risk to take heap 
dump for now, but i will try to convince my superiors to get one and 
analyze it.>

>
> Queries are heavily used in our system but aren't they autoclosable 
objects? do we have to close them anyway?>

>
> here are some usage examples on our system;>
> --insert query is like this; MERGE INTO "ProductLabel" ("productId", 
"label", "language") VALUES (?, ?, ?)>
> igniteCacheService.getCache(ID, 
IgniteCacheType.LABEL).query(insertQuery);>

>
> another usage example;>
> --sqlFieldsQuery is like this; >
> String sql = "SELECT _val FROM \"UserRecord\" WHERE \"email\" IN (?)";>
> SqlFieldsQuery sqlFieldsQuery = new SqlFieldsQuery(sql);>
> sqlFieldsQuery.setLazy(true);>
> sqlFieldsQuery.setArgs(emails.toArray());>
>
> try (QueryCursor> ignored = igniteCacheService.getCache(ID, 
IgniteCacheType.USER).query(sqlFieldsQuery)) {...}>

>
>
>
> On 2021/09/12 20:28:09, Shishkov Ilya  wrote: >
> > Hi, Ibrahim!>
> > Have you analyzed the heap dump of the server node JVMs?>
> > In case your application executes queries are their cursors closed?>
> > >
> > пт, 10 сент. 2021 г. в 11:54, Ibrahim Altun :>
> > >
> > > Igniters any comment on this issue, we are facing huge GC 
problems on>

> > > production environment, please advise.>
> > >>
> > > On 2021/09/07 14:11:09, Ibrahim Altun >
> > > wrote:>
> > > > Hi,>
> > > >>
> > > > totally 400 - 600K reads/writes/updates>
> > > > 12core>
> > > > 64GB RAM>
> > > > no iowait>
> > > > 10 nodes>
> > > >>
> > > > On 2021/09/07 12:51:28, Piotr Jagielski  wrote:>
> > > > > Hi,>
> > > > > Can you provide some information on how you use the cluster? 
How many>
> > > reads/writes/updates per second? Also CPU / RAM spec of cluster 
nodes?>

> > > > >>
> > > > > We observed full GC / CPU load / OOM killer when loading big 
amount of>
> > > data (15 mln records, data streamer + allowOverwrite=true). We've 
seen>
> > > 200-400k updates per sec on JMX metrics, but load up to 10 on 
nodes, iowait>
> > > to 30%. Our cluster is 3 x 4CPU, 16GB RAM (already upgradingto 
8CPU, 32GB>

> > > RAM). Ignite 2.10>
> > > > >>
> > > > > Regards,>
> > > > > Piotr>
> > > > >>
> > > > > On 2021/09/02 08:36:07, Ibrahim Altun >
> > > wrote:>
> > > > > > After upgrading from 2.7.1 version to 2.10.0 version ignite 
nodes>

> > > facing>
> > > > > > huge full GC operations after 24-36 hours after node start.>
> > > > > >>
> > > > > > We try to increase heap size but no luck, here is the start>
> > > configuration>
> > > > > > for nodes;>
> > > > > >>
> > > > > > JVM_OPTS="$JVM_OPTS -Xms12g -Xmx12g -server>
> > > > > >>
> > > 
-javaagent:/etc/prometheus/jmx_prometheus_javaagent-0.14.0.jar=8090:/etc/prometheus/jmx.yml> 


> > > > > > -Dcom.sun.management.jmxremote>
> > > > > > -Dcom.sun.management.jmxremote.authenticate=false>
> > > > > > -Dcom.sun.management.jmxremote.port=49165>
> > > > > > -Dcom.sun.management.jmxremote.host=localhost>
> > > > > > -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=1g>
> > > > > > -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true>
> > > > > > -DIGNITE_WAL_MMAP=true 
-DIGNITE_BPLUS_TREE_LOCK_RETRIES=10>

> > > > > > -Djava.net.preferIPv4Stack=true">
> > > > > >>
> > > > > > JVM_OPTS="$JVM_OPTS -XX:+AlwaysPreTouch -XX:+UseG1GC>
> > > > > > -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC>
> > > > > > -XX:+UseStringDeduplication 
-Xloggc:/var/log/apache-ignite/gc.log>

> > > > > > -XX:+PrintGCDetails -XX:+PrintGCDateStamps>
> > > > > > -XX:+PrintTenuringDistribution -XX:+PrintGCCause>
> > > > > > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10>
> > > > > > -XX:GCLogFileSize=100M">
> > > > > >>
> > > > > > here is the 80 hours of GC analyize report:>
> > > > > >>
> > > 
https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjEvMDgvMzEvLS1nYy5sb2cuMC5jdXJyZW50LnppcC0tNS01MS0yOQ==&channel=WEB> 


> > > > > >>
> > > > > > do we need more heap size or is there a BUG that we need to 
be aware?>

> > > > > >>
> > > > > > here is the node configuration:>
> > > > > >>
> > > > > > >
> > > > > > http://www.springframework.org/schema/beans";>
> > > > > > xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
> > > > > > xsi:schemaLocation=">
> > > > > > http://www.springframework.org/schema/beans>
> > > > > > 
http://www.springframework.org/schema/beans/spring-beans.xsd";>>

> > > > > > 
> > > >

Re: Questions about baseline auto-adjust

2021-09-27 Thread Ilya Korol

Hi,

It looks like baseline topology auto-adjustment is in progress. Did you
check the cluster state later?
Also, to track auto-adjustment events, please check your logs for messages
like:


Baseline auto-adjust will be executed right now ...
Baseline auto-adjust will be executed in ...
Baseline auto adjust data is expired (will not be scheduled) [data= ...
Baseline auto adjust data is targeted to obsolete version (will not be 
scheduled) ...

New baseline timeout object was ( successfully scheduled / rejected ) ...

On 2021/09/26 10:17:24, xin  wrote:
>
> Hi All,>    We are using Ignite 2.10.0 and we have a question about baseline auto-adjust.>
auto-just.>

>
> As the picture shows, cluster state is active, baseline auto adjustment is
> enabled, softTimeout=1>

>
>
> When a node leaves the cluster for 10 seconds, BaselineTopology didn't
> change; no node joined or left during this period.>
> Why does the parameter not take effect?>
> Why the parameter does not take effect?>
> ->
>
> Thanks & Regards,>
> Xin Chang>
>
>
> Sent from Mail for Windows>
>
>


Re: SqlOnheapCache failure

2021-09-24 Thread Ilya Korol

AFAIK there were no significant changes related to this feature.

Also note that by default there is no restriction on the amount of on-heap
memory that would be used for SQL caching, so try to add some tuning,
like increasing heap memory or limiting the number of cached rows via
*setSqlOnheapCacheMaxSize()*.
But I agree that even without these options the Ignite process should
not crash, so your case may be considered a bug. It would be great if
you post a JIRA ticket with your reproducer, stack traces and other
related info.


On 2021/09/23 21:37:12, Hamed Zahedifar  wrote:
> Hello Everyone, >
>
> I saw a process exit when enabling SqlOnheapCache
> (setSqlOnheapCacheEnabled set to true) in the recent 2.11 update. I did not
> find a mention or any resource for a possible change in the API, so is it
> probably a bug?>

>
> Cheers>
>


Re: node not detecting other nodes in baseline topology

2021-08-12 Thread Ilya Korol

Hi,

Are you running every Ignite node on a dedicated machine? When you
restart an Ignite node, do you also restart the whole machine? Is there any
kind of DHCP? Could you also share more logs from the 11th node and from
another server node?


On 2021/08/12 05:57:31, BEELA GAYATRI  wrote:
> Dear Team,>
>
> we have 15 server nodes which are in baseline topology. We have 
restarted the nodes, when one of the server nodes(11th node )is 
restarted it is showing 14 nodes left for auto activation as shown below 
,even though other nodes are already restarted. After that we have 
stopped all the 15 nodes and restarted the 11th node first and rest all 
other nodes later , then it started recognizing all the other nodes.>
> Please suggest what might be the cause , and this scenario is 
occurring randomly.>

>
> PFA configuration>
>
> 
[17:01:05,539][INFO][disco-event-worker-#40%WORKER%][GridDiscoveryManager] 
^-- Baseline [id=0, size=15, online=1, offline=14]>
> 
[17:01:05,539][INFO][disco-event-worker-#40%MATCHERWORKER%][GridDiscoveryManager] 
^-- 14 nodes left for auto-activation>
> 
[17:01:05,619][INFO][tcp-disco-sock-reader-#5%WORKER%][TcpDiscoverySpi] 
Finished serving remote node connection [rmtAddr=/10.10.180.141:38055, 
rmtPort=38055>

>


Re: On-demand replication

2021-08-11 Thread Ilya Korol
I guess you have the option to move the whole PDS (if persistence is enabled) via 
SSH from one cluster to another, but this approach can be tricky 
because the distribution of cache partitions may not be the same on different 
clusters.




On 2021/07/29 16:55:28,  wrote:
> Hello,>
>
> What's the best way to manually (on-demand) replicate data between 2 
clusters?>

>
> The GridGain replication (and its full-state cache transfers) is not 
really usable because it starts continuous replication immediately 
(cannot be prevented) and cannot be stopped, only paused where health 
check is still going on against the replica cluster and sender nodes are 
still accumulating data to storage (unless replication is stopped for 
all caches manually).>

>
> Regards,>
> Michal>
>
> 
_> 

> �This message is for information purposes only, it is not a 
recommendation, advice, offer or solicitation to buy or sell a product 
or service nor an official confirmation of any transaction. It is 
directed at persons who are professionals and is not intended for retail 
customer use. Intended for recipient only. This message is subject to 
the terms at: www.barclays.com/emaildisclaimer.>

>
> For important disclosures, please see: 
www.barclays.com/salesandtradingdisclaimer regarding market commentary 
from Barclays Sales and/or Trading, who are active market participants; 
https://www.investmentbank.barclays.com/disclosures/barclays-global-markets-disclosures.html 
regarding our standard terms for the Investment Bank of Barclays where 
we trade with you in principal-to-principal wholesale markets 
transactions; and in respect of Barclays Research, including disclosures 
relating to specific issuers, please see 
http://publicresearch.barclays.com.� >
> 
_> 

> If you are incorporated or operating in Australia, please see 
https://www.home.barclays/disclosures/importantapacdisclosures.html for 
important disclosure.>
> 
_> 

> How we use personal information see our privacy notice 
https://www.investmentbank.barclays.com/disclosures/personalinformationuse.html 
>
> 
_> 


>


Re: Apache Ignite Repo 403 Forbidden

2021-08-11 Thread Ilya Korol

Hi,

Apache moved Ignite's repositories here:
https://apache.jfrog.io/artifactory/ignite-deb/
https://apache.jfrog.io/artifactory/ignite-rpm/

https://stackoverflow.com/questions/68493785/ignite-deb-package-mirrors



On 2021/07/22 20:53:55, Mustafa Sunka  wrote:
> looking more into this, it appears to be due to the bintray sunset? 
Are there updated instructions on how to install Ignite via a deb repo? 
The below instructions are out of date and 
http://apache.org/dist/ignite/deb/ redirects to bintray.>

>
> https://ignite.apache.org/docs/2.9.0/installation/deb-rpm>
>
> On 2021/07/22 19:20:13, Mustafa Sunka  wrote: >
> > Posted this in @dev but it looks like this might the right list for 
errors.>

> > >
> > This looks very similar to an issue from last year:>
> > 
http://apache-ignite-users.70518.x6.nabble.com/Apache-Ignite-downloads-are-redirecting-from-https-to-http-td31428.html> 


> > >
> > When I attempt to run `apt-get update` on Ubuntu 18.04 with the 
Apache Ignite repository in /etc/apt/sources.list, the update fails due 
to an https to http redirect. Here's the output from stdout:>

> > >
> > >
> > WARNING: apt does not have a stable CLI interface. Use with caution 
in scripts.>

> > >
> > Hit:1 http://azure.archive.ubuntu.com/ubuntu bionic InRelease>
> > Get:2 http://azure.archive.ubuntu.com/ubuntu bionic-updates 
InRelease [88.7 kB]>
> > Get:4 http://azure.archive.ubuntu.com/ubuntu bionic-backports 
InRelease [74.6 kB]>

> > Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease>
> > Hit:6 https://packages.microsoft.com/ubuntu/18.04/prod bionic 
InRelease>
> > Err:3 https://dl.bintray.com/apache/ignite-deb apache-ignite 
InRelease>

> > 403 Forbidden [IP: 52.38.32.109 443]>
> > Reading package lists...>
> > E: Failed to fetch 
http://apache.org/dist/ignite/deb/dists/apache-ignite/InRelease 403 
Forbidden [IP: 52.38.32.109 443]>
> > E: The repository 'http://apache.org/dist/ignite/deb apache-ignite 
InRelease' is not signed.>

> > >
>


Re: "Failed to update keys" error

2021-08-10 Thread Ilya Korol

Hi,

The following lines of the stack trace suggest that the issue may 
indeed be related to the class loader:


Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to deploy 
class for local deployment 
[clsName=org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor$ModifyingEntryProcessor,
 ldr=io.iec.edp.caf.app.manager.classloader.CAFClassLoader@544fa968]
at 
org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager.registerClass(GridCacheDeploymentManager.java:666)

Try debugging the GridCacheDeploymentManager.registerClass() method; maybe 
you will get some insight into why this happens.
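As a side note, and purely as an assumption rather than a confirmed diagnosis: class deployment behavior also depends on the peer class loading setting, so it may be worth checking how it is configured in both environments. A minimal sketch of the relevant option:

```java
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingConfig {
    public static IgniteConfiguration config() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Peer class loading changes how Ignite deploys classes across nodes,
        // which can interact with a custom class loader such as CAFClassLoader.
        cfg.setPeerClassLoadingEnabled(true);
        return cfg;
    }
}
```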




On 2021/07/19 12:48:02, y  wrote:
> Hi Igniters:
> I have a very strange problem. I have two code environments. One is the 
simplest program (a "Hello world" program), which can execute SQL statements 
correctly. The other is the formal system, which uses 
Spring Boot and a custom class loader named CAFClassLoader. DML 
statements cannot be executed on the formal system, but DQL statements 
can ('select ...' is OK). Part of the error messages are as 
follows.
>

>
>
> What makes me confused is that the code of the two environments is 
the same! I really can't figure out why. Does anyone know why? Attached 
is the code and complete error information.>

>
>
> Error message>
> Caused by: java.sql.SQLException: Failed to update keys (retry update 
if possible).: [2]>
> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.processPage(DmlBatchSender.java:248)> 

> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.sendBatch(DmlBatchSender.java:196)> 

> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.add(DmlBatchSender.java:124)> 

> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.doUpdate(DmlUtils.java:255)> 


> ... 143 more>
> Caused by: class 
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed 
to update keys (retry update if possible).: [2]>
> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.doUpdate(DmlUtils.java:280)> 

> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.processSelectResult(DmlUtils.java:171)> 

> at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateNonTransactional(IgniteH2Indexing.java:2899)> 

> at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdate(IgniteH2Indexing.java:2753)> 

> at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateDistributed(IgniteH2Indexing.java:2683)> 

> at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeDml(IgniteH2Indexing.java:1186)> 

> at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1112)> 

> at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2779)> 

> at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2775)> 

> at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)> 

> at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3338)> 

> at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$2(GridQueryProcessor.java:2795)> 

> at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2833)> 

> at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2769)> 

> at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2696)> 

> at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:819)> 


> ... 128 more>
> Caused by: java.sql.SQLException: Failed to update keys (retry update 
if possible).: [2]>
> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.processPage(DmlBatchSender.java:248)> 

> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.sendBatch(DmlBatchSender.java:196)> 

> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.add(DmlBatchSender.java:124)> 

> at 
org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.doUpdate(DmlUtils.java:255)> 


> ... 143 more>
> Caused by: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]>
> at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)> 

> at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.onPrimaryResponse(GridNearAtomicUpdateFuture.java:413)> 

> at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdat

Re: CacheInvalidStateException | all partition owners have left the grid, partition data has been lost

2021-08-10 Thread Ilya Korol

Hi,

Which Ignite version do you use? Have you enabled any optional Ignite libs? Am 
I right that you're trying to call the Ignite REST API? Do you know which 
endpoint call produces the described error? Which client do you use to call this 
endpoint? Can you repeat the call with another tool such as curl and check whether the 
error is reproducible? Do you use some kind of SSL termination in front of the Ignite REST 
API? (The 0x16 0x03 0x03 bytes at the start of the rejected request look like a 
TLS handshake sent to a plain-text port.)

It also looks like your cluster is facing some network/discovery issues (because of the 
"all partition owners have left the grid, partition data has been lost" 
messages). Are you sure your network is OK?

PS. I didn't find any Tomcat usage in the Ignite repo (at least in the current master), 
so I'm struggling to imagine how this issue could appear in Ignite itself, so please 
give us more details about your Ignite environment.



On 2021/07/20 21:42:58,  wrote:
> We have a 3 node cluster with persistence enabled, sometimes I see 
below error happening.>

>
> Ignite Server log>
>
> 2021-07-19 21:12:13,650 INFO [http-nio-9050-exec-1] 
o.a.c.h.Http11Processor [DirectJDKLog.java:175] Error parsing HTTP 
request header>
> Note: further occurrences of HTTP request parsing errors will be 
logged at DEBUG level.>
> java.lang.IllegalArgumentException: Invalid character found in method 
name 
[0x160x030x030x010xb40x010x000x010xb00x030x030x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x01p0x000x010x000x020x000x030x000x040x000x050x000x060x000x070x000x080x000x090x000x0a0x000x0b0x000x0c0x000x0d0x000x0e0x000x0f0x000x100x000x110x000x120x000x130x000x140x000x150x000x160x000x170x000x180x000x190x000x1a0x000x1b0x00/0xx0010x0020x0030x0040x0050x0060x0070x0080x0090x00:0x00;0x00<0x00=0x00>0x00?0x00@0x00g0x00h0x00i0x00j0x00k0x00l0x00m0x00A0x00B0x00C0x00D0x00E0x00F0x000x840x000x850x000x860x000x870x000x880x000x890x000xba0x000xbb0x000xbc0x000xbd0x000xbe0x000xbf0x000xc00x000xc10x000xc20x000xc30x000xc40x000xc50x000x9c0x000x9d0x000x9e0x000x9f0x000xa00x000xa10x000xa20x000xa30x000xa40x000xa50x000xa60x000xa70xc00x010xc00x020xc00x030xc00x040xc00x050xc00x060xc00x070xc00x080xc00x090xc00x0a0xc00x0b0xc00x0c0xc00x0d0xc00x0e0xc00x0f0xc00x100xc00x110xc00x120xc00x130xc00x140xc00x150xc00x160xc00x170xc00x180xc00x190xc0#0xc0$0xc0%0xc0&0xc0'0xc0(0xc0)0xc0*0xc0+0xc0,0xc0-0xc0.0xc0/0xc010xc000xc020xc0s0xc0r0xc0t0xc0u0xc0v0xc0w0xc0x0xc0z0xc0y0xc0{0xc0|0xc0}0xc0~0xc00x7f0xc00x800xc00x810xc00x820xc00x830xc00x840xc00x850xc00x860xc00x870xc00x880xc00x890xc00x8a0xc00x8b0xc00x8c0xc00x8d0xc00x8e0xc00x8f0xc00x900xc00x910xc00x920xc00x930xc00x940xc00x950xc00x960xc00x970xc00x980xc00x990xc00x9a0xc00x9b0x000x960x000x970x000x980x000x990x000x9a0x000x9b0xcc0xa80xcc0xa90xcc0xaa0xcc0xab0xcc0xac0xcc0xad0xcc0xae0x020x000x010x000x160x000x0a0x000x0a0x000x080x000x170x000x190x000x180x000x160x000x0b0x000x040x030x000x01...]. 
HTTP method names must be tokens>
> at 
org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:418)> 

> at 
org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:260)>
> at 
org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)> 

> at 
org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)> 

> at 
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590)> 

> at 
org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)> 

> at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)> 

> at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)> 

> at 
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)> 


> at java.lang.Thread.run(Thread.java:748)>
> 2021-07-19 21:12:14,114 WARN 
[grid-nio-worker-tcp-rest-0-#38%blade-cache%] 
o.a.i.i.p.r.p.t.GridTcpRestProtocol [JavaLogger.java:295] Client 
disconnected abruptly due to network connection loss or because the 
connection was left open on application shutdown. [cls=class 
o.a.i.i.util.nio.GridNioException, msg=Failed to parse incoming packet 
(invalid packet start) [ses=GridSelectorNioSessionImpl 
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0 
lim=441 cap=8192], super=AbstractNioClientWorker [idx=0, bytesRcvd=0, 
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker 
[name=grid-nio-worker-tcp-rest-0, igniteInstanceName=blade-cache, 
finished=false, heartbeatTs=1626743534110, hashCode=507494530, 
interrupted=false, 
runner=grid-nio-worker-tcp-rest-0-#38%blade-cache%]]], writeBuf=null, 
readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, 
outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl 
[locAddr=/10.148.213.242:11211, rmtAddr=/10.148.213.242:54330, 
createTime=1626743534110, closeTime=0, bytesSent=0, bytesRcvd=441, 
bytesSent0=0, bytesRcvd0=441, sndSchedTime=1626743534110, 
lastSndTime=1626743534110, lastRcvTime=1626743534110, read


Re: Ignite Sink Connector in Apache Ignite 2.10.0

2021-08-10 Thread Ilya Korol

Hi Shubham Shirur,

AFAIK the Kafka module was moved to a separate repository, 
https://github.com/apache/ignite-extensions, and is now released 
independently.

Concerning the missing optional Kafka module in the Ignite distribution: 
it looks like the existing docs are a bit outdated and should be fixed, because 
neither the ignite-kafka nor the ignite-kafka-ext module ships with the Ignite 
distribution.
In any case, if you want to use the sink connector you need to provide the required 
jars on the Kafka classpath. You can extract all required modules from 
the Ignite distribution except ignite-kafka-ext (that one you can get 
separately from the Maven repository: 
https://mvnrepository.com/artifact/org.apache.ignite/ignite-kafka-ext/1.0.0)

Also check out these docs: 
https://github.com/apache/ignite-extensions/tree/master/modules/kafka-ext




11.08.2021 03:39, Shubham Shirur wrote:

Hi,

I am curious as to why I am unable to find the ignite-kafka folder 
under libs/optional  of Apache Ignite 2.10.0 binary release. I want to 
use a kafka ignite sink connector.


Can you please help me with this?

Thank you,
Shubham


Re: java.lang.IllegalStateException: Cache doesn't exist

2021-06-17 Thread Ilya Korol

Hi, Vladislav.

Did you also deploy the appropriate cache configuration along with the Ignite 
server? See the examples in the Ignite docs or the Ignite repo for how to 
define/configure caches.
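For illustration, a common way to avoid this failure is to create the cache on startup if it does not exist yet; the cache name below comes from the stack trace, while the key/value types are assumptions:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.CacheConfiguration;

public class ImagesCacheBootstrap {
    public static IgniteCache<String, byte[]> imagesCache(Ignite ignite) {
        // getOrCreateCache creates the cache when it is not configured on any node,
        // whereas ignite.dataStreamer("images-cache") fails if the cache is absent.
        return ignite.getOrCreateCache(new CacheConfiguration<String, byte[]>("images-cache"));
    }
}
```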



17.06.2021 02:21, Vladislav Shipugin wrote:
I started the ignite-server in Kubernetes and am trying to connect the 
ignite-client to it.


On Ignite-client I use SpringBoot application with igniteDataStreamer 
and KafkaStreamer with start onLifecycleEvent AFTER_NODE_START.


And I have this problem:
org.apache.ignite.IgniteException: Failed to start IgniteSpringBean
at 
org.apache.ignite.IgniteSpringBean.afterSingletonsInstantiated(IgniteSpringBean.java:177) 
~[ignite-spring-2.9.1.jar!/:2.9.1]
at 
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:963) 
~[spring-beans-5.3.6.jar!/:5.3.6]
at 
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) 
~[spring-context-5.3.6.jar!/:5.3.6]
at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) 
~[spring-context-5.3.6.jar!/:5.3.6]
at 
org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:144) 
~[spring-boot-2.4.5.jar!/:2.4.5]
at 
org.springframework.boot.SpringApplication.refresh(SpringApplication.java:782) 
[spring-boot-2.4.5.jar!/:2.4.5]
at 
org.springframework.boot.SpringApplication.refresh(SpringApplication.java:774) 
[spring-boot-2.4.5.jar!/:2.4.5]
at 
org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:439) 
[spring-boot-2.4.5.jar!/:2.4.5]
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:339) 
[spring-boot-2.4.5.jar!/:2.4.5]
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:1340) 
[spring-boot-2.4.5.jar!/:2.4.5]
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:1329) 
[spring-boot-2.4.5.jar!/:2.4.5]
at ru.shipa.ignite.persistence.ApplicationKt.main(Application.kt:55) 
[classes!/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[na:1.8.0_212]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[na:1.8.0_212]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
~[na:1.8.0_212]

at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_212]
at 
org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) 
[app.jar:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) 
[app.jar:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) 
[app.jar:na]
at 
org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) 
[app.jar:na]
Caused by: org.apache.ignite.IgniteCheckedException: Cache doesn't 
exist: images-cache
at 
org.apache.ignite.internal.IgniteKernal.notifyLifecycleBeans(IgniteKernal.java:809) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at 
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1386) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2052) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1698) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1114) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:612) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:66) 
~[ignite-spring-2.9.1.jar!/:2.9.1]
at 
org.apache.ignite.IgniteSpringBean.afterSingletonsInstantiated(IgniteSpringBean.java:174) 
~[ignite-spring-2.9.1.jar!/:2.9.1]

... 19 common frames omitted
Caused by: java.lang.IllegalStateException: Cache doesn't exist: 
images-cache
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.cacheConfiguration(GridCacheProcessor.java:4515) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.<init>(DataStreamerImpl.java:308) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.dataStreamer(DataStreamProcessor.java:183) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at 
org.apache.ignite.internal.IgniteKernal.dataStreamer(IgniteKernal.java:3701) 
~[ignite-core-2.9.1.jar!/:2.9.1]
at 
ru.shipa.ignite.persistence.domain.IgniteKafkaLifecycleBean.initKafkaToIgniteStreamer(IgniteKafkaLifecycleBean.kt:72) 
~[classes!/:na]
at 
ru.shipa.ignite.persistence.domain.IgniteKafkaLifecycleBean.onLifecycleEvent(IgniteKafkaLifecycleBean.kt:61) 
~[classes!/:na]
at 
org.apache.ignite.internal.IgniteKernal.notifyLifecycleBeans(IgniteKernal.java:806) 
~[ignite-core-2.9.1.jar!/:2.9.1]

... 26 common frames omitted

--
Yours faithfully,

Vladislav Shipugin

E: vladshipu...@gmail.com 

Re: Intermittent Error

2021-06-15 Thread Ilya Korol

Are there any logs on the server side when you face this issue?


16.06.2021 11:19, Moshe Rosten wrote:

Greetings,

I'm attempting to retrieve a list of values from this query line:
List<List<?>> res = conn.sqlQuery("select * from TWSources");
It works sometimes perfectly fine, and at other times it gives me this 
error:
org.apache.ignite.internal.client.thin.ClientServerError: Ignite 
failed to process request [5]: Failed to set schema for DB connection 
for thread [schema=myCache] (server status code [1])

What could the problem be?




Re: Bug in GridCacheWriteBehindStore

2021-06-11 Thread Ilya Korol
I've created a Jira for this issue 
https://issues.apache.org/jira/browse/IGNITE-14898
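For reference, the failure mode from the report below can be reproduced with plain JDK arithmetic; this is only an illustration of the problem, not Ignite's actual code:

```java
public class FlusherIndexDemo {
    // Index computation in the spirit of the reported bug: plain '%'
    // yields a negative result when the key's hashCode() is negative.
    static int naiveIndex(Object key, int threadCount) {
        return key.hashCode() % threadCount;
    }

    // Safe variant: Math.floorMod always returns a value in [0, threadCount).
    static int safeIndex(Object key, int threadCount) {
        return Math.floorMod(key.hashCode(), threadCount);
    }

    public static void main(String[] args) {
        String key = "polygenelubricants"; // known to hash to Integer.MIN_VALUE
        // With a non-power-of-2 thread count, the naive index is negative here
        // and would trigger ArrayIndexOutOfBoundsException when used as an index.
        System.out.println(naiveIndex(key, 3));
        System.out.println(safeIndex(key, 3));
    }
}
```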


10.06.2021 18:56, Ilya Kasnacheev wrote:

Hello!

I guess so. I think you should file a ticket against Ignite JIRA.

Regards,
--
Ilya Kasnacheev


Wed, 9 Jun 2021 at 20:26, gigabot:


There's a bug in GridCacheWriteBehindStore in the flusher method.


https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStore.java#L674



The logic states there that if flush thread count is not a power
of 2,  then
perform some math that is not guaranteed to return a positive
number. For
example, if you pass this string as a key it returns a negative
number:
 accb2e8ea33e4a89b4189463cacc3c4e

and then throws an array out of bounds exception when looking up
the thread.

I'm surprised this bug has been there so long, I guess people are not
setting their thread counts to non-powers-of-2.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Write Behind slowing down performance greatly

2021-06-07 Thread Ilya Korol
Yes, you're right, there are no remote listeners. I meant: do you use only a 
local listener, or have you also configured remote filters? (Excuse me for not being 
clear enough.)

Did you configure backups for the cache partitions? If so, on which node are 
you listening for events (primary or backup)?

Try initializing event listeners on both nodes. How big is the time 
skew between the CACHE PUT event appearing on the primary and the backup node?



08.06.2021 06:23, gigabot wrote:

Hi I'm using a local listener ( Is there any other type?). I can't share a
reproducer because of workplace restrictions. I am seeing data that was put
via another node, so I'm not clear what you mean about remote node puts not
being visible.

The documentation here
https://ignite.apache.org/docs/latest/key-value-api/continuous-queries only
mentions local listeners and remote filters, is there a concept of a remote
listener?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Write Behind slowing down performance greatly

2021-06-07 Thread Ilya Korol
What kind of listener did you use? Could you share your configuration or a 
link to a reproducer?

I've checked the write-behind feature against a MySQL instance and didn't find any 
performance loss: I receive cache put events instantly via the registered 
listener, and they appear in the DB with the configured time skew/batch size 
(the Ignite cluster and the MySQL server were on the same computer).

Note that if you use a local listener, cache PUT operations that happen on a 
remote node (due to the affinity function distribution) will not be visible 
on the local node.

Concerning the flush buffer location: it is kept in heap memory. Check out the 
*org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore* 
class.

This component is responsible for spreading data among so-called 
flushers. A flusher is a worker-like component that holds a queue of data 
chunks prepared for transmission to the target storage (e.g. MySQL).
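To make the flusher idea concrete, here is a heavily simplified, single-threaded sketch of the queue-and-batch pattern (an illustration only, not Ignite's actual implementation, which runs flushers on dedicated worker threads):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FlusherSketch {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<List<String>> flushedBatches = new ArrayList<>();
    private final int batchSize;

    public FlusherSketch(int batchSize) {
        this.batchSize = batchSize;
    }

    // Cache writes land in the flusher's queue first.
    public void enqueue(String entry) {
        queue.add(entry);
    }

    // Periodically drain up to batchSize entries and "write" them to the store.
    public void flushOnce() {
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch, batchSize);
        if (!batch.isEmpty())
            flushedBatches.add(batch); // stand-in for a batched JDBC write
    }

    public List<List<String>> flushedBatches() {
        return flushedBatches;
    }
}
```

This is why listeners can see a put long before the row shows up in the database: the entry sits in the flusher queue until the flush frequency or batch size threshold is hit.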




04.06.2021 01:32, gigabot wrote:

  I observed that turning on writeThrough with writeBehind slows down
performance(i.e. time data reaches listeners) greatly. I tried setting the
flush frequency to 0 and the flush size to 1 billion, and so the flushes are
not even happening yet, and still the performance is slow. I am wondering
where the slowdown is occurring? Where is the flush buffer located
(heap/off-heap/disk/etc.) and it is configurable?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to deal with data expiration time flexibly

2021-05-18 Thread Ilya Korol
Hi, AFAIK there is unfortunately no way to implement a custom 
ExpiryPolicy that relies on a cache entry value field for the expiry 
calculation, because there is no hook for intercepting the value during 
expiry policy creation or its calls.

But you can use a slightly hacky trick with a cache proxy, like:

public IgniteCache<Long, Value> put(IgniteCache<Long, Value> cache, Long 
key, Value val) {
    // Per-entry TTL is taken from a field of the value (getId() in this example).
    ExpiryPolicy expiryPolicy = new TouchedExpiryPolicy(
        new Duration(TimeUnit.MILLISECONDS, val.getId()));
    IgniteCache<Long, Value> cacheProxy = 
        cache.withExpiryPolicy(expiryPolicy);

    cacheProxy.put(key, val);
    return cacheProxy;
}

IgniteCache<Long, Value> cache = ignite.cache("myCache");

for (long i = 0; i < 10_000L; i++) {
    Value val = new Value(i, "value-" + i);
    cache = put(cache, i, val);
}


17.05.2021 14:09, 38797715 wrote:


Hello team,

At present, only a few simple expiration policies can be configured, 
such as CreatedExpiryPolicy.


If we want to use a field value to determine the expiration time of the 
data, what should we do? Or what interface is there to extend?