Re: Requesting mapping from grid failed issue with Ignite 2.9.0 and C# model with ICloneable interface but same was working with Ignite 2.8.1

2021-01-04 Thread Ilya Kazakov
Hello, Charlin!

Try to enable peer class loading:
https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading
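For reference, a minimal sketch of what enabling it looks like (my illustration,
not part of the original reply). Programmatically on a Java node:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartWithPeerClassLoading {
    public static void main(String[] args) {
        // Peer class loading must be enabled on all nodes of the cluster.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setPeerClassLoadingEnabled(true);

        Ignition.start(cfg);
    }
}

The same flag can be set in the Spring XML the client already passes via
SpringConfigUrl, as <property name="peerClassLoadingEnabled" value="true"/>.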

--
Ilya Kazakov

Tue, 5 Jan 2021 at 14:52, Charlin S :

> Hi,
>
> I'm getting an exception when a new record is added; this was working
> until Ignite 2.8.1. The issue appears with Ignite 2.9.0 and 2.9.1.
> [...]

Requesting mapping from grid failed issue with Ignite 2.9.0 and C# model with ICloneable interface but same was working with Ignite 2.8.1

2021-01-04 Thread Charlin S
Hi,

I'm getting an exception when a new record is added; this was working until
Ignite 2.8.1. The issue appears with Ignite 2.9.0 and 2.9.1.
The C# code is below:
using System;
using System.IO;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache;
using Apache.Ignite.Core.Cache.Configuration;
using Apache.Ignite.Core.Cache.Expiry;

public class Program
{
    static void Main()
    {
        A a = new A();
    }
}

public class A
{
    public A()
    {
        IgniteConfiguration igniteGridIg = new IgniteConfiguration();
        igniteGridIg.AutoGenerateIgniteInstanceName = true;
        igniteGridIg.IgniteHome = @"D:\Software\apache-ignite-2.9.1-bin";
        igniteGridIg.SpringConfigUrl = Path.Combine(@"D:\IgniteConfig\",
            "common_dynamiccache_client_config_2.9.1.xml");
        igniteGridIg.ConsistentId = Guid.NewGuid().ToString().ToUpper();

        IIgnite StaticGrid_Dev = Ignition.Start(igniteGridIg);
        TestModel29WithICloneable model = new TestModel29WithICloneable();

        model.TestField1 = "11";
        model.TestField2 = "22";

        ICache<string, object> TestModel29WithICloneableICache = null;
        CacheConfiguration cgTest = new CacheConfiguration(
            "TestModel29WithICloneable",
            new QueryEntity(typeof(string), typeof(TestModel29WithICloneable)));
        cgTest.CopyOnRead = false;
        cgTest.EagerTtl = true;
        cgTest.Backups = 1;

        var cacheName = StaticGrid_Dev.GetOrCreateCache<string, object>(cgTest)
            .WithExpiryPolicy(new ExpiryPolicy(
                TimeSpan.FromSeconds(3600),
                TimeSpan.FromSeconds(3600),
                TimeSpan.FromSeconds(3600)));

        cacheName.Put("TestModel29WithICloneable:Test|0100010test2", model);

        //Console.Write(cacheName);
        Ignition.StopAll(true);
    }
}


Model class:
using System;
using Apache.Ignite.Core.Binary;

public class TestModel29WithICloneable : ICloneable, IBinarizable
{
    public TestModel29WithICloneable Copy()
    {
        return (TestModel29WithICloneable)this.MemberwiseClone();
    }

    public object Clone()
    {
        var clone = this.MemberwiseClone();
        return clone;
    }

    public string TestField1 { get; set; }
    public string TestField2 { get; set; }
    public string TestField3 { get; set; }

    public void ReadBinary(IBinaryReader reader)
    {
        if (reader != null)
        {
            TestField1 = reader.ReadString("testfield1");
            TestField2 = reader.ReadString("testfield2");
            TestField3 = reader.ReadString("testfield3");
        }
    }

    public void WriteBinary(IBinaryWriter writer)
    {
        if (writer != null)
        {
            writer.WriteString("testfield1", TestField1);
            writer.WriteString("testfield2", TestField2);
            writer.WriteString("testfield3", TestField3);
        }
    }
}

Result:
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException:
Requesting mapping from grid failed for [platformId=0, typeId=1876507903]
  at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1270)
  at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:2083)
  at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1319)
  at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:856)
  at org.apache.ignite.internal.processors.platform.cache.PlatformCache.processInStreamOutLong(PlatformCache.java:839)
  at org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutLong(PlatformTargetProxyImpl.java:67)
Caused by: class org.apache.ignite.IgniteCheckedException: Requesting
mapping from grid failed for [platformId=0, typeId=1876507903]
  at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7563)
  at org.apache.ignite.internal.processors.cache.GridCacheContext.validateKeyAndValue(GridCacheContext.java:1910)
  at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapSingleUpdate(GridNearAtomicSingleUpdateFuture.java:555)
  at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:457)
  at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:446)
  at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:249)
  at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1178)
  at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:626)
  at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2567)
  at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2544)
  at o

Re: not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence

2021-01-04 Thread xmw45688
Hi Ilya,

Thanks for your guidance, and happy new year! Sorry for the late follow-up.

You are right, Ignite Native Persistence does track changes in the class
definitions. But the configuration that loads data from the cache into the
Cassandra store is kept in an XML configuration file, which is passed at
runtime (see the example configuration below). When the Ignite server is
stopped, the cache configuration is gone; it is not stored anywhere, as far
as I understand. When Ignite is restarted, the new XML is passed, and there
is no configuration for the old class definition in the XML configuration.

My question: how/where does the Ignite server get/read the old class
definition if only the new class definition is provided in the most recent
XML config file?

So what I said is that the Cassandra store implementation may not need to
change. It's the caller that reads the XML configuration and stores the
data in the Cassandra store via the Ignite cache. If I turn off Ignite
native persistence ("persistenceEnabled" value="false"), then the Ignite
server uses the XML configuration file passed at runtime, and the Ignite
cache has all the new class definitions.

I'd like your help to guide me in making the changes so that the XML
configuration is read at runtime instead of the previously cached
configuration.

[Example Spring XML cache configuration; the markup was stripped by the mail
archive. The surviving fragments show a QueryEntity with key type
java.lang.String and value type com.procurant.catalog.entity.Uom.]

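
For reference, a hedged code-form sketch of the native-persistence switch
discussed above (the thread's actual setting lives in the Spring XML; this
Java equivalent is my illustration, not the poster's configuration):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceToggle {
    static IgniteConfiguration withoutNativePersistence() {
        return new IgniteConfiguration()
            .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
                    // false: caches live only in memory, so (per the poster's
                    // observation) configuration comes from the XML at startup.
                    .setPersistenceEnabled(false)));
    }
}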
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error Codes

2021-01-04 Thread Michael Cherkasov
Hi Ilya,

It's about logs only; I don't think we need this at the API level. Error
codes will make solutions more searchable.
Plus, we can build troubleshooting guides based on them, which will help us
gather information from the user list and StackOverflow.

Even a solution for trivial cases will be helpful. Once I was asked to join
a call late in the evening because Ignite had failed to copy a WAL file, and
there was simply no space left on the disk.
While the error was obvious to me, it's not obvious to all users.

Let's start with something simple: just assign error codes to absolutely
all exceptions first. Then, within a year or two, the user list will be full
of error codes and solutions for them.
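
For illustration only, a hypothetical shape for such a coded exception
(IgniteCodedException and the "IGN-0042" code are inventions for this sketch;
nothing like this exists in Ignite today):

public class IgniteCodedException extends RuntimeException {
    private final String code;

    public IgniteCodedException(String code, String msg, Throwable cause) {
        // Prefix the message with the code so log lines become googleable.
        super("[" + code + "] " + msg, cause);
        this.code = code;
    }

    public String code() { return code; }
}

// Usage sketch:
// throw new IgniteCodedException("IGN-0042",
//     "Failed to copy WAL file: no space left on device", e);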

Maybe this is a change for Ignite 3.0? @Val, I think you can help with this
question.

Any thoughts/comments?

Thanks,
Mike.

Sat, 2 Jan 2021 at 12:18, Ilya Kasnacheev :

> Hello!
>
> I don't think there's a direct link between an exception thrown in the
> depths of Ignite code and the specific error that may be reported to the
> user.
>
> A notorious example is CorruptedTreeException, which is known to be thrown
> due to an incorrect field type in a binary object or a bad SQL cast. So we
> could document it as: "If you get the IGN13 error, this means your
> persistence is corrupted beyond repair. This, or you have a typo in your
> SQL." Of course that will not help anyone.
>
> This means we can't get to the desired result by applying step 1 alone.
>
> There has got to be a different plan. First of all, we need to decide what
> our target is. Is it the log, or is it the API?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Fri, 1 Jan 2021 at 02:07, Michael Cherkasov :
>
>> Hi folks,
>>
>> I was thinking about how we can simplify Ignite cluster troubleshooting.
>> Best of course is if the cluster can do self-healing, like transaction
>> cancellation if a tx blocks exchange, or node restart on an OOM error.
>> However, sometimes those mechanisms don't work well or user interaction is
>> required. Not all errors are obvious to users, and it's not clear what
>> actions are required to restore the cluster.
>> If you google exceptions or error messages, the results can be ambiguous
>> and uncertain, because different errors can have similar exceptions and you
>> need to analyze the stack trace to distinguish them. So googling isn't a
>> straightforward and easy process in this case.
>> Almost all major DBs have error codes [1][2][3].
>> Let's do the same for Ignite: error codes are easy to google, so the
>> user/dev list will be significantly more useful. We can have documentation
>> with an error code registry and solutions for the errors.
>>
>> To implement this we need to do the following:
>> 1. All error messages/exceptions must have a unique error code (so no new
>> PR may be accepted if any exceptions/errors lack error codes).
>> 2. To avoid error code duplication, all error codes will be stored as
>> files under some folder.
>> 3. Those files can be a source of documentation for each error code.
>>
>> All these files can be empty at first, but later, if an exception appears
>> on the user list and someone finds a solution, then first, other people can
>> easily google it by error code, and second, we can build documentation for
>> the error code based on the user-list thread/StackOverflow/other sources.
>>
>> Any thoughts?
>>
>> [1] MySQL
>> https://dev.mysql.com/doc/refman/8.0/en/error-message-elements.html
>> [2] Oracle DB https://docs.oracle.com/pls/db92/db92.error_search
>> [3] PostgreSQL https://www.postgresql.org/docs/10/errcodes-appendix.html
>>
>> Thanks,
>> Mike.
>>
>


Re: [ANNOUNCE] Apache Ignite 2.9.1 Released

2021-01-04 Thread Yaroslav Molochkov
Hello!

Yeah, I think Ilya is completely right, and there's transitive
compatibility in your case.

On Mon, Jan 4, 2021 at 3:20 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Don't see why you can't upgrade 2.7.6 -> 2.9.1 straight.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Mon, 4 Jan 2021 at 15:17, ihalilaltun :
>
>> Hi Yaroslav,
>>
>> We are at v2.7.6 and want to upgrade to the latest version. Do you have
>> any directives for this kind of upgrade methodology?
>> Can we upgrade to the latest version without any problem by default, or
>> should we upgrade version by version?
>>
>> 2.7.6 -> 2.8 -> 2.8.1 -> 2.9 then 2.9.1
>>
>> thanks
>>
>>
>>
>> -
>> İbrahim Halil Altun
>> Senior Software Engineer @ Segmentify
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: [ANNOUNCE] Apache Ignite 2.9.1 Released

2021-01-04 Thread Ilya Kasnacheev
Hello!

Don't see why you can't upgrade 2.7.6 -> 2.9.1 straight.

Regards,
-- 
Ilya Kasnacheev


Mon, 4 Jan 2021 at 15:17, ihalilaltun :

> Hi Yaroslav,
>
> We are at v2.7.6 and want to upgrade to the latest version. Do you have
> any directives for this kind of upgrade methodology?
> Can we upgrade to the latest version without any problem by default, or
> should we upgrade version by version?
>
> 2.7.6 -> 2.8 -> 2.8.1 -> 2.9 then 2.9.1
>
> thanks
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: [ANNOUNCE] Apache Ignite 2.9.1 Released

2021-01-04 Thread ihalilaltun
Hi Yaroslav,

We are at v2.7.6 and want to upgrade to the latest version. Do you have any
directives for this kind of upgrade methodology?
Can we upgrade to the latest version without any problem by default, or
should we upgrade version by version?

2.7.6 -> 2.8 -> 2.8.1 -> 2.9 then 2.9.1

thanks



-
İbrahim Halil Altun
Senior Software Engineer @ Segmentify
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Critical Workers Health Check on client side

2021-01-04 Thread ihalilaltun
hi there,

I am curious about whether we can somehow manage the *Critical Workers
Health Check* on the client side. What I need to do is catch the critical
workers health check results on the client side; can this be done by
implementing a custom StopNodeOrHaltFailureHandler on the client side?

We are on Ignite v2.7.6.
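
For illustration, a hedged sketch of registering a custom failure handler on
a thick client node's configuration. Whether it surfaces the server-side
health-check results asked about here is exactly the open question; a handler
only observes failures detected on the node it is configured on:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.FailureContext;
import org.apache.ignite.failure.FailureHandler;

public class ClientWithFailureHandler {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true)
            .setFailureHandler(new FailureHandler() {
                @Override public boolean onFailure(Ignite ignite, FailureContext ctx) {
                    // React to a critical failure detected on this node.
                    System.err.println("Critical failure: " + ctx);
                    return false; // false = do not stop the node
                }
            });

        Ignition.start(cfg);
    }
}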

thanks



-
İbrahim Halil Altun
Senior Software Engineer @ Segmentify
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Service grid performance downgrade after enabling persistence

2021-01-04 Thread Ilya Kasnacheev
Hello!

I don't think this should happen. What do thread dumps show?

Regards,
-- 
Ilya Kasnacheev


Thu, 31 Dec 2020 at 07:47, xingjl6280 :

> Hi team,
>
> Just wondering what happens behind a remote service call.
> Is there anything persisted for a service call, or is the service call
> actually based on the in-memory grid?
>
> thank you
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Feature request: On demand thread dumps from Ignite

2021-01-04 Thread Ilya Kasnacheev
Hello!

I guess you could find a way to run ThreadDumpPrinterTask or better yet a
VisorThreadDumpTask.

I guess we should have thread dumping in the upcoming ignitectl.
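
For illustration, a hedged sketch of the compute-API route Zhenya mentions
below: broadcast a closure that dumps every thread's stack on each node. The
closure is my own, not one of the Visor tasks named above, and its output
goes to each node's stdout/log rather than back to the caller:

import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteRunnable;

public class RemoteThreadDump {
    static void dumpAllNodes(Ignite ignite) {
        ignite.compute().broadcast((IgniteRunnable)() -> {
            // Print every thread's stack trace on the node running this closure.
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                System.out.println(e.getKey());
                for (StackTraceElement el : e.getValue())
                    System.out.println("\tat " + el);
            }
        });
    }
}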

Can you file a ticket?

Regards,
-- 
Ilya Kasnacheev


Tue, 29 Dec 2020 at 10:42, Raymond Wilson :

> Hi Zhenya,
>
> We use the Ignite C# client (deployed in a .NET Core implementation using
> containers on AWS EKS), so it is hard for us to run Java closures from the
> C# client, which is why a client interface capability would be useful!
>
> Thanks,
> Raymond.
>
>
> On Tue, Dec 29, 2020 at 8:38 PM Zhenya Stanilovsky 
> wrote:
>
>>
>> You can call it through the compute API [1], I suppose.
>>
>> [1]
>> https://ignite.apache.org/docs/latest/distributed-computing/distributed-computing
>>
>>
>> Many of the discussion threads here generate a request for a Java Ignite
>> thread dump to help triage an issue.
>>
>> This is not difficult to do with command line Java tooling if you can
>> easily access the server running the node. However, access to those nodes
>> may not be simple (especially in production) and requires hands-on manual
>> intervention to produce.
>>
>> There does not seem to be a way for an Ignite client (e.g. the C# client
>> we use in our implementation) to ask the local Ignite node to dump the
>> thread state to the log based on conditions the client itself may
>> determine.
>>
>> If this is actually the case then please point me at it :) Otherwise, is
>> this something worth adding to the backlog?
>>
>> Thanks,
>> Raymond.
>>
>> --
>> Raymond Wilson
>> Solution Architect, Civil Construction Software Systems (CCSS)
>> 11 Birmingham Drive | Christchurch, New Zealand
>> +64-21-2013317 Mobile
>> raymond_wil...@trimble.com
>>
>
> --
> Raymond Wilson
> Solution Architect, Civil Construction Software Systems (CCSS)
> 11 Birmingham Drive | Christchurch, New Zealand
> +64-21-2013317 Mobile
> raymond_wil...@trimble.com
>


Re: scanquery is not working, however SQL select and retrieving a single records works fine

2021-01-04 Thread Ilya Kasnacheev
Hello!

You could try to pinpoint the specific cache partition by running
per-partition scan queries on the cache. Then you could share the
part-NNN.bin file for the problematic partition with us so that we could
check it.
This assumes that you have persistence. If you don't, it may be easier to
drop and recreate the cache.
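
A hedged sketch of what that per-partition scan could look like (cache name
and Object/Object value types are placeholders; on the thin client,
ScanQuery supports setPartition the same way):

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class PartitionScanCheck {
    static void findBadPartition(Ignite ignite, String cacheName) {
        IgniteCache<Object, Object> cache = ignite.cache(cacheName).withKeepBinary();
        int parts = ignite.affinity(cacheName).partitions();

        for (int p = 0; p < parts; p++) {
            try (QueryCursor<Cache.Entry<Object, Object>> cur =
                     cache.query(new ScanQuery<>().setPartition(p))) {
                cur.forEach(e -> { /* drain the cursor */ });
            }
            catch (Exception e) {
                // The failing partition's part-NNN.bin is the one to share.
                System.out.println("Partition " + p + " failed: " + e);
            }
        }
    }
}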

Regards,
-- 
Ilya Kasnacheev


Thu, 31 Dec 2020 at 10:20, Naveen :

> Hi,
>
> A scan query on the cache is not working, but a simple GET with a key and
> a SELECT statement on the SQL console work fine. This is the error I get
> on the client side:
>
> Ignite cluster is unavailable
> [sock=14555e0a[TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384:
> Socket[addr=XXX.XXX.com/XX.XXX.10.65,port=10800,localport=39968]
> ]]
>
>
> And in the server logs, I see the below error:
>
> [2020-12-31 10:38:03,395][ERROR][client-connector-#79][ClientListenerNioListener] Failed to process client request
> [req=o.a.i.i.processors.platform.client.cache.ClientCacheScanQueryRequest@18427c6f]
> java.util.NoSuchElementException
> at org.apache.ignite.internal.util.GridCloseableIteratorAdapter.nextX(GridCloseableIteratorAdapter.java:39)
> at org.apache.ignite.internal.util.lang.GridIteratorAdapter.next(GridIteratorAdapter.java:35)
> at org.apache.ignite.internal.processors.cache.AutoClosableCursorIterator.next(AutoClosableCursorIterator.java:59)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryCursor.writePage(ClientCacheQueryCursor.java:78)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryResponse.encode(ClientCacheQueryResponse.java:51)
> at org.apache.ignite.internal.processors.platform.client.ClientMessageParser.encode(ClientMessageParser.java:406)
> at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:210)
> at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:49)
> at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
>
> This is the code (the generic type arguments were stripped by the mail
> archive; Object/Object is a neutral reconstruction):
>
> ClientCache<Object, Object> cache =
>     ignite.cache(cacheName).withKeepBinary();
> try (QueryCursor<Cache.Entry<Object, Object>> cursor =
>          cache.query(new ScanQuery<>())) {
>     for (Cache.Entry<Object, Object> entry : cursor) {
>
> It does work on our Dev cluster, though it is not working on UAT. What
> could be the issue?
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Re[4]: Questions related to check pointing

2021-01-04 Thread Ilya Kasnacheev
Hello!

I guess it's pool.pages() * 3L / 4, since, counter-intuitively, the default
ThrottlingPolicy is not ThrottlingPolicy.DISABLED; it's
CHECKPOINT_BUFFER_ONLY.
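
For context, a hedged sketch of the public switch behind this behavior. The
mapping from writeThrottlingEnabled to the internal ThrottlingPolicy
(false -> CHECKPOINT_BUFFER_ONLY, true -> speed-based throttling) is my
reading of the source, not documented API:

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ThrottlingConfig {
    static IgniteConfiguration withSpeedBasedThrottling() {
        return new IgniteConfiguration()
            .setDataStorageConfiguration(new DataStorageConfiguration()
                // false (the default) still leaves checkpoint-buffer-only
                // throttling in place, as noted above.
                .setWriteThrottlingEnabled(true));
    }
}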

Regards,

-- 
Ilya Kasnacheev


Thu, 31 Dec 2020 at 04:33, Raymond Wilson :

> Regarding this section of code:
>
> maxDirtyPages = throttlingPlc != ThrottlingPolicy.DISABLED
>     ? pool.pages() * 3L / 4
>     : Math.min(pool.pages() * 2L / 3, cpPoolPages);
>
> I think the correct ratio will be 2/3 of pages, as we do not have a
> throttling policy defined, correct?
>
> On Thu, Dec 31, 2020 at 12:49 AM Zhenya Stanilovsky 
> wrote:
>
>> The relevant code runs from here:
>>
>> if (checkpointReadWriteLock.getReadHoldCount() > 1 ||
>>     safeToUpdatePageMemories() || checkpointer.runner() == null)
>>     break;
>> else {
>>     CheckpointProgress pages = checkpointer.scheduleCheckpoint(0, "too many dirty pages");
>>
>> and nearby you can see that:
>>
>> maxDirtyPages = throttlingPlc != ThrottlingPolicy.DISABLED
>>     ? pool.pages() * 3L / 4
>>     : Math.min(pool.pages() * 2L / 3, cpPoolPages);
>>
>> Thus, if 3/4 of the pages of the whole DataRegion are dirty, this
>> checkpoint will be raised.
>>
>>
>> In (
>> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood),
>> there is a mention of a dirty pages limit that is a factor that can
>> trigger checkpoints.
>>
>> I also found this issue:
>> http://apache-ignite-users.70518.x6.nabble.com/too-many-dirty-pages-td28572.html
>> where "too many dirty pages" is a reason given for initiating a checkpoint.
>>
>> After reviewing our logs I found this: (one example)
>>
>> 2020-12-15 19:07:00,999 [106] INF [MutableCacheComputeServer] Checkpoint
>> started [checkpointId=e2c31b43-44df-43f1-b162-6b6cefa24e28,
>> startPtr=FileWALPointer [idx=6339, fileOff=243287334, len=196573],
>> checkpointBeforeLockTime=99ms, checkpointLockWait=0ms,
>> checkpointListenersExecuteTime=16ms, checkpointLockHoldTime=32ms,
>> walCpRecordFsyncDuration=113ms, writeCheckpointEntryDuration=27ms,
>> splitAndSortCpPagesDuration=45ms, pages=33421, reason='too many dirty
>> pages']
>>
>> This suggests we may have the issue where writes are frozen until the
>> checkpoint is completed.
>>
>> Looking at the AI 2.8.1 source code, the dirty page limit fraction
>> appears to be 0.1 (10%), via this entry
>> in GridCacheDatabaseSharedManager.java:
>>
>> /**
>>  * Threshold to calculate limit for pages list on-heap caches.
>>  * <p>
>>  * Note: When a checkpoint is triggered, we need some amount of page memory to store pages list on-heap cache.
>>  * If a checkpoint is triggered by "too many dirty pages" reason and pages list cache is rather big, we can get
>>  * {@code IgniteOutOfMemoryException}. To prevent this, we can limit the total amount of cached page list buckets,
>>  * assuming that checkpoint will be triggered if no more then 3/4 of pages will be marked as dirty (there will be
>>  * at least 1/4 of clean pages) and each cached page list bucket can be stored to up to 2 pages (this value is not
>>  * static, but depends on PagesCache.MAX_SIZE, so if PagesCache.MAX_SIZE > PagesListNodeIO#getCapacity it can take
>>  * more than 2 pages). Also some amount of page memory needed to store page list metadata.
>>  */
>> private static final double PAGE_LIST_CACHE_LIMIT_THRESHOLD = 0.1;
>>
>> This raises two questions:
>>
>> 1. The data region where most writes are occurring has 4Gb allocated to
>> it, though it is permitted to start at a much lower level. 4Gb should be
>> 1,000,000 pages, 10% of which should be 100,000 dirty pages.
>>
>> The 'limit holder' is calculated like this:
>>
>> /**
>>  * @return Holder for page list cache limit for given data region.
>>  */
>> public AtomicLong pageListCacheLimitHolder(DataRegion dataRegion) {
>>     if (dataRegion.config().isPersistenceEnabled()) {
>>         return pageListCacheLimits.computeIfAbsent(dataRegion.config().getName(), name -> new AtomicLong(
>>             (long)(((PageMemoryEx)dataRegion.pageMemory()).totalPages() * PAGE_LIST_CACHE_LIMIT_THRESHOLD)));
>>     }
>>
>>     return null;
>> }
>>
>> ... but I am unsure whether totalPages() refers to the current size of
>> the data region or the size it is permitted to grow to. I.e., could the
>> 'dirty page limit' be a sliding limit based on the growth of the data
>> region? Is it better to set the initial and maximum sizes of data regions
>> to the same number?
>>
>> 2. We have two data regions, one supporting inbound arrival of data (with
>> low numbers of writes), and one supporting storage of processed results
>> from the arriving data (with many more writes).
>>
>> The block on writes due to the number of dirty pages appears to affect
>> all data regions, not just the one which has violated the dirty page limit.
>> Is that correct? If so, is this somethi

Re: scanquery is not working, however SQL select and retrieving a single records works fine

2021-01-04 Thread Naveen
Any suggestions on this issue?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/