Hi Andrey,

The reason I was trying to explore this feature is the following.

Let me give you an example. I have a cache with 20M records, and when I run
this query
SELECT * FROM "AssociatedPartiesCache".ASSOCIATED_PARTIES

the query took more than 200 seconds and ran out of memory. I had tried this
with lazy=true, so I assume the syntax and usage are correct:
jdbc:ignite:thin://10.144.114.113?lazy=true

Here is the error that was thrown:

SQL Error [50000]: javax.cache.CacheException: Failed to run map query
remotely.Failed to execute map query on the node:
ef5b4e7d-3423-4b84-8427-0491cd13f6c4, class
org.apache.ignite.IgniteCheckedException:Failed to execute SQL query. Out of
memory.; SQL statement:
SELECT
__Z0.ASSOCIATED_PARTY_ID __C0_0,
__Z0.WALLETID __C0_1,
__Z0.UPDATEDDATETIME __C0_2,
__Z0.UPDATEDBY __C0_3,
__Z0.PARTY_ID __C0_4
FROM "AssociatedPartiesCache".ASSOCIATED_PARTIES __Z0 [90108-195]

In spite of lazy=true, I still ran into this issue.
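
For reference, this is roughly how I issue the query over the thin JDBC
driver with lazy result streaming enabled (the class name and the row
handling inside the loop are just placeholders, not our real code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LazyScan {
    public static void main(String[] args) throws Exception {
        // lazy=true asks the server to stream the result set instead of
        // materializing it all at once.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:ignite:thin://10.144.114.113?lazy=true");
             Statement stmt = conn.createStatement()) {

            stmt.setFetchSize(1024); // pull rows from the server in small pages

            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM \"AssociatedPartiesCache\".ASSOCIATED_PARTIES")) {
                while (rs.next()) {
                    // process one row at a time; nothing here keeps all 20M rows in memory
                    String partyId = rs.getString("ASSOCIATED_PARTY_ID");
                    System.out.println(partyId);
                }
            }
        }
    }
}

My expectation was that with lazy=true the map nodes would not need to hold
the whole result in heap, but that is not what I observed.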

Here is how I am thinking about a possible solution to this issue.

Long-running queries should not exhaust server RAM. Ignite should run the
query against native persistence, that is, on disk, so that a query like the
one I am running cannot bring the cluster down. I am fine with the query
taking a couple of minutes to execute, but ultimately it should not disturb
the cluster.
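
By native persistence I mean a data region configured along these lines (the
region name and max size below are only illustrative, not our actual
settings), where the full data set lives on disk and only a bounded amount of
RAM is used as a page cache:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistentNode {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("persistent_region");        // illustrative name
        region.setPersistenceEnabled(true);         // keep the full data set on disk
        region.setMaxSize(4L * 1024 * 1024 * 1024); // bound the off-heap page cache at 4 GB

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // a persistent cluster must be activated before use
    }
}

Since the data itself can already overflow to disk, I would expect the SQL
engine to be able to keep a long query's working set on disk as well instead
of building the whole result in heap.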

After this error, the cluster stopped working, and we had to restart it to
get it back. I believe there are definitely ways to overcome this issue. We
have billions of records in some of the tables, and if someone unknowingly
runs a query like this on those tables, it should not bring down the cluster
abruptly.

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
