Re: Node requires maintenance, non-empty set of maintenance tasks is found - node is not coming up

2024-05-29 Thread Naveen Kumar
Thanks very much for your prompt response Gianluca

just for the community: I could solve this by running control.sh with
reset_lost_partitions for the individual caches.
It looks like it worked and the partition issue is resolved; I suppose there
wouldn't be any data loss, as we have configured all our caches with 2 replicas.

Coming to the node that was not joining the cluster earlier: I removed it
from the baseline --> cleared its persistence store --> brought the node up
--> added it back to the baseline. This also seems to have worked fine.
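For anyone finding this thread later, the recovery sequence described above looks roughly like the following with control.sh. This is a sketch, not a verbatim transcript of what was run: the cache names, consistent ID, and persistence path are placeholders, and flags should be checked against `control.sh --help` for your Ignite version.

```shell
# 1. Reset lost partitions for the affected caches (comma-separated list).
./control.sh --cache reset_lost_partitions myCache1,myCache2

# 2. For the node that would not rejoin: remove it from the baseline
#    topology, wipe its persistence files, restart it, then re-add it.
./control.sh --baseline remove nodeConsistentId --yes
rm -rf /opt/ignite/work/db/nodeConsistentId   # persistence store; path illustrative
# ...start the node again, then:
./control.sh --baseline add nodeConsistentId --yes
```

With 2 backups per cache, re-adding the cleaned node triggers a rebalance that restores its data from the surviving copies, which is why no data loss was observed.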

Thanks


On Wed, May 29, 2024 at 5:13 PM Gianluca Bonetti 
wrote:

> Hello Naveen
>
> Apache Ignite 2.13 is more than 2 years old (25 months, in fact).
> Three bugfix releases have been rolled out since, up to the 2.16 release.
>
> It seems you are restarting your cluster on a regular basis, so you'd
> better upgrade to 2.16 as soon as possible.
> Otherwise it will also be very difficult for people on a community-based
> mailing list, on volunteer time, to work out a solution for a version that
> is 2 years old.
>
> Besides that, you are not providing very much information about your
> cluster setup.
> How many nodes, what infrastructure, how many caches, overall data size.
> One could only guess you have more than 1 node running, with at least 1
> cache, and non-empty dataset. :)
>
> This document from GridGain may be helpful; I don't see an equivalent for
> Ignite, but it may still be worth checking out.
>
> https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/maintenance-mode
>
> On the other hand you should also check your failing node.
> If it is always the same node failing, then there is likely some root
> cause apart from Ignite.
> Indeed, if the node configuration is the same across all nodes and just
> this one fails, you should also consider network issues (check
> connectivity and network latency between nodes) and hardware issues
> (faulty disks, faulty memory).
> In the end, one option might be to replace the faulty machine with a brand
> new one.
> In cloud environments this is actually quite cheap and easy to do.
>
> Cheers
> Gianluca
>
> On Wed, 29 May 2024 at 08:43, Naveen Kumar 
> wrote:
>
>> Hello All
>>
>> We are using Ignite 2.13.0
>>
>> After a cluster restart, one of the nodes is not coming up, and in its
>> logs we see this error: Node requires maintenance, non-empty set of
>> maintenance tasks is found.
>>
>> We are also getting errors like "timeout is reached before computation is
>> completed" on other nodes.
>>
>> I could see that the control.sh script can back up and clean up the
>> corrupted files, but when I run the command, it fails.
>>
>> I removed the node from the baseline and tried running it again, but it
>> still fails.
>>
>> What could be the solution for this? The cluster is functioning, but
>> some requests are failing.
>>
>> Is there any way we can start the Ignite node in maintenance mode and try
>> running the clean-corrupted commands?
>>
>> Thanks
>> Naveen
>>
>>
>>

-- 
Thanks & Regards,
Naveen Bandaru


Node requires maintenance, non-empty set of maintenance tasks is found - node is not coming up

2024-05-29 Thread Naveen Kumar
Hello All

We are using Ignite 2.13.0

After a cluster restart, one of the nodes is not coming up, and in its logs
we see this error: Node requires maintenance, non-empty set of
maintenance tasks is found.

We are also getting errors like "timeout is reached before computation is
completed" on other nodes.

I could see that the control.sh script can back up and clean up the
corrupted files, but when I run the command, it fails.

I removed the node from the baseline and tried running it again, but it
still fails.

What could be the solution for this? The cluster is functioning, but some
requests are failing.

Is there any way we can start the Ignite node in maintenance mode and try
running the clean-corrupted commands?
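For reference, Ignite does have such a maintenance flow. The sketch below shows roughly what it looks like; the host is a placeholder and the exact flags should be verified with `control.sh --help` on the version in use, since the `--persistence` commands were introduced in later 2.x releases.

```shell
# A node with a pending maintenance task enters maintenance mode at startup
# instead of joining the topology. Pointing control.sh at that node:
./control.sh --host <node-host> --persistence info              # list corrupted cache dirs
./control.sh --host <node-host> --persistence backup corrupted  # optional backup first
./control.sh --host <node-host> --persistence clean corrupted   # clean and complete the task
# Then restart the node; it should join the cluster normally afterwards.
```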

Thanks
Naveen


Re: BinaryObjectException

2021-11-22 Thread Naveen Kumar
Thanks Alex.
The ticket says this exception is thrown when Ignite does not find the
record. In our case the data exists, and it gets resolved if we do a node
restart.

Also, is there any workaround for this issue on our current version, Ignite
2.8.1?

Thanks

On Mon, Nov 22, 2021 at 12:20 PM Alex Plehanov 
wrote:

> Hello!
>
> Most probably it's related to ticket [1] that is fixed in 2.9 release.
>
> [1]: https://issues.apache.org/jira/browse/IGNITE-13192
>
> пн, 22 нояб. 2021 г. в 03:11, Naveen Kumar :
>
>> Hi All
>>
>> We are using 2.8.1
>> At times, we get this BinaryObjectException while calling GETs through the
>> thin client. We don't see any errors or exceptions in the node logs; it is
>> only seen on the client side.
>> What could be the potential reason for this?
>>
>> Attached the exact error message
>>
>>
>>
>>
>> Thanks
>>
>> --
>> Thanks & Regards,
>> Naveen Bandaru
>>
>

-- 
Thanks & Regards,
Naveen Bandaru


BinaryObjectException

2021-11-21 Thread Naveen Kumar
Hi All

We are using 2.8.1
At times, we get this BinaryObjectException while calling GETs through the
thin client. We don't see any errors or exceptions in the node logs; it is
only seen on the client side.
What could be the potential reason for this?

Attached the exact error message




Thanks

-- 
Thanks & Regards,
Naveen Bandaru


Re: Re[2]: apache ignite 2.10.0 heap starvation

2021-10-13 Thread Naveen Kumar
Heap dump generation does not seem to be working.
Whenever I try to generate a heap dump, the node goes down, which is a bit
strange. What else could we analyze?
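A note on the dump itself: taking a heap dump stops the JVM for the duration of the dump, and on a large heap that pause can exceed the cluster's failure detection timeout, which by itself can explain the node "going down". A sketch of the usual commands follows; `<PID>` is a placeholder, and raising `failureDetectionTimeout` on `IgniteConfiguration` before dumping is worth considering.

```shell
# Locate the Ignite server JVM, then dump its live heap
# (expect a long stop-the-world pause on a 12 GB heap).
jps -l
jmap -dump:live,format=b,file=ignite-heap.hprof <PID>

# Alternative that avoids pausing a healthy node: dump only on OOM by
# adding these to JVM_OPTS:
#   -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/apache-ignite
```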

On Tue, Oct 12, 2021 at 7:35 PM Zhenya Stanilovsky 
wrote:

> hi, highly likely the problem is in your code - CPU usage grows
> synchronously with the heap increase between 00:00 and 12:00.
> You need to analyze a heap dump; no additional settings will help here.
>
>
> On the same subject, we have made the changes as suggested
>
> nodes are running on 8 CORE and 128 GB MEM VMs, i've added the following
> jvm parameters
>
> -XX:ParallelGCThreads=4
> -XX:ConcGCThreads=2
> -XX:MaxGCPauseMillis=200
> -XX:InitiatingHeapOccupancyPercent=40
>
> Not used any of these below, using the default values for all these,
> which is 8 (as the number of cores)
>
> (thread pool property elements stripped by the mail archive)
>
> I could still see our heap increasing, but at least I can see a pattern
> now (not like earlier, when it was almost exponential).
>
> Attaching the screenshots of heap, CPU, GC and start script with all the
> jvm arguments used.
> what do you think I should be changing to run to use heap effectively
>
>
>
> On Wed, Sep 29, 2021 at 2:35 PM Ibrahim Altun <
> ibrahim.al...@segmentify.com
> >
> wrote:
>
> after many configuration changes and optimizations, i think i've solved
> the heap problem.
>
> here are the changes that i applied to the system;
> JVM changes ->
> https://medium.com/@hoan.nguyen.it/how-did-g1gc-tuning-flags-affect-our-back-end-web-app-c121d38dfe56
> helped a lot
>
> nodes are running on 12CORE and 64GB MEM servers, i've added the following
> jvm parameters
>
> -XX:ParallelGCThreads=6
> -XX:ConcGCThreads=2
> -XX:MaxGCPauseMillis=200
> -XX:InitiatingHeapOccupancyPercent=40
>
> on ignite configuration i've changed all thread pool sizes, which were
> much more than these;
> (thread pool property elements stripped by the mail archive)
>
> Here is the 16 hours of GC report;
>
> https://gceasy.io/diamondgc-report.jsp?p=c2hhcmVkLzIwMjEvMDkvMjkvLS1nYy5sb2cuMC5jdXJyZW50LS04LTU4LTMx=WEB
>
>
>
> On 2021/09/27 17:11:21, Ilya Korol  > wrote:
> > Actually Query interface doesn't define close() method, but QueryCursor
> > does.
> > In your snippets you're using try-with-resource construction for SELECT
> > queries which is good, but when you run MERGE INTO query you would also
> > get an QueryCursor as a result of
> >
> > igniteCacheService.getCache(ID,
> IgniteCacheType.LABEL).query(insertQuery);
> >
> > so maybe this QueryCursor objects still hold some resources/memory.
> > Javadoc for QueryCursor states that you should always close cursors.
> >
> > To simplify cursor closing there is a cursor.getAll() method that will
> > do this for you under the hood.
> >
> >
> > On 2021/09/13 06:17:21, Ibrahim Altun  > wrote:
> >  > Hi Ilya,>
> >  >
> >  > since this is production environment i could not risk to take heap
> > dump for now, but i will try to convince my superiors to get one and
> > analyze it.>
> >  >
> >  > Queries are heavily used in our system but aren't they autoclosable
> > objects? do we have to close them anyway?>
> >  >
> >  > here are some usage examples on our system;>
> >  > --insert query is like this; MERGE INTO "ProductLabel" ("productId",
> > "label", "language") VALUES (?, ?, ?)>
> >  > igniteCacheService.getCache(ID,
> > IgniteCacheType.LABEL).query(insertQuery);>
> >  >
> >  > another usage example;>
> >  > --sqlFieldsQuery is like this; >
> >  > String sql = "SELECT _val FROM \"UserRecord\" WHERE \"email\" IN
> (?)";>
> >  > SqlFieldsQuery sqlFieldsQuery = new SqlFieldsQuery(sql);>
> >  > sqlFieldsQuery.setLazy(true);>
> >  > sqlFieldsQuery.setArgs(emails.toArray());>
> >  >
> >  > try (QueryCursor> ignored = igniteCacheService.getCache(ID,
> > IgniteCacheType.USER).query(sqlFieldsQuery)) {...}>
> >  >
> >  >
> >  >
> >  > On 2021/09/12 20:28:09, Shishkov Ilya  > wrote: >
> >  > > Hi, Ibrahim!>
> >  > > Have you analyzed the heap dump of the server node JVMs?>
> >  > > In case your application executes queries are their cursors closed?>
> >  > > >
> >  > > пт, 10 сент. 2021 г. в 11:54, Ibrahim Altun  >:>
> >  > > >
> >  > > > Igniters any comment on this issue, we are facing huge GC
> > problems on>
> >  > > > production environment, please advise.>
> >  > > >>
> >  > > > On 2021/09/07 14:11:09, Ibrahim Altun  >>
> >  > > > wrote:>
> >  > > > > Hi,>
> >  > > > >>
> >  > > > > totally 400 - 600K reads/writes/updates>
> >  > > > > 12core>
> >  > > > > 64GB RAM>
> >  > > > > no iowait>
> >  > > > > 10 nodes>
> >  > > > >>
> >  > > > > On 2021/09/07 12:51:28, Piotr Jagielski  > wrote:>
> >  > > > > > Hi,>
> >  > > > > > Can you provide some information on how you use the cluster?
> > How many>
> >  > > > reads/writes/updates per second? Also CPU / RAM spec of cluster
> > nodes?>
> >  > > > > >>
> >  > > > > > We observed full GC / CPU load / OOM killer when loading big
> > amount of>
> >  > > > data (15 mln records, data streamer + 

Re: apache ignite 2.10.0 heap starvation

2021-09-29 Thread Naveen Kumar
Good to hear from you. I have had the same issue for quite a long time and
am still looking for a fix.

What do you think exactly resolved the heap starvation issue - the GC-related
configuration or the thread pool configuration?
If the default thread pool size really is the number of cores on the server,
we shouldn't need to specify any config for these thread pools at all.
Thanks
Naveen



On Wed, Sep 29, 2021 at 2:35 PM Ibrahim Altun 
wrote:

> after many configuration changes and optimizations, i think i've solved
> the heap problem.
>
> here are the changes that i applied to the system;
> JVM changes ->
> https://medium.com/@hoan.nguyen.it/how-did-g1gc-tuning-flags-affect-our-back-end-web-app-c121d38dfe56
> helped a lot
>
> nodes are running on 12CORE and 64GB MEM servers, i've added the following
> jvm parameters
>
> -XX:ParallelGCThreads=6
> -XX:ConcGCThreads=2
> -XX:MaxGCPauseMillis=200
> -XX:InitiatingHeapOccupancyPercent=40
>
> on ignite configuration i've changed all thread pool sizes, which were
> much more than these;
> (thread pool property elements stripped by the mail archive)
>
> Here is the 16 hours of GC report;
>
> https://gceasy.io/diamondgc-report.jsp?p=c2hhcmVkLzIwMjEvMDkvMjkvLS1nYy5sb2cuMC5jdXJyZW50LS04LTU4LTMx=WEB
>
>
>
> On 2021/09/27 17:11:21, Ilya Korol  wrote:
> > Actually Query interface doesn't define close() method, but QueryCursor
> > does.
> > In your snippets you're using try-with-resource construction for SELECT
> > queries which is good, but when you run MERGE INTO query you would also
> > get an QueryCursor as a result of
> >
> > igniteCacheService.getCache(ID,
> IgniteCacheType.LABEL).query(insertQuery);
> >
> > so maybe this QueryCursor objects still hold some resources/memory.
> > Javadoc for QueryCursor states that you should always close cursors.
> >
> > To simplify cursor closing there is a cursor.getAll() method that will
> > do this for you under the hood.
> >
> >
> > On 2021/09/13 06:17:21, Ibrahim Altun  wrote:
> >  > Hi Ilya,>
> >  >
> >  > since this is production environment i could not risk to take heap
> > dump for now, but i will try to convince my superiors to get one and
> > analyze it.>
> >  >
> >  > Queries are heavily used in our system but aren't they autoclosable
> > objects? do we have to close them anyway?>
> >  >
> >  > here are some usage examples on our system;>
> >  > --insert query is like this; MERGE INTO "ProductLabel" ("productId",
> > "label", "language") VALUES (?, ?, ?)>
> >  > igniteCacheService.getCache(ID,
> > IgniteCacheType.LABEL).query(insertQuery);>
> >  >
> >  > another usage example;>
> >  > --sqlFieldsQuery is like this; >
> >  > String sql = "SELECT _val FROM \"UserRecord\" WHERE \"email\" IN
> (?)";>
> >  > SqlFieldsQuery sqlFieldsQuery = new SqlFieldsQuery(sql);>
> >  > sqlFieldsQuery.setLazy(true);>
> >  > sqlFieldsQuery.setArgs(emails.toArray());>
> >  >
> >  > try (QueryCursor> ignored = igniteCacheService.getCache(ID,
> > IgniteCacheType.USER).query(sqlFieldsQuery)) {...}>
> >  >
> >  >
> >  >
> >  > On 2021/09/12 20:28:09, Shishkov Ilya  wrote: >
> >  > > Hi, Ibrahim!>
> >  > > Have you analyzed the heap dump of the server node JVMs?>
> >  > > In case your application executes queries are their cursors closed?>
> >  > > >
> >  > > пт, 10 сент. 2021 г. в 11:54, Ibrahim Altun  >:>
> >  > > >
> >  > > > Igniters any comment on this issue, we are facing huge GC
> > problems on>
> >  > > > production environment, please advise.>
> >  > > >>
> >  > > > On 2021/09/07 14:11:09, Ibrahim Altun >
> >  > > > wrote:>
> >  > > > > Hi,>
> >  > > > >>
> >  > > > > totally 400 - 600K reads/writes/updates>
> >  > > > > 12core>
> >  > > > > 64GB RAM>
> >  > > > > no iowait>
> >  > > > > 10 nodes>
> >  > > > >>
> >  > > > > On 2021/09/07 12:51:28, Piotr Jagielski  wrote:>
> >  > > > > > Hi,>
> >  > > > > > Can you provide some information on how you use the cluster?
> > How many>
> >  > > > reads/writes/updates per second? Also CPU / RAM spec of cluster
> > nodes?>
> >  > > > > >>
> >  > > > > > We observed full GC / CPU load / OOM killer when loading big
> > amount of>
> >  > > > data (15 mln records, data streamer + allowOverwrite=true). We've
> > seen>
> >  > > > 200-400k updates per sec on JMX metrics, but load up to 10 on
> > nodes, iowait>
> >  > > > to 30%. Our cluster is 3 x 4CPU, 16GB RAM (already upgradingto
> > 8CPU, 32GB>
> >  > > > RAM). Ignite 2.10>
> >  > > > > >>
> >  > > > > > Regards,>
> >  > > > > > Piotr>
> >  > > > > >>
> >  > > > > > On 2021/09/02 08:36:07, Ibrahim Altun >
> >  > > > wrote:>
> >  > > > > > > After upgrading from 2.7.1 version to 2.10.0 version ignite
> > nodes>
> >  > > > facing>
> >  > > > > > > huge full GC operations after 24-36 hours after node start.>
> >  > > > > > >>
> >  > > > > > > We try to increase heap size but no luck, here is the start>
> >  > > > configuration>
> >  > > > > > > for nodes;>
> >  > > > > > >>
> >  > > > > > > JVM_OPTS="$JVM_OPTS -Xms12g -Xmx12g 

Re: apache ignite 2.10.0 heap starvation

2021-09-13 Thread Naveen Kumar
Just to add to what Ibrahim mentioned: I have a similar issue, but I am
using 2.8.1, and we do have a good number of INSERT/MERGE statements being
executed.
We get warnings for some of the MERGE statements, like "*The search row
by explicit key isn't supported. The primary key is always used to
search a row*". Does it have any impact on heap utilization?

In our case, heap usage is higher than off-heap usage, and we are doing
frequent rolling restarts to avoid OOM.
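On the cursor question raised in the quoted replies below: even DML statements like MERGE return a cursor from cache.query(), and leaving it unclosed can pin memory. A minimal sketch of the pattern, assuming an Ignite instance is available (the cache name, SQL, and arguments are invented for illustration):

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.FieldsQueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class MergeCursorExample {
    // Run a MERGE and close its cursor; without the try-with-resources the
    // cursor (and whatever it references) can linger on the heap.
    static void upsert(Ignite ignite, String id, String label) {
        IgniteCache<?, ?> cache = ignite.cache("ProductLabel"); // illustrative name
        SqlFieldsQuery merge = new SqlFieldsQuery(
                "MERGE INTO \"ProductLabel\" (\"productId\", \"label\") VALUES (?, ?)")
                .setArgs(id, label);
        try (FieldsQueryCursor<List<?>> cursor = cache.query(merge)) {
            cursor.getAll(); // drains the result and releases cursor resources
        }
    }
}
```

Calling getAll() alone also closes the cursor under the hood, as Ilya notes further down, but the explicit try-with-resources makes the intent obvious.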



On Mon, Sep 13, 2021 at 11:47 AM Ibrahim Altun 
wrote:

> Hi Ilya,
>
> since this is production environment i could not risk to take heap dump
> for now, but i will try to convince my superiors to get one and analyze it.
>
> Queries are heavily used in our system but aren't they autoclosable
> objects? do we have to close them anyway?
>
> here are some usage examples on our system;
> --insert query is like this; MERGE INTO "ProductLabel" ("productId",
> "label", "language") VALUES (?, ?, ?)
> igniteCacheService.getCache(ID, IgniteCacheType.LABEL).query(insertQuery);
>
> another usage example;
> --sqlFieldsQuery is like this;
> String sql = "SELECT _val FROM \"UserRecord\" WHERE \"email\" IN (?)";
> SqlFieldsQuery sqlFieldsQuery = new SqlFieldsQuery(sql);
> sqlFieldsQuery.setLazy(true);
> sqlFieldsQuery.setArgs(emails.toArray());
>
> try (QueryCursor<List<?>> ignored = igniteCacheService.getCache(ID,
> IgniteCacheType.USER).query(sqlFieldsQuery)) {...}
>
>
>
> On 2021/09/12 20:28:09, Shishkov Ilya  wrote:
> > Hi, Ibrahim!
> > Have you analyzed the heap dump of the server node JVMs?
> > In case your application executes queries are their cursors closed?
> >
> > пт, 10 сент. 2021 г. в 11:54, Ibrahim Altun <
> ibrahim.al...@segmentify.com>:
> >
> > > Igniters any comment on this issue, we are facing huge GC problems on
> > > production environment, please advise.
> > >
> > > On 2021/09/07 14:11:09, Ibrahim Altun 
> > > wrote:
> > > > Hi,
> > > >
> > > > totally 400 - 600K reads/writes/updates
> > > > 12core
> > > > 64GB RAM
> > > > no iowait
> > > > 10 nodes
> > > >
> > > > On 2021/09/07 12:51:28, Piotr Jagielski  wrote:
> > > > > Hi,
> > > > > Can you provide some information on how you use the cluster? How
> many
> > > reads/writes/updates per second? Also CPU / RAM spec of cluster nodes?
> > > > >
> > > > > We observed full GC / CPU load / OOM killer when loading big
> amount of
> > > data (15 mln records, data streamer + allowOverwrite=true). We've seen
> > > 200-400k updates per sec on JMX metrics, but load up to 10 on nodes,
> iowait
> > > to 30%. Our cluster is 3 x 4CPU, 16GB RAM (already upgradingto 8CPU,
> 32GB
> > > RAM). Ignite 2.10
> > > > >
> > > > > Regards,
> > > > > Piotr
> > > > >
> > > > > On 2021/09/02 08:36:07, Ibrahim Altun <
> ibrahim.al...@segmentify.com>
> > > wrote:
> > > > > > After upgrading from 2.7.1 version to 2.10.0 version ignite nodes
> > > facing
> > > > > > huge full GC operations after 24-36 hours after node start.
> > > > > >
> > > > > > We try to increase heap size but no luck, here is the start
> > > configuration
> > > > > > for nodes;
> > > > > >
> > > > > > JVM_OPTS="$JVM_OPTS -Xms12g -Xmx12g -server
> > > > > >
> > >
> -javaagent:/etc/prometheus/jmx_prometheus_javaagent-0.14.0.jar=8090:/etc/prometheus/jmx.yml
> > > > > > -Dcom.sun.management.jmxremote
> > > > > > -Dcom.sun.management.jmxremote.authenticate=false
> > > > > > -Dcom.sun.management.jmxremote.port=49165
> > > > > > -Dcom.sun.management.jmxremote.host=localhost
> > > > > > -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=1g
> > > > > > -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true
> > > > > > -DIGNITE_WAL_MMAP=true -DIGNITE_BPLUS_TREE_LOCK_RETRIES=10
> > > > > > -Djava.net.preferIPv4Stack=true"
> > > > > >
> > > > > > JVM_OPTS="$JVM_OPTS -XX:+AlwaysPreTouch -XX:+UseG1GC
> > > > > > -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
> > > > > > -XX:+UseStringDeduplication -Xloggc:/var/log/apache-ignite/gc.log
> > > > > > -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> > > > > > -XX:+PrintTenuringDistribution -XX:+PrintGCCause
> > > > > > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10
> > > > > > -XX:GCLogFileSize=100M"
> > > > > >
> > > > > > here is the 80 hours of GC analyize report:
> > > > > >
> > >
> https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjEvMDgvMzEvLS1nYy5sb2cuMC5jdXJyZW50LnppcC0tNS01MS0yOQ===WEB
> > > > > >
> > > > > > do we need more heap size or is there a BUG that we need to be
> aware?
> > > > > >
> > > > > > here is the node configuration:
> > > > > >
> > > > > > 
> > > > > > http://www.springframework.org/schema/beans;
> > > > > >xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
> > > > > >xsi:schemaLocation="
> > > > > > http://www.springframework.org/schema/beans
> > > > > > http://www.springframework.org/schema/beans/spring-beans.xsd
> ">
> > > > > >  > > > > > class="org.apache.ignite.configuration.IgniteConfiguration">
> > > > > > 
> > > > > >  

Re: Partition states validation has failed for group: CUSTOMER_KV

2021-09-09 Thread Naveen Kumar
Any pointers or clues on this issue?

Is it an issue with the source cluster, or something to do with the target
cluster?
Does a clean restart of the source cluster help here in any way, e.g.
inconsistent partitions becoming consistent?

Thanks

On Wed, Sep 8, 2021 at 12:12 PM Naveen Kumar 
wrote:

> Hi
>
> We are using Ignite 2.8.1
>
> We are trying to build a new cluster by restoring the datastore from
> another working cluster.
> Steps followed
>
> 1. Stopped the updates on the source cluster
> 2. Took a copy of datastore on each node and transferred to the
> destination node
> 3. started nodes on the destination cluster
>
> After the cluster is activated, we could see a count mismatch for 2 caches
> (around 15K records), and we found some warnings for these 2 caches.
> Attached is the exact warning:
>
> [GridDhtPartitionsExchangeFuture] Partition states validation has failed
> for group: CL_CUSTOMER_KV, msg: Partitions cache sizes are inconsistent for
> part 310: [lvign002b..com=874, lvign001b..com=875] etc..
>
> What could be the reason for this count mismatch.
>
> Thanks
>
>
>
>
>
> --
> Thanks & Regards,
> Naveen Bandaru
>


-- 
Thanks & Regards,
Naveen Bandaru


BinaryObjectException: Unsupported protocol version

2021-08-19 Thread Naveen Kumar
Hi All

We are using Ignite 2.8.1, mostly with thin clients.
We have been facing a strange issue for the last couple of days: all PUTs
are working fine, but GETs are failing with the reason: BinaryObjectException:
Unsupported protocol version.

After a node restart, GETs started working fine, and we don't see anything
specific in the node logs either.

Any pointers on this - how did the node restart resolve the issue?

None of the GETs were working earlier, only PUTs (the PUTs were done through
JDBC SQL; the GETs use the Java KV API).

-- 
Thanks & Regards,
Naveen Bandaru


subscribe

2020-12-06 Thread Naveen Kumar
Please subscribe me

-- 
Thanks & Regards,
Naveen Bandaru


Re: SQL and backing cache question

2017-12-28 Thread Naveen Kumar
This works; I could query the data.
If we don't have POJOs and instead use binary objects to read and write, how
can we make the REST API work?
If I understand correctly, Java classes should be on the classpath of the
Ignite node for the REST API to work. How can we make the REST API work
without them?

Also, instead of reading field by field from the binary object, can we get
the whole object?

Which works better from a performance point of view: having POJOs for each
table, or storing and reading with binary objects?

Thanks
Naveen

On 28-Dec-2017 6:45 PM, "slava.koptilin"  wrote:

Hello Naveen,

It seems you need to use BinaryObject for that:

BinaryObjectBuilder builder =
ignite.binary().builder("com.ril.edif.cityKey");
builder.setField("city_id", new Long(1));
BinaryObject keyValue = builder.build();

IgniteCache<BinaryObject, BinaryObject> cache =
ignite.cache("city_details").withKeepBinary();
BinaryObject po = cache.get(keyValue);

Thanks!





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQL and backing cache question

2017-12-21 Thread Naveen Kumar
I do have the same question.

When we execute the DDL statement to create the table through SQLLINE with
value_type set to some.package.MWorkPlan, does Ignite create a Java class
with this name and load it into the JVM?

Or do we need to create the some.package.MWorkPlan class ourselves and
reference it when creating the table?

Thanks
Naveen

On Tue, Dec 19, 2017 at 10:12 PM, Ilya Kasnacheev
 wrote:
> Hello!
>
> I think that key_type and value_type should be fully qualified:
>
> key_type=some.package.WorkPlanKey,value_type=some.package.MWorkPlan
>
> If they don't match with type used with put(), you will not see the records
> in SELECT.
>
> Please also share the results of cache.size() after insert is done.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2017-12-18 20:38 GMT+03:00 Matija Kejžar :
>>
>> Let’s say I create a table using Ignite DDL:
>>
>>
>>
>> CREATE TABLE IF NOT EXISTS M_WORK_PLAN (
>>   entity_id   VARCHAR(36),
>>   entity_version  INTEGER,
>>   owner_idVARCHAR(36),
>>   materialisation_version VARCHAR(20),
>>   ehr_id  VARCHAR(36),
>>   materialisation_timeTIMESTAMP,
>>   activation_time TIMESTAMP,
>>   PRIMARY KEY (ehr_id, owner_id, entity_id)
>> ) WITH
>> "template=partitioned,affinityKey=ehr_id,backups=1,atomicity=transactional,cache_name=M_WORK_PLAN,key_type=WorkPlanKey,value_type=MWorkPlan";
>>
>>
>>
>> This creates a backing cache called M_WORK_PLAN. So far, so good.
>>
>>
>>
>> But if I then do a put into this cache through the Cache API (instead of
>> using an SQL insert) like this:
>>
>>
>>
>> ignite.cache(“M_WORK_PLAN”).put(keyFromWP(mWorkPlan), mWorkPlan);
>>
>>
>>
>> After which I then do an SQL query:
>>
>>
>>
>> select * from M_WORK_PLAN
>>
>>
>>
>> which returns an empty result set, 0 work plans. Now is this correct
>> behaviour? Should not the cache.put() statement effectively result in a new
>> entry in the M_WORK_PLAN table or am I misunderstanding what is going on
>> here?
>>
>>
>>
>> Furthermore, should not an SQL insert result in a cache.get() method on
>> the M_WORK_PLAN cache returning a matching entry from the table?
>>
>>
>>
>> Thanks,
>>
>>
>>
>> M.
>
>



-- 
Thanks & Regards,
Naveen Bandaru


Re: Data lose in query

2017-12-12 Thread Naveen Kumar
Exactly, I have faced the same problem and posted this question to the
forum, but have not yet received any response.
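For the archive, the advice in the quoted replies below boils down to two options, sketched here in Java. The cache, table, and column names are invented for illustration; only the setDistributedJoins flag and the general shape of the call come from the thread.

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class JoinSketch {
    // Option 1: enable non-collocated (distributed) joins - complete results,
    // but extra network round-trips between nodes, so slower.
    // Option 2 (preferred): collocate the joined rows with an affinity key
    // and leave the flag off.
    static List<List<?>> positionsWithTypes(Ignite ignite) {
        IgniteCache<?, ?> cache = ignite.cache("PositionCache"); // invented name
        SqlFieldsQuery q = new SqlFieldsQuery(
                "SELECT p.id, t.name FROM Position p " +
                "JOIN \"PositionTypeCache\".PositionType t ON p.typeId = t.id")
                .setDistributedJoins(true); // option 1
        return cache.query(q).getAll();
    }
}
```

Without either collocation or this flag, a join on a partitioned cache only matches rows that happen to live on the same node, which is exactly the reduced result count (3500 vs 25k) reported below.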

Thanks

On 12-Dec-2017 1:48 PM, "Ahmad Al-Masry"  wrote:

> Hi;
> I added @AffinityKeyMapped to the fields in the model generated by web
> console.
> But I also noticed that when I load the data using the load
> commands generated from the web console, the data is not collocated and the
> query will return reduced data.
> Do you have any hints?
> BR
>
>
> On Dec 11, 2017, at 3:12 PM, Nikolai Tikhonov 
> wrote:
>
> It depends from your data model and can't be enabled via one property.
> Please, look at the following documentation pages:
>
> https://apacheignite.readme.io/docs/affinity-collocation
> https://apacheignite-sql.readme.io/docs/distributed-joins#collocated-joins
>
> On Mon, Dec 11, 2017 at 4:02 PM, Ahmad Al-Masry  wrote:
>
>> How can I enable this on the server configuration XML?
>> BR
>>
>>
>> On Dec 11, 2017, at 2:31 PM, Nikolai Tikhonov 
>> wrote:
>>
>> Hi,
>>
>> Strongly recommend to care about collocation of your data (as above
>> suggested by Vlad) instead of enable DistributedJoins flag. The performance
>> of this type of joins is worse then the performance of the affinity
>> collocation based joins due to the fact that there will be much more
>> network round-trips and data movement between the nodes to fulfill a query
>> [1].
>>
>> 1. https://apacheignite-sql.readme.io/docs/distributed-joins
>> #non-collocated-joins
>>
>>
>> On Mon, Dec 11, 2017 at 3:03 PM, Ahmad Al-Masry  wrote:
>>
>>> Hi;
>>> When I enabled the distributed JOIN, get the following Exception:
>>>
>>> java.sql.SQLException: javax.cache.CacheException: Failed to prepare
>>> distributed join query: join condition does not use index
>>> [joinedCache=PositionTypeCache
>>>
>>> Should I remove the indexes before doing distributed joins?
>>> BR
>>>
>>>
>>> On Dec 11, 2017, at 10:43 AM, Vladislav Pyatkov 
>>> wrote:
>>>
>>> Hi,
>>>
>>> When you use JOIN, you should to enable DistributedJoins flag[1], or
>>> tack care about collocated of each joined entry[2].
>>>
>>> [1]: org.apache.ignite.cache.query.SqlFieldsQuery#setDistributedJoins
>>> [2]: https://apacheignite.readme.io/docs
>>>
>>> On Mon, Dec 11, 2017 at 11:36 AM, Ahmad Al-Masry 
>>> wrote:
>>>
 Dears;
 The when I execute the attached query on Mysql data source or on a
 single node ignite, it returns about 25k records.
 When multiple node, it gives me about 3500 records.
 The caches are atomic and partitioned.
 Any suggestions.
 BR

 --



 This email, and the content it contains, are intended only for the
 persons
 or entities to which it is addressed. It may contain sensitive,
 confidential and/or privileged material. Any review, retransmission,
 dissemination or other use of, or taking of any action in reliance upon,
 this information by persons or entities other than the intended
 recipient(s) is prohibited. If you received this email in error, please
 immediately contact security[at]harri[dot]com and delete it from any
 device
 or system on which it may be stored.

>>>
>>>
>>>
>>> --
>>> Vladislav Pyatkov
>>>
>>>
>>>
>>>
>>> This email, and the content it contains, are intended only for the
>>> persons or entities to which it is addressed. It may contain sensitive,
>>> confidential and/or privileged material. Any review, retransmission,
>>> dissemination or other use of, or taking of any action in reliance upon,
>>> this information by persons or entities other than the intended
>>> recipient(s) is prohibited. If you received this email in error, please
>>> immediately contact security[at]harri[dot]com and delete it from any device
>>> or system on which it may be stored.
>>>
>>
>>
>>
>>
>> This email, and the content it contains, are intended only for the
>> persons or entities to which it is addressed. It may contain sensitive,
>> confidential and/or privileged material. Any review, retransmission,
>> dissemination or other use of, or taking of any action in reliance upon,
>> this information by persons or entities other than the intended
>> recipient(s) is prohibited. If you received this email in error, please
>> immediately contact security[at]harri[dot]com and delete it from any device
>> or system on which it may be stored.
>>
>
>
>
>
> This email, and the content it contains, are intended only for the persons
> or entities to which it is addressed. It may contain sensitive,
> confidential and/or privileged material. Any review, retransmission,
> dissemination or other use of, or taking of any action in reliance upon,
> this information by persons or entities other than the intended
> recipient(s) is prohibited. If you received this email in error, please
> immediately contact security[at]harri[dot]com and delete it from any device
> or 

Re: Cache store class not found exception

2017-12-11 Thread Naveen Kumar
Please make sure the class is on the server Ignite's CLASSPATH, or just
deploy the JAR to the $IGNITE_HOME/libs/user directory.

This should resolve the issue.
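A sketch of that deployment step. The JAR name and paths are made up, and the `touch` merely stands in for building the real cache-store JAR; the point is only where the file must land so every server node picks it up at start.

```shell
# Stage the cache-store JAR where the server node's classpath includes it.
IGNITE_HOME="${IGNITE_HOME:-$(mktemp -d)}"   # illustrative default location
mkdir -p "$IGNITE_HOME/libs/user"
touch my-cache-store.jar                      # stand-in for the built JAR
cp my-cache-store.jar "$IGNITE_HOME/libs/user/"
ls "$IGNITE_HOME/libs/user/"                  # → my-cache-store.jar
```

After copying, restart the server nodes so the new classpath entry is loaded; alternatively, peer class loading can be enabled, though that is usually discouraged for cache stores.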

On Mon, Dec 11, 2017 at 8:52 PM, Mikael  wrote:
> Hi!
>
> I have a cache in a server node that is using a custom cache store for a
> JDBC database, when I connect a client node (running inside a
> Payara/Glassfish web application) to that server node I get a:
>
> class org.apache.ignite.IgniteCheckedException: Failed to find class with
> given class loader for unmarshalling (make sure same versions of all classes
> are available on all nodes or enable peer-class-loading)
> [clsLdr=WebappClassLoader (delegate=true; repositories=WEB-INF/classes/),
> cls=my_cache_store_class]
>
> And sure, that class is not there, but it's a client so the cache should not
> be there either and the cache store will not work in the client because
> there is no database there, so the question is if I am doing something wrong
> or should it be like that ? do I need to put the class in the client ? it
> will have references to other classes that are not there either so if it
> tries to unmarshal the cache store in the client it will not be a good idea.
>
>



-- 
Thanks & Regards,
Naveen Bandaru


Re: Index not getting created

2017-11-30 Thread Naveen Kumar
Hi,

Here are the node logs captured with the -v option.


[22:56:41,291][SEVERE][client-connector-#618%IgnitePOC%][JdbcRequestHandler]
Failed to execute SQL query [reqId=0, req=JdbcQueryExecuteRequest
[schemaName=PUBLIC, pageSize=1024, maxRows=0, sqlQry=CREATE INDEX
idx_customer_accountId ON "Customer".CUSTOMER (ACCOUNT_ID_LIST),
args=[], stmtType=ANY_STATEMENT_TYPE]]

class org.apache.ignite.internal.processors.query.IgniteSQLException: Cache doesn't exist: Customer
    at org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.convert(DdlStatementsProcessor.java:343)
    at org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.runDdlStatement(DdlStatementsProcessor.java:287)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1466)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1966)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1962)
    at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2445)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1971)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:305)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:164)
    at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:137)
    at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:39)
    at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
    at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
    at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
    at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)



Select query works fine

0: jdbc:ignite:thin://127.0.0.1> select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';

+-----------------+
| ACCOUNT_ID_LIST |
+-----------------+
| A10001          |
+-----------------+

1 row selected (1.342 seconds)


Create index query failed with the below error

0: jdbc:ignite:thin://127.0.0.1> CREATE INDEX idx_customer_accountId
ON "Customer".CUSTOMER (ACCOUNT_ID_LIST);

Error: Cache doesn't exist: Customer (state=5,code=0)

java.sql.SQLException: Cache doesn't exist: Customer
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
    at sqlline.Commands.execute(Commands.java:823)
    at sqlline.Commands.sql(Commands.java:733)
    at sqlline.SqlLine.dispatch(SqlLine.java:795)
    at sqlline.SqlLine.begin(SqlLine.java:668)
    at sqlline.SqlLine.start(SqlLine.java:373)
    at sqlline.SqlLine.main(SqlLine.java:265)

The select query works fine even after the failed CREATE INDEX:

0: jdbc:ignite:thin://127.0.0.1> select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';

+-----------------+
| ACCOUNT_ID_LIST |
+-----------------+
| A10001          |
+-----------------+

1 row selected (1.641 seconds)

0: jdbc:ignite:thin://127.0.0.1>
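As a quick cross-check, CREATE INDEX can be verified against a table created entirely through SQL DDL, where the backing cache is guaranteed to exist. A minimal sqlline sketch (the table, column, and index names are illustrative, not from the cluster above):

```sql
-- Sketch: index creation against a DDL-created table
-- (table/column/index names are illustrative).
CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR);
CREATE INDEX idx_city_name ON City (name);
```

If this succeeds while the index on "Customer".CUSTOMER still fails, the problem is likely specific to how that cache and its SQL schema were configured.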

On Thu, Nov 30, 2017 at 9:04 PM, Taras Ledkov  wrote:
> Hi,
>
> I cannot reproduce the issue with described steps.
> Please check that the cache wasn't destroyed on the server.
>
> i.e. please execute SELECT query again after failed CREATE INDEX.
>
>
>
> On 30.11.2017 11:45, Naveen wrote:
>>
>> Has anyone got a chance to look into this issue, where I am trying to
>> create an index, but it's throwing an error saying the cache does not exist?
>>
>> 0: jdbc:ignite:thin://127.0.0.1>  select ACCOUNT_ID_LIST from
>> "Customer".CUSTOMER 

Re: Enabling REST api for a apache client node

2017-11-29 Thread Naveen Kumar
My understanding from another in-memory product is:
We have seeders and leeches; seeders are the ones that hold data, and
leeches are the ones exposed to clients, responsible for processing the
incoming requests. The basic idea was to offload connection/disconnection
activity from the seeders, since the seeders are already heavily loaded
with data processing.
So I was trying to correlate the same here: seeders are nothing but data
nodes in Ignite, and leeches are client nodes, so whenever a consumer
tries to connect to the cluster, it only establishes a connection with an
Ignite client node.
My understanding of the Ignite client node may be wrong.


On 29-Nov-2017 10:24 PM, "slava.koptilin"  wrote:

> Hi Naveen,
>
> It seems there is no such XML property.
> Anyway, I don't think there is much sense to start REST server on a client
> node because it is always better to access server node directly if
> possible.
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
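For reference, the connector that serves REST and thin-client requests on server nodes is configured through ConnectorConfiguration. A minimal Spring XML sketch (the HTTP REST endpoint additionally requires the ignite-rest-http module on the classpath; the port shown is illustrative):

```xml
<!-- Sketch: configuring the connector / REST endpoint on a SERVER node.
     Assumes ignite-rest-http is on the classpath for HTTP REST;
     the port value is illustrative. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="connectorConfiguration">
        <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
            <!-- TCP port for the binary REST protocol -->
            <property name="port" value="11211"/>
        </bean>
    </property>
</bean>
```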


Index not getting created

2017-11-29 Thread Naveen Kumar
I am using 2.3.

What could be the issue with the CREATE INDEX command below?

0: jdbc:ignite:thin://127.0.0.1> select * from "Customer".CUSTOMER where ACCOUNT_ID_LIST = 'A10001';
+-----------------+----------------------+------------+-------------------+-----------+
| ACCOUNT_ID_LIST | CUST_ADDRESS_ID_LIST | PARTYROLE  | PARTY_STATUS_CODE | REFREE_ID |
+-----------------+----------------------+------------+-------------------+-----------+
| A10001          | custAddressIdList1   | partyrole1 | partyStatusCode1  | refreeId1 |
+-----------------+----------------------+------------+-------------------+-----------+
1 row selected (3.324 seconds)
0: jdbc:ignite:thin://127.0.0.1>  select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';
+-----------------+
| ACCOUNT_ID_LIST |
+-----------------+
| A10001          |
+-----------------+
1 row selected (2.078 seconds)
0: jdbc:ignite:thin://127.0.0.1> CREATE INDEX idx_customer_accountId
ON "Customer".CUSTOMER (ACCOUNT_ID_LIST);
Error: Cache doesn't exist: Customer (state=5,code=0)
java.sql.SQLException: Cache doesn't exist: Customer
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
    at sqlline.Commands.execute(Commands.java:823)
    at sqlline.Commands.sql(Commands.java:733)
    at sqlline.SqlLine.dispatch(SqlLine.java:795)
    at sqlline.SqlLine.begin(SqlLine.java:668)
    at sqlline.SqlLine.start(SqlLine.java:373)
    at sqlline.SqlLine.main(SqlLine.java:265)
0: jdbc:ignite:thin://127.0.0.1>

-- 
Thanks & Regards,
Naveen Bandaru