Re: Using native persistence to "extend" memory

2020-06-16 Thread steve.hostettler
Thank you both for clarifying this. I usually have large objects (around 3KB),
so I thought about increasing the page size to 32KB to reduce the number of
pages and thus reduce the speed at which we reach the 2/3 dirty-pages
threshold. Is that a good idea?
On top of that, at some point during the process I generate a significant
amount of new objects, so I also increased checkpointPageBufferSize to 4GB
and disabled writeThrottlingEnabled.
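For reference, these settings map roughly onto Ignite's DataStorageConfiguration as sketched below (region layout and values are illustrative, not the poster's actual config). Note that, as of Ignite 2.x, pageSize accepts values between 1 KB and 16 KB, so 32 KB would be rejected at startup:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Largest supported page size is 16 KB (16384 bytes). -->
            <property name="pageSize" value="#{16 * 1024}"/>
            <!-- Write throttling disabled, as discussed above. -->
            <property name="writeThrottlingEnabled" value="false"/>
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                    <!-- 4 GB checkpoint page buffer. -->
                    <property name="checkpointPageBufferSize"
                              value="#{4L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```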

Thanks for advising.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite persistence and activation

2020-06-16 Thread Evgenii Zhuravlev
Hi,

No caches, including caches for atomic structures and in-memory caches, are
available before activation. I believe it makes sense to move your code so
that it runs after the activation event:
https://apacheignite.readme.io/docs/baseline-topology#cluster-activationdeactivation-events
.

Evgenii

Thu, 11 Jun 2020 at 05:18, steve.hostettler :

> Hello.
>
> I am trying to implement Ignite persistence, but I stumbled upon the
> following problems/questions. It is required to activate the cluster; that
> much is clear. But I have bootstrap code that uses technical caches that I
> do not want to persist and, more problematic, I need to use
> ignite.atomicReference as part of the initialization of the node.
>
> I assume that I need to create another region that is not persisted for
> the so-called system caches, but what do I do with ignite.atomicReference?
>
>
> Thanks in advance
>
>
>
>


Data streamer hangs

2020-06-16 Thread yangjiajun
Hello. I use the SET STREAMING ON / SET STREAMING OFF pattern to flush data
to Ignite. Ignite sometimes hangs on SET STREAMING OFF.
The hanging thread on the client side looks like:
 daemon prio=5 os_prio=0 tid=0x7fdba0003000 nid=0x3c1 waiting on
condition [0x7fdf473f1000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.executeBatch(JdbcThinConnection.java:1280)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.close0(JdbcThinConnection.java:1335)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.close(JdbcThinConnection.java:1325)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.executeNative(JdbcThinConnection.java:253)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:206)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:559)
at
com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
at
com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
...
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

The hanging thread on the server side looks like:
"data-streamer-stripe-21-#150" #178 prio=5 os_prio=0 tid=0x7f29b3737000
nid=0x37c36 waiting on condition [0x7f041d1d]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at
org.apache.ignite.internal.util.StripedExecutor$StripeConcurrentQueue.take(StripedExecutor.java:730)
at
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:541)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
- None

I do not see any errors in the Ignite log and I do not find any deadlocks.

Do you have any ideas about this situation? Thanks!






Re: Using native persistence to "extend" memory

2020-06-16 Thread Denis Magda
Evgeniy,

Thanks for clarifying, I completely forgot about this behavior!

-
Denis


On Tue, Jun 16, 2020 at 5:10 PM Evgenii Zhuravlev 
wrote:

> Steve,
>
> Actually, disabling the WAL is a good option for your use case. The
> checkpoint mechanism is the same with the WAL disabled; the only difference
> is that the node does not write the WAL to disk on each operation. Usually,
> it makes sense to disable the WAL for initial loading, when you can afford
> to lose the data in case of failure and start the data loading again. For
> your use case, if you don't care about restore, you can just disable it:
> https://www.gridgain.com/docs/latest/developers-guide/persistence/native-persistence#disabling-wal
>
> Best Regards,
> Evgenii
>
>
>
> Tue, 16 Jun 2020 at 17:02, Denis Magda :
>
>> Steve,
>>
>> Please check these generic recommendations if you haven't done so
>> already:
>> https://apacheignite.readme.io/docs/durable-memory-tuning#native-persistence-related-tuning
>>
>> Otherwise, send us a note if you come across any bottlenecks or issues so
>> that we can give you more specific recommendations.
>>
>> -
>> Denis
>>
>>
>> On Tue, Jun 16, 2020 at 3:25 PM steve.hostettler <
>> steve.hostett...@gmail.com> wrote:
>>
>>> Thanks a lot for the recommendation. So keeping the WAL, disabling
>>> archiving.
>>> I understand all records are kept on disk.
>>>
>>> Thanks again. Anything else?
>>>
>>>
>>>
>>>
>>


Re: Using native persistence to "extend" memory

2020-06-16 Thread Evgenii Zhuravlev
Steve,

Actually, disabling the WAL is a good option for your use case. The
checkpoint mechanism is the same with the WAL disabled; the only difference
is that the node does not write the WAL to disk on each operation. Usually,
it makes sense to disable the WAL for initial loading, when you can afford to
lose the data in case of failure and start the data loading again. For your
use case, if you don't care about restore, you can just disable it:
https://www.gridgain.com/docs/latest/developers-guide/persistence/native-persistence#disabling-wal
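If the caches are defined as SQL tables, the WAL can also be toggled per table with DDL around a bulk load. A sketch with a hypothetical table name (re-enabling the WAL triggers a checkpoint so the loaded data reaches disk):

```sql
-- "Person" is a made-up table name for illustration.
ALTER TABLE Person NOLOGGING;   -- disable WAL for the table's cache group

-- ... perform the bulk load ...

ALTER TABLE Person LOGGING;     -- re-enable WAL; forces a checkpoint
```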

Best Regards,
Evgenii



Tue, 16 Jun 2020 at 17:02, Denis Magda :

> Steve,
>
> Please check these generic recommendations if you haven't done so already:
> https://apacheignite.readme.io/docs/durable-memory-tuning#native-persistence-related-tuning
>
> Otherwise, send us a note if you come across any bottlenecks or issues so
> that we can give you more specific recommendations.
>
> -
> Denis
>
>
> On Tue, Jun 16, 2020 at 3:25 PM steve.hostettler <
> steve.hostett...@gmail.com> wrote:
>
>> Thanks a lot for the recommendation. So keeping the WAL, disabling
>> archiving.
>> I understand all records are kept on disk.
>>
>> Thanks again. Anything else?
>>
>>
>>
>>
>


Re: Using native persistence to "extend" memory

2020-06-16 Thread Denis Magda
Steve,

Please check these generic recommendations if you haven't done so already:
https://apacheignite.readme.io/docs/durable-memory-tuning#native-persistence-related-tuning

Otherwise, send us a note if you come across any bottlenecks or issues so
that we can give you more specific recommendations.

-
Denis


On Tue, Jun 16, 2020 at 3:25 PM steve.hostettler 
wrote:

> Thanks a lot for the recommendation. So keeping the WAL, disabling
> archiving.
> I understand all records are kept on disk.
>
> Thanks again. Anything else?
>
>
>
>


Re: Using native persistence to "extend" memory

2020-06-16 Thread steve.hostettler
Thanks a lot for the recommendation. So keeping the WAL, disabling archiving.
I understand all records are kept on disk. 

Thanks again. Anything else?





Re: Using native persistence to "extend" memory

2020-06-16 Thread Denis Magda
Hi Steve,

I think that you can get a performance hit on your write operations by
disabling the WAL. It serves two purposes in Ignite: the first is recovery,
while the second is fast disk writes. The WAL is an append-only structure, so
Ignite can persist changes to disk really quickly. If the WAL is disabled,
then it will take more time to update the persistence files on disk.

Just in case you missed that point: once persistence is enabled, Ignite keeps
100% of records on disk. It's not like an OS swap process that kicks in only
when you're running out of memory space. Usually, we recommend disabling the
WAL during a loading phase:
https://apacheignite.readme.io/docs/write-ahead-log#section-wal-activation-and-deactivation

In general, I would certainly keep the WAL enabled but disable WAL archiving
if recovery is not a big deal for you:
https://apacheignite.readme.io/docs/write-ahead-log#disabling-wal-archiving
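One documented way to disable WAL archiving is to point the archive path at the WAL path itself; a configuration sketch (paths are illustrative):

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <!-- Setting walArchivePath equal to walPath disables WAL archiving. -->
    <property name="walPath" value="/ignite/wal"/>
    <property name="walArchivePath" value="/ignite/wal"/>
</bean>
```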

-
Denis


On Tue, Jun 16, 2020 at 10:50 AM steve.hostettler <
steve.hostett...@gmail.com> wrote:

> Hello,
>
> I am trying to use Ignite persistence to "extend" memory, that is, to
> replace OS swap, which is not working very well.
>
> Therefore, since I do not care about recovery, I disabled the WAL. Are
> there other things you would recommend configuring to use Ignite
> persistence as a sort of swap? For instance, only persisting the
> least-used pages most of the time.
>
> Thanks
>
> Steve
>
>
>
>


Using native persistence to "extend" memory

2020-06-16 Thread steve.hostettler
Hello,

I am trying to use Ignite persistence to "extend" memory, that is, to replace
OS swap, which is not working very well.

Therefore, since I do not care about recovery, I disabled the WAL. Are there
other things you would recommend configuring to use Ignite persistence as a
sort of swap? For instance, only persisting the least-used pages most of the
time.

Thanks

Steve





Webinar: Apache Ignite Management & Monitoring

2020-06-16 Thread Greg Stachnick
Hi Everyone,

We released a new monitoring and management tool this month for Apache
Ignite and GridGain applications: GridGain Control Center. There is a
webinar scheduled for tomorrow, June 17, 2020, 10:00 AM PDT (-07:00), where I
will provide an overview of the features and demo different use cases. If
you are interested, the registration link is below:

https://www.gridgain.com/resources/webinars/simplifying-gridgain-and-apache-ignite-management-with-the-gridgain-control-center

If you aren't able to attend, a replay will also be posted shortly after
the session.

Hope to see you there.
Thanks,
Greg

-- 
Greg Stachnick
GridGain Systems


PIVOT with SQL

2020-06-16 Thread narges saleh
Hi All,
Is it possible to use a PIVOT function with JDBC SQL in Apache Ignite?
If not, is there a function available with such functionality?
Thanks
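Ignite's SQL engine (H2-based in these versions) does not provide a PIVOT function; the usual workaround is conditional aggregation. A sketch with a made-up table and columns:

```sql
-- Hypothetical table "sales" with columns region, quarter, amount.
-- Each CASE expression turns one distinct quarter value into a column.
SELECT region,
       SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS q1,
       SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS q2
FROM sales
GROUP BY region;
```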


Re: Multi-threaded JDBC driver issue/concern

2020-06-16 Thread adipro


This is my SQL table schema:

ID (Long), URL (Varchar), SCORE (Double), APPNAME_ID (Long)

We have a composite index on SCORE, APPNAME_ID.

Based on your answer, I have two questions.

1. How can I insert SQL rows using the JCache data streamer API (if possible,
with an example)? Currently I'm using the JDBC thin driver with STREAMING ON,
but the issue is as mentioned above.
2. Each row's data is ID (Long), URL (Varchar), SCORE (Double), APPNAME_ID
(Long). How is this data stored as key-value? I mean, what will be the key
and what will be the value?

Can you please answer these two questions?





Re: Multi-threaded JDBC driver issue/concern

2020-06-16 Thread adipro
Hi Stephen. Thanks so much for the reply.

This is my SQL table schema:

ID (Long), URL (Varchar), SCORE (Double), APPNAME_ID (Long)

We have a composite index on SCORE, APPNAME_ID.

Based on your answer, I have two questions.

1. How can I insert SQL rows using the JCache data streamer API (if possible,
with an example)? Currently I'm using the JDBC thin driver with STREAMING ON,
but the issue is as mentioned above.
2. Each row's data is ID (Long), URL (Varchar), SCORE (Double), APPNAME_ID
(Long). How is this data stored as key-value? I mean, what will be the key
and what will be the value?

Can you please answer these two questions?






Re: IgniteSpringBean IgniteCheckedException

2020-06-16 Thread Denis Magda
It looks like you copied this configuration template from this
documentation file:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteSpringBean.html

It assumes that an Ignite instance will be started separately and then
passed into this configuration file via "mySpringBean".

Are you sure you need to go down this path? Most likely, you can configure
Ignite with a Spring XML file or programmatically in Java and then start a
cluster node using the configuration. Check this Ignite hello world app
that shows how to achieve this programmatically:
https://apacheignite.readme.io/docs/getting-started#adding-ignitehelloworld

This tutorial shows how to work with Ignite from a standard Spring
application that goes with Controllers, Repositories and Services:
https://www.gridgain.com/docs/tutorials/spring/spring_ignite_tutorial
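For context, ignite.sh (CommandLineStartup) expects the Spring XML file to contain an IgniteConfiguration bean directly, which explains the "Failed to find configuration" error in the original message. A minimal working sketch:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- CommandLineStartup looks for an IgniteConfiguration bean,
         not an IgniteSpringBean. -->
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- cache and data region configuration goes here -->
    </bean>
</beans>
```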

-
Denis


On Tue, Jun 16, 2020 at 12:46 AM kay bae  wrote:

> Hello, I start the node using the command
>
> ' sh ignite.sh ./config/config.xml &  '
>
> It worked well before I added IgniteSpringBean.
>
> After I add this code,
>
>
> <bean id="mySpringBean" class="org.apache.ignite.IgniteSpringBean">
>   <property name="configuration">
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>       ...
>     </bean>
>   </property>
> </bean>
>
>
>
> [2020-06-16T16:20:38,429][INFO ][main][G] Node started : [stage="Configure
> system pool" (53 ms),stage="Start managers" (187 ms),stage="Configure
> binary metadata" (41 ms),stage="Start processors" (623 ms),stage="Init
> metastore" (9 ms),stage="Finish recovery" (0 ms),stage="Join topology" (170
> ms),stage="Await transition" (14 ms),stage="Await exchange" (379
> ms),stage="Total time" (1476 ms)]
>  class org.apache.ignite.IgniteException: Failed to find configuration in:
> file:./config/config.xml
>  at
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1067)
>  at org.apache.ignite.Ignition.start(Ignition.java:349)
>  at
> org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
>  Caused by: class org.apache.ignite.IgniteCheckedException: Failed to find
> configuration in: file:./config/config.xml
>  at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:116)
>  at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
>  at
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:710)
>  at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:911)
>  at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
>  at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
>  at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
>  at org.apache.ignite.Ignition.start(Ignition.java:346)
>  ... 1 more
>  Failed to start grid: Failed to find configuration in:
> file:./config/config.xml
>
>
>
> The node started and then failed.
>
> Is there any setup required to use IgniteSpringBean?
>
> Thank you
>


Re: Apache Ignite with Spring framework

2020-06-16 Thread Denis Magda
It depends on what you mean by a Spring bean.

If that's an entity of the Spring framework, such as a Controller, Repository
or Entity, then you don't need to register anything with Ignite. Use those
Spring annotations throughout your application and have your Controllers and
Repositories access an Ignite cluster. Check this tutorial that shows how
to build a RESTful service with Spring Boot, Spring Data and Apache Ignite:
https://www.gridgain.com/docs/tutorials/spring/spring_ignite_tutorial

-
Denis


On Tue, Jun 16, 2020 at 7:51 AM kay  wrote:

> Hello,
> Does Ignite operate on a Spring framework basis?
> Can I register a Spring controller on the classpath at a remote server node
> and use it (using component scan, like @Controller)?
>
>
>
>


Apache Ignite with Spring framework

2020-06-16 Thread kay
Hello, 
Does Ignite operate on a Spring framework basis?
Can I register a Spring controller on the classpath at a remote server node
and use it (using component scan, like @Controller)?





Re: Apache Ignite with Spring framework

2020-06-16 Thread kay
Hello,
I apologize for sending a lot of e-mails with the same content.
I also saw your answer on Stack Overflow.

I just configure the startup in an XML file like this:

[XML configuration stripped by the mailing list archive]

and there are caches and data regions.

I execute: ignite.sh config.xml &

What should I configure to register a Spring bean?

Thank you so much










Re: Multi-threaded JDBC driver issue/concern

2020-06-16 Thread Stephen Darlington
There’s no one right way of doing it. In Java it’s something like this.

Define your classes:

public class AppDetailsKey {
@QuerySqlField
private Long id;

public AppDetailsKey(Long id) {
this.id = id;
}
}

public class AppDetails {
@QuerySqlField
private String url;
@QuerySqlField
private Double score;
@QuerySqlField
private Long app_name;

public AppDetails(String url, Double score, Long app_name) {
this.url = url;
this.score = score;
this.app_name = app_name;
}
}

(I didn’t define your secondary index but you can do that with the annotations, 
too.)

Create your cache:

CacheConfiguration<AppDetailsKey, AppDetails> cacheConfiguration =
    new CacheConfiguration<>();
cacheConfiguration.setSqlSchema("PUBLIC")
    .setName("APPDETAILS")
    .setIndexedTypes(AppDetailsKey.class, AppDetails.class);

IgniteCache<AppDetailsKey, AppDetails> cache =
    ignite.getOrCreateCache(cacheConfiguration);

The annotations and the IndexedTypes tell Ignite to make it available to the 
SQL engine.

And then insert stuff into it:

IgniteDataStreamer<AppDetailsKey, AppDetails> ds =
    ignite.dataStreamer("APPDETAILS");
ds.addData(new AppDetailsKey(1L), new AppDetails("localhost", 1.0, 10L));
ds.addData(new AppDetailsKey(2L), new AppDetails("localhost", 1.0, 10L));
ds.addData(new AppDetailsKey(3L), new AppDetails("localhost", 1.0, 10L));
ds.flush();

> On 16 Jun 2020, at 06:35, R S S Aditya Harish wrote:
> This is my SQL table schema
> 
> ID (Long), URL (Varchar), SCORE (Double), APPNAME_ID (Long)
> 
> We have a composite index on Score, Appname_Id.
> 
> Based on your answer I've two questions.
> 
> 1. How can I insert SQL rows using JCache data streamer API (if possible, 
> with example)? Currently, I'm using jdbc thin with STREAMING ON. But the 
> issue is mentioned above.
> 2. Each row data is -> ID (Long), URL (Varchar), SCORE (Double), APPNAME_ID 
> (Long). How this data is stored as Key-Value? I mean what will be the key and 
> what will be the value?
> 
> Can you please answer these two questions?
> 
> 
> On Mon, 15 Jun 2020 21:44:38 +0530 Stephen Darlington
> <stephen.darling...@gridgain.com> wrote:
> 
> Do you need the sorting as part of the loading process? If not, the best 
> route would be to use the data streamer to load the data. You can still use 
> the SQL engine and access your sorted data afterwards — remember that SQL and 
> key-value are two different ways of accessing the same underlying data. 
> 
> > On 15 Jun 2020, at 15:46, adipro wrote:
> > We have an SQL table which we need because for normal JCache K-V we cannot 
> > sort on some column's data. We need that sort feature. That's why we chose 
> > SQL table representation. 
> > 
> > Our application is heavily multi-threaded. 
> > 
> > Now when trying to insert rows in that table, each thread simultaneously 
> > sends 5000-1 rows in bulk. Now if we use, SqlFieldsQuery, it's taking 
> > so 
> > much of time as we cannot do it in bulk and have to do it in loop one by 
> > one. 
> > 
> > For this case, we are using JDBC thin driver. 
> > 
> > But since it's multi-threaded we can't use single connection to execute in 
> > parallel as it is not thread safe. 
> > 
> > So, what we did is, we added a synchronisation block which contains the 
> > insertion of those rows in bulk using thin driver. The query performance is 
> > good, but so many threads are in wait state as this is happening. 
> > 
> > Can someone please suggest any idea on how to insert those many rows in 
> > bulk 
> > efficiently without threads waiting for so much time to use JDBC 
> > connection. 
> > 
> > 
> > 
> 
> 
> 
> 




How to fix Ignite node segmentation without restart

2020-06-16 Thread Actarus
Hello,

I'm running Apache Ignite (2.4.0) embedded into a java application that runs
in a master/slave architecture. This means that there are only ever two
nodes in a grid, in FULL_SYNC, REPLICATED mode. Only the master application
writes to the grid, the slave only reads from it when it gets promoted to
master on a failover.

In such an architecture, network segmentation issues mean different things.
Typically I see that for handling segmentation, the node that experienced
the issue would need to be restarted. However in this scenario if the master
is segmented, I do not want to restart it and I cannot do a failover because
a network issue just happened and the stand-by may be invalid. The fix is to
always restart the slave.

However I notice that regardless of handling the EVT_NODE_SEGMENTED event,
adding a SegmentationProcess, running with SegmentationPolicy.NOOP and
having a segmentation plugin and always returning true/OK, I find that the
node that runs in master always remains in segmented state, and it is
impossible for it to re-join a cluster after restarting the slave node.

Is there some mechanism I can use to tell the node within my master process
to completely ignore segmentation? Or tell it that it is fine so that
discovery can still happen after I restart the slave node? Currently I used
port  with TcpDiscoverySpi with hard-coded addresses (master and slave
IP addresses). When the master node is segmented (by simulating network
issues on the command-line) it appears there's no way for the discovery to
recover - port  is shut down, and the slave node always comes up blind
to the master.

I would appreciate any insights on this issue. Thank you.





Re: ignite work dir

2020-06-16 Thread Ilya Kasnacheev
Hello!

Try setting IgniteConfiguration.workDirectory to some writable path.
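In Spring XML this could look like the following sketch (the path is illustrative and must be writable by the user running the node):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Overrides the default $IGNITE_HOME/work location. -->
    <property name="workDirectory" value="/var/tmp/ignite/work"/>
</bean>
```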

Regards,
-- 
Ilya Kasnacheev


Thu, 11 Jun 2020 at 18:58, narges saleh :

> Hi Ilya,
>  IGNITE_HOME is /opt/ignite and in ignite-log4j, the log file set to
>  
> I don't have any issue on the server side.
>
> This is the exception I get on the client side:
>
> Caused by: class org.apache.ignite.IgniteCheckedException: Work directory
> does not exist and cannot be created: /ignite/work
> at
> org.apache.ignite.internal.util.IgniteUtils.workDirectory(IgniteUtils.java:9440)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.initializeConfiguration(IgnitionEx.java:2181)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1697)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:615)
> at
> org.apache.ignite.internal.jdbc2.JdbcConnection.getIgnite(JdbcConnection.java:311)
> at
> org.apache.ignite.internal.jdbc2.JdbcConnection.(JdbcConnection.java:240)
> ... 39 more
>
> On Thu, Jun 11, 2020 at 10:04 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> I recommend setting the property workDirectory of IgniteConfiguration
>> class to desired value. And leaving IGNITE_HOME and IGNITE_WORK_DIR not
>> specified.
>>
>> This may still cause exception if your e.g. logging subsystem is
>> configured to use /ignite/work. Need to see the exception.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, 11 Jun 2020 at 17:55, narges saleh :
>>
>>> Hi All,
>>>
>>> What's the best way to set ignite's work dir on the client side,
>>> considering that the client and server nodes run under different users.
>>> I am setting both IGNITE_HOME and IGNITE_WORK_DIR system variables (on
>>> client), but I  am still getting the exception "/ignite/work" cannot be
>>> created.
>>>
>>> thanks.
>>>
>>


Re: Apache Ignite with Spring framework

2020-06-16 Thread Ilya Kasnacheev
Hello!

You have already sent a lot of e-mails with the same content.

Please avoid sending more of these.

As I told you on Stack Overflow, Ignite is not a distributed Spring :) It
will not register any Spring components, but it can re-use a Spring context
and make beans from it available to local compute tasks, etc.

Regards,
-- 
Ilya Kasnacheev


Tue, 16 Jun 2020 at 16:11, kay :

> Hello,
>
> Does Apache Ignite operate on a Spring framework basis?
>
> Can I register a Spring controller on the classpath at a remote server node
> and use it (using component scan, like @Controller)?
>
> Thank you
>
>
>
>


Apache Ignite with Spring framework

2020-06-16 Thread kay
Hello,

Does Apache Ignite operate on a Spring framework basis?

Can I register a Spring controller on the classpath at a remote server node
and use it (using component scan, like @Controller)?

Thank you





Apache Ignite with Spring framework

2020-06-16 Thread kay
Does Ignite operate on a Spring framework basis?
Can I register a Spring controller on the classpath at a server node and use
it (using component scan, like @Controller)?





Apache Ignite operation

2020-06-16 Thread kay
Hello,

Does Apache Ignite operate on a Spring framework basis?

Can I register a Spring controller on the classpath at a remote server node
and use it (using component scan, like @Controller)?

Thank you





Ignite with Spring Framework

2020-06-16 Thread kay
Hello, 
Does Ignite operate on a Spring framework basis?
Can I register a Spring controller on the classpath and use it (using
component scan, like @Controller)?





IgniteSpringBean start failed

2020-06-16 Thread kay
Hello, I start the node using the command

sh ignite.sh /ERP/Domains/CacheDomain/config/config.xml &

It worked well before I added IgniteSpringBean.

After I add this code,

<bean id="mySpringBean" class="org.apache.ignite.IgniteSpringBean">
  <property name="configuration">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
      ...
    </bean>
  </property>
</bean>


[2020-06-16T16:20:38,429][INFO ][main][G] Node started : [stage="Configure
system pool" (53 ms),stage="Start managers" (187 ms),stage="Configure binary
metadata" (41 ms),stage="Start processors" (623 ms),stage="Init metastore"
(9 ms),stage="Finish recovery" (0 ms),stage="Join topology" (170
ms),stage="Await transition" (14 ms),stage="Await exchange" (379
ms),stage="Total time" (1476 ms)]
 class org.apache.ignite.IgniteException: Failed to find configuration in:
file:/ERP/Domains/CacheDomain/config/config.xml
 at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1067)
 at org.apache.ignite.Ignition.start(Ignition.java:349)
 at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
 Caused by: class org.apache.ignite.IgniteCheckedException: Failed to find
configuration in: file:/ERP/Domains/CacheDomain/config/config.xml
 at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:116)
 at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
 at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:710)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:911)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
 at org.apache.ignite.Ignition.start(Ignition.java:346)
 ... 1 more
 Failed to start grid: Failed to find configuration in:
file:/ERP/Domains/CacheDomain/config/config.xml



The node started and then failed.

Is there any configuration needed to use IgniteSpringBean?







Ignite with Spring Framework

2020-06-16 Thread kay
Hello, 
Does Ignite operate on a Spring framework basis?
Can I register a Spring controller on the classpath at a remote server node
and use it (using component scan, like @Controller)?

Thank you





Re: Out of memory error in data region with persistence enabled

2020-06-16 Thread Raymond Wilson
Hi Alex,

Thanks for providing the additional detail on the checkpointing memory
requirements.

As you say, this is difficult to calculate statically, but it seems to
indicate that small data regions are bad (i.e., susceptible to this issue),
so they should be avoided.

Our current system really only has two data regions, one that supports
general long term data storage and access, and another one that supports
ingest. The reason for a data region supporting ingest is to act as a safe
buffer for inbound information until ingest processors process it, without
impacting mainline operations against the other data region. It sounds like
having multiple data regions may be an inefficient use of memory due to the
need to have sufficient free space to support checkpointing operations.
Would you recommend defining only a single persistent data region in the
general case?

It is interesting that the number of pages defined by the page size has such
a large effect on how well checkpointing works under stressful workloads.
Would you recommend not using larger page sizes, to permit more headroom for
checkpointing?

It seems adding more memory will help as you suggest. However the relation
of 256*caches*partitions*CPUs means addition of more caches to support more
functionality, or scaling of infrastructure, risks crossing a boundary
where checkpointing is no longer 'safe' given the memory supplied to the
data region under the existing workloads.

Finally, the aspect that did surprise me is how final this failure mode is
in that the JVM logs the issue and then quits, which would give the support
team nightmares! Is it possible to have a more graceful degradation of this
functionality; ie: If a full checkpoint cannot be completed due to free
space restrictions, can a series of partial checkpoints be executed?

I look forward to your suggestions.

Thanks,
Raymond.
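As a rough illustration of the headroom arithmetic Alex describes below, here is a small self-contained sketch by way of sanity check. It ignores Ignite's per-page overhead, so the real page counts are slightly smaller than these:

```java
public class CheckpointHeadroom {
    // Total page slots in a data region of the given size.
    static long totalPages(long regionBytes, long pageBytes) {
        return regionBytes / pageBytes;
    }

    // A checkpoint is triggered when dirty pages reach 3/4 of the total,
    // leaving roughly 1/4 of the pages free for checkpoint metadata.
    static long cleanPagesAtCheckpoint(long total) {
        return total - (total * 3) / 4;
    }

    public static void main(String[] args) {
        long region = 64L * 1024 * 1024; // the 64 MB region from this thread
        long pages16k = totalPages(region, 16 * 1024); // 4096 pages
        long pages4k = totalPages(region, 4 * 1024);   // 16384 pages
        System.out.println("16 KB pages, free at checkpoint: "
                + cleanPagesAtCheckpoint(pages16k)); // 1024
        System.out.println(" 4 KB pages, free at checkpoint: "
                + cleanPagesAtCheckpoint(pages4k));  // 4096
    }
}
```

This matches Alex's figures: with 16 KB pages only about 1000 pages remain for metadata, versus about 4000 with 4 KB pages, which is why the smaller page size sometimes survives.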



On Tue, Jun 16, 2020 at 10:22 PM Alex Plehanov 
wrote:

> Raymond,
>
> When a checkpoint is triggered you need to have some amount of free page
> slots in offheap to save metadata (for example free-lists metadata,
> partition counter gaps, etc). The number of required pages depends on count
> of caches, count of partitions, workload, and count of CPUs. In worst
> cases, you will need up to 256*caches*partitions*CPU count of pages only to
> store free-list buckets metadata. This number of pages can't be calculated
> statically, so the exact amount can't be reserved in advance. Currently,
> 1/4 of offheap memory is reserved for this purpose (when the amount of dirty
> pages reaches 3/4 of the total number of pages, a checkpoint is triggered),
> but sometimes it's not enough.
>
> In your case, 64Mb data-region is allocated. Page size is 16Kb, so you
> have a total of about 4000 pages (real page size in offheap is a little bit
> bigger than the configured page size). A checkpoint is triggered by the "too
> many dirty pages" event, so 3/4 of the pages are already dirty and only about
> 1000 pages are left to store metadata, which is too small. If the page size
> is 4kb the number of clean pages is 4000, so your reproducer can pass in some
> circumstances.
>
> Increase data region size to solve the problem.
>
>
> Tue, 16 Jun 2020 at 05:39, Raymond Wilson :
>
>> I have spent some more time on the reproducer. It is now very simple and
>> reliably reproduces the issue with a simple loop adding slowly growing
>> entries into a cache with no continuous query or filters. I have attached
>> the source files and the log I obtain when running it.
>>
>> Running from a clean slate (no existing persistent data) this reproducer
>> exhibits the out of memory error when adding an element 4150 bytes in size.
>>
>> I did find this SO article (
>> https://stackoverflow.com/questions/55937768/ignite-report-igniteoutofmemoryexception-out-of-memory-in-data-region)
>> that describes the same problem. The solution offered was to increase the
>> empty page pool size so it is larger than the biggest element being added.
>> The empty page pool size should therefore always have been bigger than the
>> largest element added in the reproducer, right up to the point of failure,
>> where 4150 bytes is the largest size added. I tried increasing it to 200,
>> but it made no difference.
>>
>> The reproducer is using a pagesize of 16384 bytes.
>>
>> If I set the page size to the default 4096 bytes this reproducer does not
>> show the error up to the size limit of 1 bytes the reproducer tests.
>> If I set the page size to 8192 bytes the reproducer does reliably fail
>> with the error at the item with 6941 bytes.
>>
>> This feels like a bug in handling non-default page sizes. Would you
>> recommend switching from 16384 bytes to 4096 for our page size? The reason
>> I opted for the larger size is that we may have elements ranging in size
>> from 100's of bytes to 100Kb, and sometimes larger.
>>
>> Thanks,
>> Raymond.
>>
>>
>> On Thu, Jun 11, 2020 at 4:25 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Just a correction to the context of the data region running out of memory.

Re: Out of memory error in data region with persistence enabled

2020-06-16 Thread Alex Plehanov
Raymond,

When a checkpoint is triggered you need to have some amount of free page
slots in offheap to save metadata (for example free-lists metadata,
partition counter gaps, etc). The number of required pages depends on count
of caches, count of partitions, workload, and count of CPUs. In worst
cases, you will need up to 256*caches*partitions*CPU count of pages only to
store free-list buckets metadata. This number of pages can't be calculated
statically, so the exact amount can't be reserved in advance. Currently,
1/4 of offheap memory is reserved for this purpose (when the amount of dirty
pages reaches 3/4 of the total number of pages, a checkpoint is triggered),
but sometimes it's not enough.

In your case, 64Mb data-region is allocated. Page size is 16Kb, so you have
a total of about 4000 pages (real page size in offheap is a little bit
bigger than the configured page size). A checkpoint is triggered by the "too
many dirty pages" event, so 3/4 of the pages are already dirty and only about
1000 pages are left to store metadata, which is too small. If the page size is
4kb the number of clean pages is 4000, so your reproducer can pass in some
circumstances.

Increase data region size to solve the problem.
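For reference, increasing the region size might look like the following Spring
XML fragment. The property names come from the public DataStorageConfiguration
and DataRegionConfiguration API; the region name and the 2 GiB figure are
assumptions to adapt, not a recommendation.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="Default"/>
                    <property name="persistenceEnabled" value="true"/>
                    <!-- e.g. 2 GiB instead of 64 MiB: more clean-page headroom
                         above the 3/4 dirty-page checkpoint trigger -->
                    <property name="maxSize" value="#{2L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```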


Tue, 16 Jun 2020 at 05:39, Raymond Wilson :

> I have spent some more time on the reproducer. It is now very simple and
> reliably reproduces the issue with a simple loop adding slowly growing
> entries into a cache with no continuous query or filters. I have attached
> the source files and the log I obtain when running it.
>
> Running from a clean slate (no existing persistent data) this reproducer
> exhibits the out of memory error when adding an element 4150 bytes in size.
>
> I did find this SO article (
> https://stackoverflow.com/questions/55937768/ignite-report-igniteoutofmemoryexception-out-of-memory-in-data-region)
> that describes the same problem. The solution offered was to increase the
> empty page pool size so it is larger than the biggest element being added.
> The empty page pool size should therefore always have been bigger than the
> largest element added in the reproducer, right up to the point of failure,
> where 4150 bytes is the largest size added. I tried increasing it to 200,
> but it made no difference.
>
> The reproducer is using a pagesize of 16384 bytes.
>
> If I set the page size to the default 4096 bytes this reproducer does not
> show the error up to the size limit of 1 bytes the reproducer tests.
> If I set the page size to 8192 bytes the reproducer does reliably fail
> with the error at the item with 6941 bytes.
>
> This feels like a bug in handling non-default page sizes. Would you
> recommend switching from 16384 bytes to 4096 for our page size? The reason
> I opted for the larger size is that we may have elements ranging in size
> from 100's of bytes to 100Kb, and sometimes larger.
>
> Thanks,
> Raymond.
>
>
> On Thu, Jun 11, 2020 at 4:25 PM Raymond Wilson 
> wrote:
>
>> Just a correction to context of the data region running out of memory:
>> This one does not have a queue of items or a continuous query operating on
>> a cache within it.
>>
>> Thanks,
>> Raymond.
>>
>> On Thu, Jun 11, 2020 at 4:12 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Pavel,
>>>
>>> I have run into a different instance of a memory out of error in a data
>>> region in a different context from the one I wrote the reproducer for. In
>>> this case, there is an activity which queues items for processing at a
>>> point in the future and which does use a continuous query, however there is
>>> also significant vanilla put/get activity against a range of other caches.
>>>
>>> This data region was permitted to grow to 1Gb and has persistence
>>> enabled. We are now using Ignite 2.8
>>>
>>> I would like to understand if this is a possible failure mode given that
>>> the data region has persistence enabled. The underlying cause appears to be
>>> 'Unable to find a page for eviction'. Should this be expected on data
>>> regions with persistence?
>>>
>>> I have included the error below.
>>>
>>> This is the initial error reported by Ignite:
>>>
>>> 2020-06-11 12:53:35,082 [98] ERR [ImmutableCacheComputeServer] JVM will
>>> be halted immediately due to the failure: [failureCtx=FailureContext
>>> [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException:
>>> Failed to find a page for eviction [segmentCapacity=13612, loaded=5417,
>>> maxDirtyPages=4063, dirtyPages=5417, cpPages=0, pinnedInSegment=0,
>>> failedToPrepare=5417]
>>> Out of memory in data region [name=Default-Immutable, initSize=128.0
>>> MiB, maxSize=1.0 GiB, persistenceEnabled=true] Try the following:
>>>   ^-- Increase maximum off-heap memory size
>>> (DataRegionConfiguration.maxSize)
>>>   ^-- Enable Ignite persistence
>>> (DataRegionConfiguration.persistenceEnabled)
>>>   ^-- Enable eviction or expiration policies]]
>>>
>>> Following this error is a lock dump, where this is the only thread with
>>> a lock:(I am assuming the structureId member with the value
>>> 

IgniteSpringBean IgniteCheckedException

2020-06-16 Thread kay bae
Hello, I start node using command

' sh ignite.sh ./config/config.xml &  '

It worked well before I added IgniteSpringBean.

After I add this code,



 
 
 
 
 
 



[2020-06-16T16:20:38,429][INFO ][main][G] Node started : [stage="Configure
system pool" (53 ms),stage="Start managers" (187 ms),stage="Configure
binary metadata" (41 ms),stage="Start processors" (623 ms),stage="Init
metastore" (9 ms),stage="Finish recovery" (0 ms),stage="Join topology" (170
ms),stage="Await transition" (14 ms),stage="Await exchange" (379
ms),stage="Total time" (1476 ms)]
 class org.apache.ignite.IgniteException: Failed to find configuration in:
file:./config/config.xml
 at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1067)
 at org.apache.ignite.Ignition.start(Ignition.java:349)
 at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
 Caused by: class org.apache.ignite.IgniteCheckedException: Failed to find
configuration in: file:./config/config.xml
 at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:116)
 at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
 at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:710)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:911)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
 at org.apache.ignite.Ignition.start(Ignition.java:346)
 ... 1 more
 Failed to start grid: Failed to find configuration in:
file:./config/config.xml



The node started and then failed.

Is there any setup required to use IgniteSpringBean?

Thank you
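For context, a hedged sketch of the kind of file ignite.sh expects: the
command-line startup (CommandLineStartup calling Ignition.start in the trace
above) looks for an IgniteConfiguration bean in the XML, whereas an
IgniteSpringBean is normally instantiated by a Spring application context
started from application code. The bean contents below are illustrative
assumptions, not the poster's actual configuration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- ignite.sh requires a bean of type IgniteConfiguration in this file. -->
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="igniteInstanceName" value="my-node"/>
    </bean>
</beans>
```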

