Re: expiry policy

2020-11-30 Thread akorensh
Hi,
  Expiry policies set the lifetime of an entry.
  If you configure your data region with a small max size and load more
data than the region can hold, most of your entries will be kept in
persistent storage.
 After configuring your data region, set the expiry policy as needed, and
your entries will be deleted when their lifetime ends.
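As a minimal sketch (the cache name and the 30-minute duration are made-up examples), an expiry policy can be attached to a cache in Spring XML using the standard JCache CreatedExpiryPolicy:

```xml
<!-- Sketch: cache whose entries expire 30 minutes after creation (names/values are examples) -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="expiryPolicyFactory">
        <bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
            <constructor-arg>
                <bean class="javax.cache.expiry.Duration">
                    <constructor-arg value="MINUTES"/>
                    <constructor-arg value="30"/>
                </bean>
            </constructor-arg>
        </bean>
    </property>
</bean>
```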

 see:
https://www.gridgain.com/docs/latest/developers-guide/memory-configuration/data-regions
 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DataRegionConfiguration.html#setMaxSize-long-
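For illustration, a deliberately small persistent data region could be configured like this (the region name and the 256 MB size are hypothetical):

```xml
<!-- Sketch: a small 256 MB persistent data region (values are examples) -->
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="smallRegion"/>
    <property name="maxSize" value="#{256L * 1024 * 1024}"/>
    <property name="persistenceEnabled" value="true"/>
</bean>
```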

 Use the corresponding messages in the log to determine how much data is in
memory and what percentage is committed to disk.


In the example below everything is in memory, because the region is large
enough to accommodate all entries:
^-- Off-heap memory [used=30MB, free=99.17%, allocated=3447MB]
^--   default region [type=default, persistence=true, lazyAlloc=true,
  ...  initCfg=256MB, maxCfg=3247MB, usedRam=30MB, freeRam=99.07%,
allocRam=3247MB, allocTotal=29MB]
^-- Ignite persistence [used=29MB]

To understand what happens when Ignite hits the limits of a data region,
see the "page replacement" section here:
https://ignite.apache.org/docs/latest/memory-configuration/eviction-policies


Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


expiry policy

2020-11-30 Thread narges saleh
Hi All,

I understand that the expiry policy applies to the off-heap cache as well
as to the persistent durable storage. Is there a way to apply different
expiry policies to the cache and to disk? I need to retain the data on disk
a lot longer than I can afford to in the cache.

thanks.


Re: Ignite cache TextQuery on JsonString data Field

2020-11-30 Thread siva
Hi,
Please find the cache configuration details given below.









Re: Unixodbc currently not working...

2020-11-30 Thread Ilya Kasnacheev
Hello!

There may be some issues with the ODBC driver, but it is generally working
and stable. I'm not sure why you would need ODBC_V3 specifically?

Regards,
-- 
Ilya Kasnacheev


Mon, 30 Nov 2020 at 14:48, Wolfgang Meyerle <
wolfgang.meye...@googlemail.com>:

> Quite simple. I'd like to execute SQL queries.
>
> As the thin client C++ interface which I'd like to use is not capable of
> executing SQL queries, I have to use unixODBC as a temporary workaround.
>
> There are some other issues that popped up in the unixodbc driver from
> Ignite.
>
> Boolean and Double values are currently causing issues.
> Whenever I have a table column storing the value 12.3456, for example,
> I'm getting 123456 back through the interface.
>
> Boolean values are also an issue, as the column's data type doesn't
> seem to be defined. I'm getting "-7" back, which is definitely wrong ;-)
>
> Regards,
>
> Wolfgang
>
>
> On 30.11.20 at 10:41 AM, Ilya Kasnacheev wrote:
> > Hello!
> >
> > Maybe the driver is not actually capable of ODBC_V3? Why do you need it?
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > Fri, 27 Nov 2020 at 19:15, Wolfgang Meyerle
> > <wolfgang.meye...@googlemail.com>:
> >
> > So,
> >
> > I uploaded a tiny demo project for my two issues:
> >
> > Issue 1 states that the ODBC interface reports that it is not capable
> > of the ODBC_V3 standard.
> >
> > Issue 2 is the linking problem I described, which occurs even if you
> > uncomment #LIBS += -lodbcinst in the .pro file of the Qt project.
> >
> > You can find everything here:
> > https://filebin.net/5fclxod62xi36gbb
> >
> > Regards,
> >
> > Wolfgang
> >
> > On 27.11.20 at 4:21 PM, Ilya Kasnacheev wrote:
> >  > Hello!
> >  >
> >  > The workaround for third-party tools is probably
> >  > LD_PRELOAD=/path/to/libodbcinst.so isql -foo -bar
> >  >
> >  > Regards,
> >  > --
> >  > Ilya Kasnacheev
> >  >
> >  >
> >  > Fri, 27 Nov 2020 at 18:18, Igor Sapego:
> >  >
> >  > Hi,
> >  >
> >  > Starting with your last question: it's Version 3.
> >  >
> >  > Now to the issue you are referring to. It definitely looks like a
> >  > bug to me. It's weird that no one has found it earlier. Looks like
> >  > no one uses SQLConnect? It is weird that we do not have a test for
> >  > that either. Anyway, I filed a ticket and am going to take a look
> >  > at it soon: [1]
> >  >
> >  > As a workaround you can try the solution suggested by Ilya. I
> >  > cannot provide a sound workaround for third-party tools like isql,
> >  > though.
> >  >
> >  > [1] - https://issues.apache.org/jira/browse/IGNITE-13771
> >  >
> >  > Best Regards,
> >  > Igor
> >  >
> >  >
> >  > On Fri, Nov 27, 2020 at 5:43 PM Ilya Kasnacheev
> >  > <ilya.kasnach...@gmail.com> wrote:
> >  >
> >  > Hello!
> >  >
> >  > You can link your own binary to libodbcinst, in which case the
> >  > linking problem should go away. Can you try that?
> >  >
> >  > Regards,
> >  > --
> >  > Ilya Kasnacheev
> >  >
> >  >
> >  > Fri, 27 Nov 2020 at 17:13, Wolfgang Meyerle:
> >  >
> >  > Hi,
> >  >
> >  > After spending several hours trying to get the unixODBC driver up
> >  > and running, I nearly gave up.
> >  >
> >  > However, together with the author of unixODBC I was able to find out
> >  > that the current ODBC driver in Apache Ignite is not doing what it's
> >  > supposed to do.
> >  >
> >  > As soon as I execute the command:
> >  > ret = SQLConnect(dbc, (SQLCHAR*)DSN, SQL_NTS, (SQLCHAR*)"", SQL_NTS,
> >  > (SQLCHAR*)"", SQL_NTS);
> >  >
> >  > I get a crash in my program stating that:
> >  > isql: symbol lookup error:
> > /usr/local/lib/libignite-odbc.so:
> >  >

Re: Ignite cache TextQuery on JsonString data Field

2020-11-30 Thread ibelyakov
Can you provide your CacheConfiguration?

Igor





Re: Native persistence enabled: Loading the same data again takes longer than initially

2020-11-30 Thread VincentCE
Hi Maxim!

We are generally using IgniteDataStreamers for cache loading. Regarding
disabling the WAL: we already thought about that for the initial load, but
our business flow requires several further loads, and at least during the
second load we cannot switch it off anyway. Otherwise, according to my
understanding, we would risk losing all data, e.g. due to a node restart,
if the WAL is turned off.





Re: SQLRowCount in unixodbc driver

2020-11-30 Thread ibelyakov
Hi Wolfgang,

According to the SQLRowCount function description, it is not supposed to
work for SELECT queries:
"SQLRowCount returns the number of rows affected by an UPDATE, INSERT, or
DELETE statement;"

More information can be found here:
https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlrowcount-function
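To illustrate the distinction, here is a pseudocode sketch of the ODBC calling pattern (it assumes an already-allocated statement handle `stmt` and is not a complete program):

```
/* SQLRowCount is meaningful after UPDATE/INSERT/DELETE: */
SQLExecDirect(stmt, "UPDATE mytable SET x = 1", SQL_NTS);
SQLRowCount(stmt, &affected);        /* number of updated rows */

/* For SELECT, count rows while fetching: */
SQLExecDirect(stmt, "SELECT * FROM mytable LIMIT 10", SQL_NTS);
while (SQLFetch(stmt) == SQL_SUCCESS)
    ++fetched;                       /* fetched rows counted here */
```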

Igor





Re: Native persistence enabled: Loading the same data again takes longer than initially

2020-11-30 Thread Maxim Muzafarov
Hello Vincent,


Which Ignite API are you using for the initial data loading? I always use
this manual page [1] for my local tests, disabling the WAL during the load
when Ignite native persistence is enabled.


[1] https://apacheignite.readme.io/docs/data-loading
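As a sketch, WAL control around a bulk load can also be expressed in Ignite SQL (the table name "Person" is hypothetical; the Java API exposes the equivalent IgniteCluster.disableWal/enableWal calls):

```sql
-- Sketch: suspend the WAL for the bulk load, then restore it ("Person" is an example)
ALTER TABLE Person NOLOGGING;
-- ... run the bulk load here (e.g. via an IgniteDataStreamer) ...
ALTER TABLE Person LOGGING;
```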

On Mon, 30 Nov 2020 at 16:55, VincentCE  wrote:
>
> Hi!
>
> I have observed that loading data into the Ignite cluster for the first
> time with native persistence enabled is usually a lot faster than
> subsequent loads of the same data: 30 min (initially) vs 52 min (4th
> time) for 170 GB of data.
>
> Does this indicate a bad configuration on our side, or is this expected
> behaviour? In fact we are quite happy with the initial loading speed of
> our data, but we will generally need to overwrite significant parts of it
> (which is the reason for my question). I have already tried to apply all
> the suggestions mentioned in
> https://apacheignite.readme.io/docs/durable-memory-tuning.
>
> We are using ignite 2.8.1 currently. Here is our data storage configuration:
>
> <property name="dataStorageConfiguration">
>   <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>     <property name="..." value="#{2 * 1000 * 1000 * 1000}"/>
>     <property name="..." value="#{10L * 1024 * 1024 * 1024}"/>
>     <property name="defaultDataRegionConfiguration">
>       <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>         <property name="persistenceEnabled" value="true"/>
>         <property name="checkpointPageBufferSize"
>                   value="#{2L * 1024 * 1024 * 1024}"/>
>         <property name="name" value="Default_Region"/>
>         <property name="..." value="${IGNITE_DEFAULT_REGION}"/>
>         <property name="..." value="${IGNITE_DEFAULT_REGION}"/>
>       </bean>
>     </property>
>   </bean>
> </property>
>
> Thanks in advance!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite cache TextQuery on JsonString data Field

2020-11-30 Thread siva
Hi,
I have modified it as suggested, but the cursor cache entry still contains no result.








Re: Ignite cache TextQuery on JsonString data Field

2020-11-30 Thread ibelyakov
Hi,

It seems like you should use "Person" as the type for the TextQuery
instead of "CommonConstruction", here:
var textquery = new TextQuery(typeof(CommonConstruction), "Mobile");

Igor






Native persistence enabled: Loading the same data again takes longer than initially

2020-11-30 Thread VincentCE
Hi!

I have observed that loading data into the Ignite cluster for the first
time with native persistence enabled is usually a lot faster than
subsequent loads of the same data: 30 min (initially) vs 52 min (4th time)
for 170 GB of data.

Does this indicate a bad configuration on our side, or is this expected
behaviour? In fact we are quite happy with the initial loading speed of our
data, but we will generally need to overwrite significant parts of it
(which is the reason for my question). I have already tried to apply all
the suggestions mentioned in
https://apacheignite.readme.io/docs/durable-memory-tuning.

We are currently using Ignite 2.8.1. Here is our data storage configuration:


Thanks in advance!





Ignite cache TextQuery on JsonString data Field

2020-11-30 Thread siva
Hi All,
I am using Apache Ignite 2.7.6 with .NET Core, and I have a .NET
client/server app. The cache is created using a QueryEntity
CacheConfiguration, and data is loaded into the Ignite cache using the
DataStreamer AddData(key, model) method.
Model class:



where the Payload field contains a JSON string as data.
For example, Payload data:



Code:


The cursor cache entry contains no result.

How do I get data in the result, and what do I need to change?

Thanks.






SQLRowCount in unixodbc driver

2020-11-30 Thread Wolfgang Meyerle

Hi,

another issue popped up.

SQLRowCount gives me back 0 when I execute the following query:

select * from mytable limit 10;


The column count is correct. The row count is not, as the API does fetch
10 rows.

I honestly think, from what I've seen so far, that the current ODBC driver
implementation in Apache Ignite is rather basic.

I will try to amend the things that I need, if I'm able to dig myself
through the code, and see if I can get to a reasonable understanding of
why it behaves this way.


Regards,


Wolfgang


RE: Failing to cluster correctly

2020-11-30 Thread RENDLE, ANDY (Insurance Finance Transformation Portfolio)
Classification: Public

Thanks for the update.

Is there a Spring Ignite example available to demonstrate your suggested 
techniques?

Thanks

Andy Rendle
Hadoop Technical Architect
Insurance Finance Transformation Portfolio | Group Transformation

M: 07973 878454  | E: 
andy.ren...@lloydsbanking.com
A: Lloyds Banking Group, Harbourside, 10 Canons Way, Bristol, BS1 5LF
MM: 0203 770  CP: 79299155# PP: 77395220#


From: Ilya Kasnacheev 
Sent: 27 November 2020 10:02
To: user@ignite.apache.org
Subject: Re: Failing to cluster correctly

-- This email has reached the Bank via an external source --

Hello!

It seems that you are sending compute tasks from one node to another with
the kafkaEventProcessor field set. However, you can't really send a Kafka
instance to a different node that way. You need to remove this field or
mark it as transient, and instead inject a local Kafka instance on the
remote node before doing the computations. Perhaps replace the
kafkaEventProcessor field with a static kafkaEventProcessor() accessor.
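To illustrate the suggestion above, here is a pseudocode sketch (the class and method names are hypothetical, not taken from the attached code):

```java
// Pseudocode sketch: keep the Kafka resource out of the serialized closure.
class ScanTask implements IgniteCallable<Integer> {
    // transient: the Kafka processor is NOT shipped to the remote node
    private transient KafkaEventProcessor kafka;

    @Override public Integer call() {
        // obtain a node-local instance on the executing node instead
        kafka = KafkaEventProcessor.localInstance(); // hypothetical accessor
        ... // do the work and publish to Kafka locally
    }
}
```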

Regards,
--
Ilya Kasnacheev


Tue, 24 Nov 2020 at 22:50, RENDLE, ANDY (Insurance Finance Transformation
Portfolio) <andy.ren...@lloydsbanking.com>:

Classification: Public

All

We have developed a Spring Ignite Kafka producer application, utilising
Ignite's caches and failover capabilities.

This runs perfectly in standalone mode, but when configured with another
host we get many serialisation errors. We have obviously made some
fundamental mistake; can anyone give us a clue as to where to look?

java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.x86_64
Ignite v2.7.6 & v2.9.0
spring-boot 2.0.6.RELEASE

Most of our processes are invoked like this:
ignite.compute().withExecutor(SCANNER_POOL).callAsync(IgniteCallable)

It seems to be serializing many classes that are not expected, even when
both nodes have exactly the same deployment. We have many @Autowired
variables, but all are correct and working in standalone mode. In
clustered mode we end up with the huge exceptions in the attached file:

Many thanks in advance,

Andy Rendle

