Re: With as syntax does not work in ignite 2.9.0?

2020-12-09 Thread Ilya Kasnacheev
Hello!

I don't think we can help you further without looking at some SQL
statements. Alternatively, you could try a commercial support provider that you trust.

What's the connection string for your driver? Does it point to a server
node or a client node?
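
For reference, a hedged sketch of a thin-driver URL that points straight at a
server node ("server-host" is a placeholder; 10800 is the default client
connector port):

import java.sql.Connection;
import java.sql.DriverManager;

// "server-host" is a placeholder for your server node's address
Connection conn = DriverManager.getConnection("jdbc:ignite:thin://server-host:10800");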

Regards,
-- 
Ilya Kasnacheev


Tue, Dec 8, 2020 at 09:17, yangjiajun <1371549...@qq.com>:

> Hello. Thanks for your reply.
>
> Sorry, I can't post the SQL statements because they contain sensitive
> business info. I'm also sorry that I can't provide a reproducer right now.
>
> Here is my code to execute query:
> ResultSet executeQuery(Connection conn, String sql) throws SQLException {
> Statement stmt = conn.createStatement();
> stmt.setQueryTimeout(7200);
> ResultSet rs = stmt.executeQuery(sql);
> rs.setFetchSize(15000);
> return rs;
> }
>
> void closeResultSet(ResultSet rs) throws SQLException {
> rs.close();
> rs.getStatement().close();
> rs.getStatement().getConnection().close();
> }
>
> try (Connection conn = pool.getConnection()) {
> try (ResultSet rs = executeQuery(conn, sql)) {
> /**
>  handle result set
> */
> closeResultSet(rs);
> }
> }
>
> My HikariCP pool settings:
> HikariDataSource ds = new HikariDataSource();
> ds.setDriverClassName("org.apache.ignite.IgniteJdbcThinDriver");
> ds.setMaximumPoolSize(100);
> ds.setLeakDetectionThreshold(60);
> ds.setConnectionTimeout(6);
> ds.setMinimumIdle(0);
> ds.setReadOnly(true);
>
> The reason I believe the WITH ... AS syntax no longer works is that I have
> 5 SQL statements that ran into trouble, and all of them contain WITH ... AS.
> Another reason is that the error happens at the parse step, and the error
> info refers to a 'temp table', which I think WITH ... AS requires.
>
> But you guys are the experts; I will do my best to follow your advice and
> collect any further info you need.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: With as syntax does not work in ignite 2.9.0?

2020-12-07 Thread Ilya Kasnacheev
Hello!

Do you happen to run this query on a client or a server node? Try running it
on a server node for a change. Clients initialize caches lazily and may
yield a crop of NPEs.

Regards,
-- 
Ilya Kasnacheev


Mon, Dec 7, 2020 at 05:40, yangjiajun <1371549...@qq.com>:

> Hello.
>
> We use 'WITH xxx AS (SELECT xxx)', which works very well in 2.8.1 and
> other past release versions. After we upgraded to 2.9.0, such SQL
> statements started to throw exceptions. On the server side, the error
> looks like:
>
> , args=Object[] [], stmtType=SELECT_STATEMENT_TYPE, autoCommit=true,
> partResReq=false, super=JdbcRequest [type=2, reqId=790418]]]
> class org.apache.ignite.internal.processors.query.IgniteSQLException:
> Failed
> to parse query. General error: "java.lang.NullPointerException" [5-197]
> at
>
> org.apache.ignite.internal.processors.query.h2.H2Connection.prepareStatementNoCache(H2Connection.java:194)
> at
>
> org.apache.ignite.internal.processors.query.h2.H2PooledConnection.prepareStatementNoCache(H2PooledConnection.java:109)
> at
>
> org.apache.ignite.internal.processors.query.h2.QueryParser.parseH2(QueryParser.java:355)
> at
>
> org.apache.ignite.internal.processors.query.h2.QueryParser.parse0(QueryParser.java:222)
> at
>
> org.apache.ignite.internal.processors.query.h2.QueryParser.parse(QueryParser.java:138)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1071)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2779)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2775)
> at
>
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3338)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$2(GridQueryProcessor.java:2795)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2833)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2769)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2727)
> at
>
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:647)
> at
>
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.doHandle(JdbcRequestHandler.java:320)
> at
>
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:257)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:202)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:56)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at
>
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
>
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.h2.jdbc.JdbcSQLException: General error:
> "java.lang.NullPointerException" [5-197]
> at
> org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
> at org.h2.message.DbException.get(DbException.java:168)
> at org.h2.message.DbException.convert(DbException.java:307)
> at org.h2.message.DbException.toSQLException(DbException.java:280)
> at org.h2.message.TraceObject.logAndConvert(TraceObject.java:357)
> at
> org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:697)
> at
>
> org.apache.ignite.internal.processors.query.h2.H2Connection.prepareStatementNoCache(H2Connection.java:191)
> ... 26 more
> Caused by: java.lang.NullPointerException

Re: Using different versions of serializable objects

2020-12-07 Thread Ilya Kasnacheev
Hello!

You can try specifying a BinaryConfiguration in which you supply a
BinaryTypeConfiguration for this type name with a custom
BinarySerializer. There you can try supplying a BinaryReflectiveSerializer.

See
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/BinaryConfiguration.html#setTypeConfigurations-java.util.Collection-
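
For illustration, a minimal sketch of that wiring (the type name is a
hypothetical placeholder):

import java.util.Collections;
import org.apache.ignite.binary.BinaryReflectiveSerializer;
import org.apache.ignite.binary.BinaryTypeConfiguration;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// hypothetical type name standing in for the third-party class
BinaryTypeConfiguration typeCfg = new BinaryTypeConfiguration("com.example.ThirdPartyClass");
typeCfg.setSerializer(new BinaryReflectiveSerializer());

BinaryConfiguration binaryCfg = new BinaryConfiguration();
binaryCfg.setTypeConfigurations(Collections.singleton(typeCfg));

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setBinaryConfiguration(binaryCfg);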

Regards,
-- 
Ilya Kasnacheev


Wed, Dec 2, 2020 at 19:00, Surkov.Aleksandr :

> No, because we use third-party classes and cannot change them.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Using different versions of serializable objects

2020-12-02 Thread Ilya Kasnacheev
Hello!

Does it solve your issue?

Regards,
-- 
Ilya Kasnacheev


Wed, Dec 2, 2020 at 16:50, Surkov.Aleksandr :

> If I delete only the methods (leaving the interface), then there are no errors.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Using different versions of serializable objects

2020-12-02 Thread Ilya Kasnacheev
Hello!

Does this still happen if you get rid of Serializable, readObject,
writeObject?

Regards,
-- 
Ilya Kasnacheev


Wed, Dec 2, 2020 at 09:58, Surkov.Aleksandr :

> Hi igniters!
>
> We use third-party classes and store their objects in a key-value cache.
> The class implements the Serializable interface and defines the readObject
> and writeObject methods.
>
> After we decided to switch to a new version of the class and tried to read
> from the cache, we received an error:
>
> Exception in thread "main" javax.cache.CacheException: class
> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object with
> optimized marshaller
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1317)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:2066)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1093)
> at
>
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:676)
> at
> com.client.SerializableTest.main(SerializableTest.java:27)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
> unmarshal object with optimized marshaller
> at
> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7510)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:191)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4972)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.repairableGet(GridCacheAdapter.java:4931)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1486)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1090)
> ... 2 more
> Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
> unmarshal object with optimized marshaller
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1785)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.unmarshal(BinaryUtils.java:1991)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.unmarshal(BinaryUtils.java:1816)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.unmarshal(BinaryUtils.java:1807)
> at
>
> org.apache.ignite.internal.binary.GridBinaryMarshaller.unmarshal(GridBinaryMarshaller.java:268)
> at
>
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.unmarshal(CacheObjectBinaryProcessorImpl.java:1100)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectImpl.value(CacheObjectImpl.java:89)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:176)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:67)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:136)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1808)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1796)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.setResult(GridPartitionedSingleGetFuture.java:747)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onResult(GridPartitionedSingleGetFuture.java:624)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:374)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1400(GridDhtAtomicCache.java:141)
> at
>
> org.apache.ignite.internal.processors.cache.distri

Re: Why does TextQuery failed to find result

2020-12-02 Thread Ilya Kasnacheev
Hello!

This answer seems relevant: https://stackoverflow.com/a/65069605/36498

Regards,
-- 
Ilya Kasnacheev


Tue, Dec 1, 2020 at 20:23, siva :

> Hi All,
> I have a .NET ClientServerIgnitePersistenceApp (v2.7.6) that creates Ignite
> caches and loads data using a DataStreamer. For TextQuery I am using a thick
> client; here are the cache configuration and model class details.
> *Model class:* The Person model class contains fields marked as both
> QuerySqlField and QueryTextField.
>
>
>
>
> *TextQuery search sample code:*
>
>
> *Search Text:*
> The Person model class Payload property contains a JSON string.
>
>
> It doesn't return a result. How do I get text search results from the cache?
> Please let me know if any further details are needed.
> Thanks.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failing to cluster correctly

2020-12-02 Thread Ilya Kasnacheev
Hello!

I'm not sure. If you're using Spring you can also annotate such fields with
@SpringResource.

Regards,
-- 
Ilya Kasnacheev


Mon, Nov 30, 2020 at 13:35, RENDLE, ANDY (Insurance Finance
Transformation Portfolio) :

> Classification: Public
>
>
>
> Thanks for the update.
>
>
>
> Is there a Spring Ignite example available to demonstrate your suggested
> techniques?
>
>
>
> Thanks
>
>
>
> *Andy Rendle*
>
> Hadoop Technical Architect
>
> *Insurance Finance Transformation Portfolio | Group Transformation*
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* 27 November 2020 10:02
> *To:* user@ignite.apache.org
> *Subject:* Re: Failing to cluster correctly
>
>
>
>
>
> Hello!
>
>
>
> It seems that you are sending compute tasks from one node to another
> with the kafkaEventProcessor field set. However, you can't really send a Kafka
> instance to a different node that way. You need to remove this field or mark
> it as transient, and instead inject a local Kafka instance on the remote node
> before doing computations. Maybe replace the kafkaEventProcessor field with a
> static kafkaEventProcessor() accessor.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
Tue, Nov 24, 2020 at 22:50, RENDLE, ANDY (Insurance Finance
Transformation Portfolio) :
>
> Classification: Public
>
>
>
> All
>
>
>
> We have developed a Spring Ignite Kafka producer application, utilising
> Ignites caches and failover capabilities.
>
>
>
> This runs perfectly in standalone mode but when configured with another
> host we get many serialisation errors. We have obviously made some
> fundamental mistake, can anyone give us a clue as to where to look?
>
>
>
> java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.x86_64
>
> Ignite v2.7.6 & v2.9.0
>
> spring-boot 2.0.6.RELEASE
>
>
>
> Most of our processes are invoked like this:
>
> ignite.compute().withExecutor(SCANNER_POOL).callAsync(IgniteCallable)
>
>
>
> It seems to serialize many classes that are not expected, even when both
> nodes have exactly the same deployment. We have many @Autowired variables,
> but all are correct and working in standalone mode. In clustered mode we
> end up with huge exceptions, shown in the attached file:
>
>
>
> Many thanks in advance,
>
>
>
> *Andy Rendle*
>
>
>

Re: Unixodbc currently not working...

2020-11-30 Thread Ilya Kasnacheev
Hello!

There may be some issues with the ODBC driver, but it is generally working
and stable. I'm not sure why you would need ODBC_V3 specifically?

Regards,
-- 
Ilya Kasnacheev


Mon, Nov 30, 2020 at 14:48, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> Quite simple. I'd like to execute SQL queries.
>
> As the thin client C++ interface which I'd like to use is not capable of
> executing SQL queries, I have to use unixODBC as a temporary workaround.
>
> There are some other issues that popped up in the unixodbc driver from
> Ignite.
>
> Boolean and Double values are currently causing issues.
> Whenever I have a table column storing the value 12.3456, for example, I'm
> getting 123456 back through the interface.
>
> Boolean values are also an issue, as the table column data type doesn't
> seem to be defined. I'm getting "-7" back, which is definitely wrong ;-)
>
> Regards,
>
> Wolfgang
>
>
> On 30.11.20 at 10:41 AM, Ilya Kasnacheev wrote:
> > Hello!
> >
> > Maybe the driver is not actually capable of ODBC_V3? Why do you need it?
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > Fri, Nov 27, 2020 at 19:15, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:
> >
> > So,
> >
> > I uploaded a tiny demo project for my two issues:
> >
> > Issue1 states that the ODBC interface is reporting that it's not
> > capable of the ODBC_V3 standard.
> >
> > Issue2 is the one I described where I get linking problems, even if
> > you uncomment #LIBS += -lodbcinst in the .pro file of the Qt project.
> >
> > You can find everything here:
> > https://filebin.net/5fclxod62xi36gbb
> >
> > Regards,
> >
> >     Wolfgang
> >
> > On 27.11.20 at 4:21 PM, Ilya Kasnacheev wrote:
> >  > Hello!
> >  >
> >  > The workaround for third-party tools is probably
> >  > LD_PRELOAD=/path/to/libodbcinst.so isql -foo -bar
> >  >
> >  > Regards,
> >  > --
> >  > Ilya Kasnacheev
> >  >
> >  >
> >  > Fri, Nov 27, 2020 at 18:18, Igor Sapego <isap...@apache.org>:
> >  >
> >  > Hi,
> >  >
> >  > Starting from your last question, it's Version3.
> >  >
> >  > Now to the issue you are referring to. It definitely looks like a
> >  > bug to me. It's weird that no one has found it earlier. Looks like
> >  > no one uses SQLConnect? It is weird that we do not have a test for
> >  > that either. Anyway, I filed a ticket and am going to take a look
> >  > at it soon: [1]
> >  >
> >  > As a workaround you can try the solution suggested by Ilya. I
> >  > cannot provide a sound workaround for third-party tools like isql,
> >  > though.
> >  >
> >  > [1] - https://issues.apache.org/jira/browse/IGNITE-13771
> >  >
> >  > Best Regards,
> >  > Igor
> >  >
> >  >
> >  > On Fri, Nov 27, 2020 at 5:43 PM Ilya Kasnacheev <ilya.kasnach...@gmail.com> wrote:
> >  >
> >  > Hello!
> >  >
> >  > You can link your own binary to libodbcinst, in which case the
> >  > linking problem should go away. Can you try that?
> >  >
> >  > Regards,
> >  > --
> >  > Ilya Kasnacheev
> >  >
> >  >
> >  > Fri, Nov 27, 2020 at 17:13, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:
> > 

Re: Unixodbc currently not working...

2020-11-27 Thread Ilya Kasnacheev
Hello!

The workaround for third-party tools is probably
LD_PRELOAD=/path/to/libodbcinst.so
isql -foo -bar

Regards,
-- 
Ilya Kasnacheev


Fri, Nov 27, 2020 at 18:18, Igor Sapego :

> Hi,
>
> Starting from your last question, it's Version3.
>
> Now to the issue you are referring to. It definitely looks like a bug to
> me. It's weird that no one has found it earlier. Looks like no one uses
> SQLConnect? It is weird that we do not have a test for that either. Anyway,
> I filed a ticket and am going to take a look at it soon: [1]
>
> As a workaround you can try the solution suggested by Ilya. I cannot
> provide a sound workaround for third-party tools like isql, though.
>
> [1] - https://issues.apache.org/jira/browse/IGNITE-13771
>
> Best Regards,
> Igor
>
>
> On Fri, Nov 27, 2020 at 5:43 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> You can link your own binary to libodbcinst, in which case the linking
>> problem should go away. Can you try that?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, Nov 27, 2020 at 17:13, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:
>>
>>> Hi,
>>>
>>> after spending several hours trying to get the unixODBC driver up and
>>> running, I nearly gave up.
>>>
>>> However, together with the author of unixODBC, I was able to find out that
>>> the current ODBC driver in Apache Ignite is not doing what it's
>>> supposed to do.
>>>
>>> As soon as I execute the command:
>>> et = SQLConnect(dbc, (SQLCHAR*)DSN, SQL_NTS, (SQLCHAR*)"", SQL_NTS,
>>> (SQLCHAR*)"", SQL_NTS);
>>>
>>> I get a crash in my program stating that:
>>> isql: symbol lookup error: /usr/local/lib/libignite-odbc.so: undefined
>>> symbol: SQLGetPrivateProfileString
>>>
>>> According to the author of unixODBC, this is a function which is called
>>> to find out where to connect to by opening the /etc/odbc.ini file and
>>> looking for the DSN provided by the parameter.
>>>
>>>
>>> I compiled the Apache Ignite odbc connector exactly as stated in the
>>> manual. However an ldd on the /usr/local/lib/libignite-odbc.so does not
>>> show me a dependency on the odbcinst.so as stated by the author.
>>>
>>> So it seems that the configure script for the compilation is broken
>>> somehow.
>>>
>>> I installed unixodbc-dev on my ubuntu box so that shouldn't be the
>>> problem.
>>>
>>> Digging down into the cmake script it seems that it also correctly
>>> detects the installed unixodbc-dev installation.
>>>
>>> But the dependency to the odbcinst.so is missing.
>>>
>>>
>>> Hopefully someone can help.
>>>
>>> In the meantime I'm using the SQLDriverConnect routine which is not
>>> dependent on SQLGetPrivateProfileString. That works, but it's just a
>>> dirty workaround and shouldn't be the final solution.
>>>
>>> Which ODBC Version is implemented in the code?
>>>
>>> Version2 or Version3?
>>>
>>> Regards,
>>>
>>> Wolfgang
>>>
>>>
>>>
>>>


Re: Unixodbc currently not working...

2020-11-27 Thread Ilya Kasnacheev
Hello!

You can link your own binary to libodbcinst, in which case the linking
problem should go away. Can you try that?

Regards,
-- 
Ilya Kasnacheev


Fri, Nov 27, 2020 at 17:13, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> Hi,
>
> after spending several hours trying to get the unixODBC driver up and
> running, I nearly gave up.
>
> However, together with the author of unixODBC, I was able to find out that
> the current ODBC driver in Apache Ignite is not doing what it's
> supposed to do.
>
> As soon as I execute the command:
> et = SQLConnect(dbc, (SQLCHAR*)DSN, SQL_NTS, (SQLCHAR*)"", SQL_NTS,
> (SQLCHAR*)"", SQL_NTS);
>
> I get a crash in my program stating that:
> isql: symbol lookup error: /usr/local/lib/libignite-odbc.so: undefined
> symbol: SQLGetPrivateProfileString
>
> According to the author of unixODBC, this is a function which is called
> to find out where to connect to by opening the /etc/odbc.ini file and
> looking for the DSN provided by the parameter.
>
>
> I compiled the Apache Ignite odbc connector exactly as stated in the
> manual. However an ldd on the /usr/local/lib/libignite-odbc.so does not
> show me a dependency on the odbcinst.so as stated by the author.
>
> So it seems that the configure script for the compilation is broken
> somehow.
>
> I installed unixodbc-dev on my ubuntu box so that shouldn't be the problem.
>
> Digging down into the cmake script it seems that it also correctly
> detects the installed unixodbc-dev installation.
>
> But the dependency to the odbcinst.so is missing.
>
>
> Hopefully someone can help.
>
> In the meantime I'm using the SQLDriverConnect routine which is not
> dependent on SQLGetPrivateProfileString. That works, but it's just a
> dirty workaround and shouldn't be the final solution.
>
> Which ODBC Version is implemented in the code?
>
> Version2 or Version3?
>
> Regards,
>
> Wolfgang
>
>
>
>


Re: Ignite persistence: Data of node lost that got excluded from baseline topology

2020-11-27 Thread Ilya Kasnacheev
Hello!

1. I guess it's possible to activate a cluster manually when not all
baseline nodes are in. Later you can join your node back to the baseline,
hopefully. Please note that there's still a chance that old data will be
deleted from the node upon join: we can only re-use data from an old
partition when we're confident that it's identical to the new data of the
same partition (inactive cluster) or when we can apply historical
rebalancing (apply changes from the WAL to the same partition). Historical
rebalance is usually only feasible when there are few differences between
the old and new partition. If the difference is significant, it's easier
to rebalance the whole partition.

It may seem strange that we throw a lot of data out, but we don't have any
obvious way to tell which of that data is current and which is not. So it's
useless in the end.

2. Yes, it is recommended to always have a consistentId in a persistent
cluster. Otherwise the node will just claim the first available data dir as
its own.
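
For example, a minimal sketch (the id value is a placeholder, borrowing the
naming from your question):

import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setConsistentId("type1-0"); // placeholder; pins the node to its own persistence directory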

Regards,
-- 
Ilya Kasnacheev


Mon, Nov 23, 2020 at 11:17, VincentCE :

> Hi!
>
> In our project we are currently using Ignite 2.8.1 without native
> persistence enabled. Now we would like to enable this feature to prevent
> data loss during node restarts.
>
> Important: We use AttributeNodeFilters to separate our data, e.g data of
> type1 only lives in the type1-clustergroup and so on.
>
> I have two question regarding *native persistence*:
>
> 1. After some time we would like to shut down the type1 nodes to save
> resources (but possibly use them again in the future if business requires
> it, so we would like the data to remain there safely persisted). However,
> in order to start up the cluster again, the type1 nodes need to be
> excluded from the baseline topology. But if later on we want to reuse the
> type1 nodes, their data gets deleted as soon as they rejoin the baseline
> topology. Is there a way to prevent this?
>
> 2. When using AttributeNodeFilter it seems that we need to use fixed
> consistentIds (e.g. consistentId = "type1-0") since otherwise a node of
> type2 would potentially use the directory of a type1 node in the
> persistence
> storage directory which finally would break the data separation when new
> data is loaded into the caches.
>
> Thanks in advance!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failing to cluster correctly

2020-11-27 Thread Ilya Kasnacheev
Hello!

It seems that you are sending compute tasks from one node to another
with the kafkaEventProcessor field set. However, you can't really send a Kafka
instance to a different node that way. You need to remove this field or mark
it as transient, and instead inject a local Kafka instance on the remote node
before doing computations. Maybe replace the kafkaEventProcessor field with a
static kafkaEventProcessor() accessor.
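
For illustration, a hedged sketch of that approach. KafkaEventProcessor and
the holder lookup are hypothetical names based on this thread, not a real API:

import org.apache.ignite.lang.IgniteCallable;

class ScanTask implements IgniteCallable<Void> {
    // transient: the processor is NOT serialized and shipped with the task
    private transient KafkaEventProcessor kafkaEventProcessor;

    @Override public Void call() {
        // re-acquire a node-local instance on the remote side (hypothetical lookup)
        kafkaEventProcessor = KafkaEventProcessorHolder.get();
        // ... do the work against the node-local processor
        return null;
    }
}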

Regards,
-- 
Ilya Kasnacheev


Tue, Nov 24, 2020 at 22:50, RENDLE, ANDY (Insurance Finance
Transformation Portfolio) :

> Classification: Public
>
> All
>
>
>
> We have developed a Spring Ignite Kafka producer application, utilising
> Ignites caches and failover capabilities.
>
>
>
> This runs perfectly in standalone mode but when configured with another
> host we get many serialisation errors. We have obviously made some
> fundamental mistake, can anyone give us a clue as to where to look?
>
>
>
> java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.x86_64
>
> Ignite v2.7.6 & v2.9.0
>
> spring-boot 2.0.6.RELEASE
>
>
>
> Most of our processes are invoked like this:
>
> ignite.compute().withExecutor(SCANNER_POOL).callAsync(IgniteCallable)
>
>
>
> It seems to serialize many classes that are not expected, even when both
> nodes have exactly the same deployment. We have many @Autowired variables,
> but all are correct and working in standalone mode. In clustered mode we
> end up with huge exceptions, shown in the attached file:
>
>
>
> Many thanks in advance,
>
>
>
> *Andy Rendle*
>
>
>
>


Re: cpp unixodbc connection

2020-11-27 Thread Ilya Kasnacheev
Hello!

Can you please provide a complete reproducer, with main() and all that
stuff?

You can check out the modules/platforms/cpp/odbc-test for working examples.

Regards,
-- 
Ilya Kasnacheev


Thu, Nov 26, 2020 at 21:48, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> The code I'm using at the moment is here...
> https://filebin.ca/5ihkjcOx8nid
>
>
> On 26.11.20 at 7:41 PM, Wolfgang Meyerle wrote:
> > Hi,
> >
> > I've been trying to connect to Apache Ignite via the ODBC driver the
> > whole afternoon and now I'm stuck.
> >
> > The compilation and the installation in the platform/cpp directory went
> > fine. Unixodbc driver was successfully installed according to the
> readmes.
> >
> > I also double checked dependency libraries of the ignite odbc so files
> > and everything seems to be ok so far.
> >
> > When I run isql on the command line I cannot connect to Apache Ignite.
> >
> > If I use iusql on the command line with the DSN configured in
> > /etc/odbc.ini I can perform a connection and can access Ignites database.
> >
> > However, the C++ code does not work, I have no clue why, and the error
> > message provides no useful hints.
> >
> > According to the manual, the Ignite driver on Linux uses Unicode and
> > does not support ANSI, or am I wrong?
> >
> > Can anybody provide me a short cpp example how to setup a connection
> > with cpp using the ODBC driver?
> >
> > Regards,
> >
> > Wolfgang
>


Re: Crashes when running Apache Ignite as a sever node together with cpp code

2020-11-26 Thread Ilya Kasnacheev
Hello!

I think there should be some kind of guide on how to debug JVM apps. I
would start by adding the -Xint JVM arg to avoid JIT, which should make
work easier for both the debugger and the VM.

Regards,
-- 
Ilya Kasnacheev


Wed, Nov 25, 2020 at 09:11, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> Hi,
>
> I tried to run Apache Ignite, as given by the example from the website, as
> a server node attached to my C++ program.
>
> Basically it works fine, but debugging is not possible anymore, which is a
> show stopper. The application crashes every time the program is
> debugged in Qt.
>
> Any suggestions?
>
> Regards,
>
> Wolfgang
>


Re: Thin Client connection not working...

2020-11-25 Thread Ilya Kasnacheev
Hello!

I'm pretty sure I had an in-memory node active at the same time.

Regards,
-- 
Ilya Kasnacheev


Wed, Nov 25, 2020 at 09:05, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> Seems like you forgot to start up your Ignite server node.
>
> try this:
>
> ./ignite.sh
>
> after the node has started, activate it with
>
> ./control.sh --activate
>
>
> I ran into the same issue...
>
>
> Regards,
>
> Wolfgang
>
> On 24.11.20 at 4:41 PM, Ilya Kasnacheev wrote:
>


Re: Thin Client connection not working...

2020-11-24 Thread Ilya Kasnacheev
Hello!

I have just tried compiling and running this program, and it ran without
any errors (Ubuntu 20.02)

Regards,
-- 
Ilya Kasnacheev


Tue, Nov 17, 2020 at 22:28, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> Hi,
>
> sorry for the late reply but it was getting too late yesterday evening...
>
> Below you can find the code from the Apache Ignite Website that I tried
> without successfully getting a connection to the Ignite Server Node...
>
> Just to mention. I had both (the thin client and the server) running on
> the same machine.
> According to the xml file (which I can also attach if asked) I
> configured the port to 10800.
>
> nc localhost 10800 works so I assume the server is listening and the
> problem is the client code from the Apache Ignite Website or my
> configuration...
>
>
>
> #include <ignite/thin/ignite_client_configuration.h>
> #include <ignite/thin/ignite_client.h>
>
> using namespace ignite::thin;
>
> void TestClient()
> {
>  IgniteClientConfiguration cfg;
>
>  //Endpoints list format is "[port[..range]][,...]"
>  cfg.SetEndPoints("localhost:10800");
>
>
>  IgniteClient client = IgniteClient::Start(cfg);
>
>  cache::CacheClient<int32_t, std::string> cacheClient =
>  client.GetOrCreateCache<int32_t, std::string>("TestCache");
>
>  cacheClient.Put(42, "Hello Ignite Thin Client!");
> }
>
> int main(int argc, char** argv) {
>  TestClient();
> }
>
>
> terminate called after throwing an instance of 'ignite::IgniteError'
>what():  Failed to establish connection with any host.
> 22:18:50: The program has unexpectedly finished.
>
>
>
> On 16.11.20 at 1:50 PM, wolfgang.meye...@googlemail.com wrote:
> > I can post a detailed error message tonight.
> >
> > Sent from Nine <http://www.9folders.com/>
> > 
> > *From:* Stephen Darlington
> > *Sent:* Monday, 16 November 2020 12:10
> > *To:* user
> > *Subject:* Re: Thin Client connection not working...
> >
> > Doesn’t work how? Doesn’t compile? Doesn’t connect? Doesn’t create the
> > cache? Is there an error?
> >
> >  > On 16 Nov 2020, at 09:57, Wolfgang Meyerle
> >  wrote:
> >  >
> >  > Hi,
> >  >
> >  > I tried using the cpp thin client example from the Apache ignite site
> > to create a small thin client connection example to one of the running
> > Apache Ignite cluster nodes.
> >  >
> >  > However it doesn't work and I'm out of a clue.
> >  >
> >  > I added the following bean to my persistence configuration file:
> >  >
> >  > <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
> >  >   <property name="clientConnectorConfiguration">
> >  >     <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
> >  >       <property name="port" value="10800"/>
> >  >     </bean>
> >  >   </property>
> >  > </bean>
> >  >
> >  >
> >  >
> >  > I restarted the cluster node without any issues.
> >  > A netcat localhost 10800 is able to start a connection to the node so
> > I assume the problem is on the cpp code side.
> >  >
> >  > I used the code sample from the website, but modified the port:
> >  >
> >  > #include <ignite/thin/ignite_client_configuration.h>
> >  > #include <ignite/thin/ignite_client.h>
> >  >
> >  > using namespace ignite::thin;
> >  >
> >  > int main(int argc, char**argv)
> >  > {
> >  >IgniteClientConfiguration cfg;
> >  >
> >  >//Endpoints list format is "[port[..range]][,...]"
> >  >cfg.SetEndPoints("127.0.0.1:10800");
> >  >
> >  >IgniteClient client = IgniteClient::Start(cfg);
> >  >
> >  >cache::CacheClient<int32_t, std::string> cacheClient =
> >  >client.GetOrCreateCache<int32_t, std::string>("TestCache");
> >  >
> >  >cacheClient.Put(42, "Hello Ignite Thin Client!");
> >  >
> >  >return 0;
> >  > }
> >  >
> >  >
> >  > So what's wrong here?
> >  >
> >  >
> >  > Regards,
> >  >
> >  > Wolfgang
> >
> >
>


Re: Out-of-memory issue on single node cache with durable persistence

2020-11-24 Thread Ilya Kasnacheev
Hello!

It seems that you have run out of available memory. I.e., your operating
system could not allocate more memory even though the demand was still in
the range permitted by data region configuration. How much RAM do you have
on that machine?

That you still have heap left is irrelevant here, since the allocation is
for non-heap memory.
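
If it helps, a hedged sketch of how the off-heap region cap is configured
(the size is a placeholder; the actual fix may simply be more RAM or a
smaller maxSize, leaving headroom for heap, WAL buffers and the OS):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataRegionConfiguration region = new DataRegionConfiguration();
region.setName("default_region");
region.setMaxSize(4L * 1024 * 1024 * 1024); // 4 GB placeholder cap
region.setPersistenceEnabled(true);

DataStorageConfiguration storage = new DataStorageConfiguration();
storage.setDefaultDataRegionConfiguration(region);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storage);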

Regards,
-- 
Ilya Kasnacheev


Mon, Nov 23, 2020 at 21:44, Scott Prater :

> Hello,
>
> I recently ran into an out-of-memory error on a durable persistent cache I
> set up a few weeks ago.  I have a single node, with durable persistence
> enabled, as well as WAL archiving.  I'm running Ignite ver.
> 2.8.1#20200521-sha1:86422096.
>
> I looked at the stack trace, but I couldn't get a clear fix on what part
> of the system ran out of memory, or what parameters I should change to fix
> the problem.  From what I could tell of the stack dump, it looks like the
> WAL archive ran out of memory;  but the memory usage report that occurred
> just a minute before the exception showed plenty of memory was available.
>
> Can someone with more experience tuning Ignite memory point me towards the
> configuration parameters I should adjust?  Below are my log and my
> configuration.  ( I have read the wiki page on memory tuning, but I'm happy
> to be referred back to it.)
>
> The log, with the metrics right before the OOM exception, then the OOM
> exception:
>
> [2020-11-22T19:20:39,787][INFO ][grid-timeout-worker-#22][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=2845fe3e, uptime=5 days, 15:08:38.033]
> ^-- Cluster [hosts=1, CPUs=4, servers=1, clients=0, topVer=1,
> minorTopVer=1]
> ^-- Network [addrs=[0:0:0:0:0:0:0:1%lo, xxx.xxx.xxx.xxx, 127.0.0.1,
> yyy.yyy.yyy.yyy], discoPort=47500, commPort=47100]
> ^-- CPU [CPUs=4, curLoad=0.33%, avgLoad=0.29%, GC=0%]
> ^-- Heap [used=316MB, free=62.34%, comm=812MB]
> ^-- Off-heap memory [used=4288MB, free=33.45%, allocated=6344MB]
> ^-- Page memory [pages=1085139]
> ^--   sysMemPlc region [type=internal, persistence=true,
> lazyAlloc=false,
>   ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.99%,
> allocRam=100MB, allocTotal=0MB]
> ^--   default_region region [type=default, persistence=true,
> lazyAlloc=true,
>   ...  initCfg=256MB, maxCfg=6144MB, usedRam=4288MB, freeRam=30.2%,
> allocRam=6144MB, allocTotal=4240MB]
> ^--   metastoreMemPlc region [type=internal, persistence=true,
> lazyAlloc=false,
>   ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.94%,
> allocRam=0MB, allocTotal=0MB]
> ^--   TxLog region [type=internal, persistence=true, lazyAlloc=false,
>   ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
> allocRam=100MB, allocTotal=0MB]
> ^-- Ignite persistence [used=4240MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> [2020-11-22T19:21:15,585][ERROR][db-checkpoint-thread-#63][] Critical
> system error detected. Will be handled accordingly to configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR,
> err=java.lang.OutOfMemoryError]]
> java.lang.OutOfMemoryError: null
> at sun.misc.Unsafe.allocateMemory(Native Method) ~[?:1.8.0_121]
> at
> org.apache.ignite.internal.util.GridUnsafe.allocateMemory(GridUnsafe.java:1205)
> ~[ignite-core-2.9.0.jar:2.9.0]
> at
> org.apache.ignite.internal.util.GridUnsafe.allocateBuffer(GridUnsafe.java:264)
> ~[ignite-core-2.9.0.jar:2.9.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferExpander.<init>(ByteBufferExpander.java:36)
> ~[ignite-core-2.9.0.jar:2.9.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.<init>(AbstractWalRecordsIterator.java:125)
> ~[ignite-core-2.9.0.jar:2.9.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$RecordsIterator.<init>(FileWriteAheadLogManager.java:2701)
> ~[ignite-core-2.9.0.jar:2.9.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$RecordsIterator.<init>(FileWriteAheadLogManager.java:2637)
> ~[ignite-core-2.9.0.jar:2.9.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.replay(FileWriteAheadLogManager.java:944)
> ~[ignite-core-2.9.0.jar:2.9.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWr

Re: [2.8.1]Checking optimistic transaction state on remote nodes

2020-11-23 Thread Ilya Kasnacheev
Hello!

You can set the default concurrency mode and isolation level for
transactions by specifying them in TransactionConfiguration. Otherwise you
are correct.
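
For illustration, a minimal sketch of setting those defaults on the node
configuration:

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.TransactionConfiguration;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

TransactionConfiguration txCfg = new TransactionConfiguration();
txCfg.setDefaultTxConcurrency(TransactionConcurrency.OPTIMISTIC);
txCfg.setDefaultTxIsolation(TransactionIsolation.READ_COMMITTED);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setTransactionConfiguration(txCfg);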

Regards,
-- 
Ilya Kasnacheev


Mon, Nov 23, 2020 at 14:49, 38797715 <38797...@qq.com>:

> Hi Ilya,
>
> Then let me confirm again: according to the log message, an optimistic
> transaction with READ_COMMITTED is used for single data operations on a
> transactional cache?
>
> If transactions are explicitly started, the default concurrency model
> and isolation level are PESSIMISTIC and REPEATABLE_READ?
> On 2020/11/20 at 7:50 PM, Ilya Kasnacheev wrote:
>
> Hello!
>
> It will happen when the node has left but the transaction has to be
> committed.
>
> Most operations on transactional cache will involve implicit transactions
> so there you go.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Nov 19, 2020 at 16:46, 38797715 <38797...@qq.com>:
>
>> Hi community,
>>
>> Although the cache is transactional, no transaction operations are
>> performed, yet the log contains a lot of output like the below. Why?
>>
>> [2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
>> Checking optimistic transaction state on remote nodes [tx=GridDhtTxLocal
>> [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
>> nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
>> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
>> nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
>> nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
>> [nearOnOriginatingNode=false, nearNodes=KeySetView [],
>> dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
>> 3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
>> super=IgniteTxLocalAdapter [completedBase=null,
>> sndTransformedVals=false, depEnabled=false,
>> txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
>> useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
>> [topVer=216485010, order=1607062856849, nodeOrder=1],
>> writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
>> nodeOrder=1], implicit=true, loc=true, threadId=24070,
>> startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
>> startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
>> nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
>> concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
>> plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
>> state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
>> [topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false,
>> parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
>> fut=GridCacheTxRecoveryFuture [trackable=true,
>> futId=81c3b7af571-1093b7fe-20ae-4c3f-9adb-4ecac23c136e,
>> tx=GridDhtTxLocal [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
>> nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
>> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
>> nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
>> nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
>> [nearOnOriginatingNode=false, nearNodes=KeySetView [],
>> dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
>> 3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
>> super=IgniteTxLocalAdapter [completedBase=null,
>> sndTransformedVals=false, depEnabled=false,
>> txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
>> useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
>> [topVer=216485010, order=1607062856849, nodeOrder=1],
>> writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
>> nodeOrder=1], implicit=true, loc=true, threadId=24070,
>> startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
>> startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
>> nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
>> concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
>> plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
>> state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
>> [topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false,
>> parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
>> failedNodeIds=SingletonSet [a7eded9b-4078-4ee5-a1dd-426b8debc203],
>> nearTxCheck=false, innerFuts=EmptyList [],
>> super=GridCompoundIdentityFuture [super=GridCompoundFuture [rdc=Bool
>> reducer: true, initFlag=0, lsnrCalls=0, done=false, cancelled=false,
>> err=null, futs=EmptyList []
>> [2020-

Re: IgniteSecurity vs GridSecurityProcessor

2020-11-23 Thread Ilya Kasnacheev
Hello!

Please refer to this specific ticket:
https://issues.apache.org/jira/browse/IGNITE-9560

As well as this Javadoc of the new class:

/**
 * Ignite Security Processor.
 *
 * The differences between {@code IgniteSecurity} and {@code GridSecurityProcessor} are:
 *
 * - {@code IgniteSecurity} allows defining a current security context via the
 *   {@link #withContext(SecurityContext)} or {@link #withContext(UUID)} methods.
 * - {@code IgniteSecurity} doesn't require passing a {@code SecurityContext} to authorize operations.
 * - {@code IgniteSecurity} doesn't extend the {@code GridProcessor} interface;
 *   consequently it doesn't have any of the lifecycle methods of {@code GridProcessor}.
 */


Regards,
-- 
Ilya Kasnacheev


Fri, Nov 20, 2020 at 19:26, Vishwas Bm :

> Hi,
>
> We were using 2.7.6 and had implemented a custom security plugin for
> authorization and authentication by implementing GridSecurityProcessor.
>
> Now in 2.9 we see that a new interface, IgniteSecurity, is provided.
> May I know the difference between the interfaces, as both look similar,
> and which is the appropriate place to implement?
>
> Also, in 2.7.6 there was a class called SecurityContextHolder to hold the
> context.
> Now in 2.9 we do not see that class; instead we see a class
> OperationSecurityContext.
> How do we use this new class when using a custom security plugin?
>
>
>
> Regards,
> Vishwas
>


Re: [2.8.1]Checking optimistic transaction state on remote nodes

2020-11-20 Thread Ilya Kasnacheev
Hello!

It will happen when the node has left but the transaction has to be
committed.

Most operations on transactional cache will involve implicit transactions
so there you go.

Regards,
-- 
Ilya Kasnacheev


Thu, Nov 19, 2020 at 16:46, 38797715 <38797...@qq.com>:

> Hi community,
>
> Although the cache is transactional, no transaction operations are
> performed, yet the log contains a lot of output like the below. Why?
>
> [2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
> Checking optimistic transaction state on remote nodes [tx=GridDhtTxLocal
> [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
> nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
> nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
> nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
> [nearOnOriginatingNode=false, nearNodes=KeySetView [],
> dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
> 3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
> super=IgniteTxLocalAdapter [completedBase=null,
> sndTransformedVals=false, depEnabled=false,
> txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
> useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
> [topVer=216485010, order=1607062856849, nodeOrder=1],
> writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
> nodeOrder=1], implicit=true, loc=true, threadId=24070,
> startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
> startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
> nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
> concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
> plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
> state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
> [topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false,
> parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
> fut=GridCacheTxRecoveryFuture [trackable=true,
> futId=81c3b7af571-1093b7fe-20ae-4c3f-9adb-4ecac23c136e,
> tx=GridDhtTxLocal [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
> nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
> nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
> nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
> [nearOnOriginatingNode=false, nearNodes=KeySetView [],
> dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
> 3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
> super=IgniteTxLocalAdapter [completedBase=null,
> sndTransformedVals=false, depEnabled=false,
> txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
> useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
> [topVer=216485010, order=1607062856849, nodeOrder=1],
> writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
> nodeOrder=1], implicit=true, loc=true, threadId=24070,
> startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
> startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
> nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
> concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
> plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
> state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
> [topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false,
> parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
> failedNodeIds=SingletonSet [a7eded9b-4078-4ee5-a1dd-426b8debc203],
> nearTxCheck=false, innerFuts=EmptyList [],
> super=GridCompoundIdentityFuture [super=GridCompoundFuture [rdc=Bool
> reducer: true, initFlag=0, lsnrCalls=0, done=false, cancelled=false,
> err=null, futs=EmptyList []
> [2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
> Finishing prepared transaction [commit=true, tx=GridDhtTxLocal
> [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
> nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
> nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
> nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
> [nearOnOriginatingNode=false, nearNodes=KeySetView [],
> dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
> 3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
> super=IgniteTxLocalAdapter [completedBase=null,
> sndTransformedVals=false, depEnabled=false,
> txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
> useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
> [topVer=216485010, order=1607062856849, nodeOrder=1],
> writeV

Re: Configuration Files and storage location

2020-11-20 Thread Ilya Kasnacheev
Hello!

Too many unrelated issues are wrapped together here.

First, you do not need to have an identical config for all nodes in the
cluster. Paths may surely differ. Some properties need to be the same, some
don't.

Second, you may change a node's consistentId to specify which persistence
files to use, even if the nodes' configs are otherwise identical. If
consistentId is not used, then one node may capture the data dir and the
second one will not be able to use it and will create one from scratch.

Last, nodes may be added to the cluster at any time, but you should make
sure the baseline topology is fixed as well. Nodes are not added to or
removed from the baseline topology unless you do that manually. You may
also configure auto-adjust.
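
For example, a hedged sketch of enabling baseline auto-adjust
programmatically (the timeout value is a placeholder):

// assuming an already-started Ignite instance named "ignite"
ignite.cluster().baselineAutoAdjustEnabled(true);
ignite.cluster().baselineAutoAdjustTimeout(60_000); // adjust after 60 s of stable topology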

Regards,
-- 
Ilya Kasnacheev


Fri, Nov 20, 2020 at 13:08, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> Hi community,
>
> I have a question regarding the XML configuration which is used in
> clients / servers and thin clients.
>
> Is it really necessary that each node uses the same configuration?
> I'm just asking because I'm trying to set up multiple servers on one
> machine, which is total nonsense for a production deployment but
> worthwhile in a debugging environment to find out how Ignite acts in
> certain use cases.
>
> I basically started two server configurations: one from a C++ code
> environment and the other from the usual ./ignite.sh bash script
> provided with the release.
>
> The ignite.sh environment was started first and somehow seems to block the
> C++ code from inserting data into the persistent storage, as I cannot see
> my log output, only that the node has started up.
>
> After I started the ignite.sh script I activated the server node with
> control.sh --activate
>
> Is that maybe the problem: new nodes are no longer accepted to join
> the topology?
>
> Why do I have to activate or deactivate the nodes in the first place
> anyway? In my mind the servers should be shut down when the last
> node leaves the topology.
>
> Regards,
>
> Wolfgang
>


Re: Ignite Visor cmd cannot connect...

2020-11-20 Thread Ilya Kasnacheev
Hello!

I think the C++ node enforces some binary configuration, so you will
have to repeat the exact same one in the XML file for Visor CMD:
https://ignite.apache.org/docs/latest/cpp-specific/cpp-platform-interoperability

I recommend using control.sh or GGCC for Apache Ignite in place of Visor
CMD where possible, since Visor CMD is not actively maintained.

Regards,
-- 
Ilya Kasnacheev


Fri, Nov 20, 2020 at 14:08, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> Hi,
>
> I tried using
>
> ignitevisorcmd.sh -cfg=
>
> however I'm getting the following error message:
> Local node's binary configuration is not equal to remote node's binary
> configuration
>
>
> It further states:
> localBinaryCfg=null
> rmtBinaryCfg={globIdMapper ...
>
> I'm using the exact same xml configuration file.
>
> However, for some stupid reason, the visorcmd cannot connect.
> I'm using cpp code which is instantiating the server node.
> The node seems to be running as I can use a thin client to connect to it
> and put and get values...
>
> Why?
>
> Regards,
>
> Wolfgang
>


Re: Javamelody starting an Apache Ignite server node!

2020-11-18 Thread Ilya Kasnacheev
Hello!

Have you seen https://github.com/javamelody/javamelody/issues/858 ?

Regards,
-- 
Ilya Kasnacheev


ср, 18 нояб. 2020 г. в 22:14, David Tinker :

> https://github.com/javamelody/javamelody/issues/962
>
> "I am using the Apache Ignite thin client (
> https://ignite.apache.org/docs/latest/thin-clients/java-thin-client).
> Somehow after my app has been running for a few minutes an Ignite server
> node suddenly starts up! This happens on a "javamelody" thread."
>
> Any ideas?
>
> This is one of the more WTF things I have seen lately! :)
>


Re: Native persistence - No space left on device error

2020-11-18 Thread Ilya Kasnacheev
Hello!

When you run out of disk space, the Ignite node will stop functioning
normally, and your underlying PDS storage may become corrupted as well.

Thus it is recommended to avoid running out of disk space. This is by no
means a normal mode of operation of Apache Ignite.
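
If the goal is to keep the data set bounded, an expiry policy is the right
tool, since (unlike on-heap eviction) it also removes entries from the
persistent store. A minimal sketch, with a made-up cache name and TTL:

CacheConfiguration<Integer, byte[]> ccfg = new CacheConfiguration<>("myCache");
ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_DAY)); // javax.cache.expiry
ccfg.setEagerTtl(true); // expired entries are purged in the background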

Regards,
-- 
Ilya Kasnacheev


ср, 18 нояб. 2020 г. в 18:29, facundo.maldonado :

> What is the expected behavior of native persistence when there is no more
> space on disk?
>
> I'm getting this error when I reach the max size of the mounted volume
> (storage, not wal):
> *class org.apache.ignite.IgniteCheckedException: No space left on device*
>
> I assumed that when there is no more space for allocating pages, it
> removes the older ones. I may be wrong, confusing this with expiry.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: not in, not exists sql result is not correct

2020-11-18 Thread Ilya Kasnacheev
Hello!

This looks like a query where the result is only correct if the data in
orders and accounts is collocated.

If they are not, you may check it by setting the distributedJoins=true
connection property and re-running the query.

Please see
https://ignite.apache.org/docs/latest/data-modeling/affinity-collocation
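
For example, with the thin JDBC driver (the address, cache and sql variables
are placeholders):

Connection conn = DriverManager.getConnection(
    "jdbc:ignite:thin://127.0.0.1?distributedJoins=true");
// or, via the cache API:
cache.query(new SqlFieldsQuery(sql).setDistributedJoins(true));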

Regards,
-- 
Ilya Kasnacheev


ср, 18 нояб. 2020 г. в 06:03, marble.zh...@coinflex.com <
marble.zh...@coinflex.com>:

> Hi,
>
> We are using ignite 2.8.1, with this sql statement, SELECT * FROM orders p
> WHERE  NOT exists (SELECT accountid FROM account b WHERE b.accountid =
> p.accountid);
>
> The returned result contains accountids that actually exist in table
> account; changing to NOT IN is also not correct. But with another table the
> result is correct.
>
> Are there any suggestions? Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Caches, Regions, Expiry, Eviction

2020-11-16 Thread Ilya Kasnacheev
Hello!

1. There's no such thing as a pure on-heap cache. On-heap is an extra option
for a cache, which is always off-heap. So you need to set both.
2. Eviction policy is applicable to the on-heap part of a cache only. Expiry
policy will remove data from both off-heap and PDS. There's also page
eviction, which changes its meaning when you add persistence.
3. Yes, but it will not remove data from off-heap, so the data will still be
in the cache.
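
To illustrate points 1 and 3, a rough sketch (the cache name and size are
arbitrary):

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setOnheapCacheEnabled(true); // the off-heap storage stays; on-heap is enabled on top of it
ccfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000)); // caps the on-heap part only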

Regards,
-- 
Ilya Kasnacheev


пн, 16 нояб. 2020 г. в 15:55, narges saleh :

> Hi All,
>
> I need confirmation for my understanding of some basic concepts.
> These are my understanding. Please confirm.
> 1) Regions are not applicable to on heap caches. I'd use JVM -Xms and
> -Xmx to set the limits, while with off heap caches, I'd use regions  with
> initial/max size set.
> 2) When I define regions with persistence enabled, I'd define
> expiryPolicy, not evictionPolicy.
> 3) With on heap caches, I'd define the evictionPolicy directly applied to
> the cache.
>
> thanks
>
>
>


Re: partition-exchanger system-critical thread blocked

2020-11-16 Thread Ilya Kasnacheev
Hello!

For the first 4 thread dumps, the problem was in establishing a communication
connection to one of the nodes:

[2020-11-09T02:31:41,105][WARN
][tcp-comm-worker-#1%EDIFCustomerCC%][TcpCommunicationSpi] Connect timed
out (consider increasing 'failureDetectionTimeout' configuration property)
[addr=/kub4.101:47100, failureDetectionTimeout=6]
[2020-11-09T02:31:41,105][WARN
][tcp-comm-worker-#1%EDIFCustomerCC%][TcpCommunicationSpi] Failed to
connect to a remote node (make sure that destination node is alive and
operating system firewall is disabled on local and remote hosts)
[addrs=[/kub4.101:47100, /127.0.0.1:47100]]


For the last 2, it's not obvious what happens there. This may be an
occurrence of https://issues.apache.org/jira/browse/IGNITE-13540

Regards.
-- 
Ilya Kasnacheev


пт, 13 нояб. 2020 г. в 23:06, Gangaiah Gundeboina :

> Hi Ilya,
>
> Please find attached entire log file.
>
> Regards,
> Gangaiah
>
> 18.zip <http://apache-ignite-users.70518.x6.nabble.com/file/t2396/18.zip>
>
>
>
>
> -
> Thanks and Regards,
> Gangaiah
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Adding fields without adding to sql table DDL

2020-11-16 Thread Ilya Kasnacheev
Hello!

I have no idea, and I think it depends on the StreamReceiver/allowOverwrite.
Please try it and see.

Regards,
-- 
Ilya Kasnacheev


пт, 13 нояб. 2020 г. в 18:06, ssansoy :

> Last question would this work with datastreamer? e.g. adding a field
> inside the transformation?
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Adding fields without adding to sql table DDL

2020-11-13 Thread Ilya Kasnacheev
Hello!

No, it would not appear in SELECT results or in the table schema.

The only way of adding a field to table schema is by invoking ALTER TABLE
ADD COLUMN.

Regards,
-- 
Ilya Kasnacheev


пт, 13 нояб. 2020 г. в 17:37, ssansoy :

> Thanks for the tip!
> Is there documentation anywhere about how this would appear in a select?
> would the new field be added to the table schema as well? (we don't want
> this)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Adding fields without adding to sql table DDL

2020-11-13 Thread Ilya Kasnacheev
Hello!

I think you have just discovered that you can't add new fields inside an entry
processor. Try using a regular put for this. Then you can also use these
fields inside an entry processor.
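
A minimal sketch of the regular-put approach with binary builders (the cache,
key and field names are made up):

IgniteCache<Integer, BinaryObject> cache = ignite.<Integer, BinaryObject>cache("MYTABLE").withKeepBinary();
BinaryObject oldVal = cache.get(key);
BinaryObject newVal = oldVal.toBuilder()
    .setField("previous_name", oldVal.field("name")) // hidden field, not part of the SQL schema
    .build();
cache.put(key, newVal); // updating binary metadata is allowed here, unlike inside invoke()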

Regards,
-- 
Ilya Kasnacheev


пт, 13 нояб. 2020 г. в 17:21, ssansoy :

> Hi, we define our caches via a create table statement, and specify our
> various columns/fields.
>
> We would like to add some additional fields, that are not exposed as part
> of
> the DDL, so not visible in a select statement. Is this possible?
>
> If I try and get a BinaryObjectBuilder for my type, and add a field using
> setField, this doesn't seem to work. I do this outside of a transaction,
> and
> then invoke an entry processor which sets a value for this field on an
> existing entry in the cache. This gives me the following exception:
>
> org.apache.ignite.internal.UnregisteredBinaryTypeException: Attempted to
> update binary metadata inside a critical synchronization block (will be
> automatically retried).
>
> Is there any way around this? I have a need to store information for each
> field about what the previous value of the field was (e.g. each column
> specified in my create table statement needs to have a duplicate column
> called previous_). I'd rather not add this to the sql table as
> it would make selects very confusing to the end users as this meta data is
> only required for internal processing.
>
> Thanks!
> Sham
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Possible memory leak in Ignite 2.9

2020-11-13 Thread Ilya Kasnacheev
Hello!

I think this is expected. Ignite needs to convert an entry (an 800MB array)
into packets to send to other nodes or put to persistence, where it will
allocate another array (800MB more).

I don't think you can do anything here, short of chunking your data.
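
A very rough sketch of manual chunking (the helper and key scheme are made up;
uses java.util.Arrays and java.nio.ByteBuffer):

static void putChunked(IgniteCache<String, byte[]> cache, String key, byte[] data, int chunkSize) {
    int chunks = (data.length + chunkSize - 1) / chunkSize;                 // number of pieces
    cache.put(key + ".count", ByteBuffer.allocate(4).putInt(chunks).array()); // chunk-count marker
    for (int i = 0; i < chunks; i++) {
        int from = i * chunkSize;
        byte[] chunk = Arrays.copyOfRange(data, from, Math.min(from + chunkSize, data.length));
        cache.put(key + "." + i, chunk); // each chunk is a normal, reasonably-sized entry
    }
}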

Regards,
-- 
Ilya Kasnacheev


пт, 13 нояб. 2020 г. в 17:04, Kalin Katev :

> Hi,
>
> I am sorry for responding so late. I can only send screenshots of the heap
> dump as seen in visualvm. I took screenshots of 2 different byte arrays,
> one is 800mb large, the other 1gb. Every cached value in ignite (which
> itself is 800mb) creates a tuple of these arrays. I hope this helps, or you
> can maybe give me an idea which references are crucial, as there are too
> many for me to investigate.
>
> Thank you!
>
> Kalin Katev
> Resonance GmbH
>
> Am Di., 3. Nov. 2020 um 15:56 Uhr schrieb Ilya Kasnacheev <
> ilya.kasnach...@gmail.com>:
>
>> Hello!
>>
>> An 800MB entry is far above the entry size that we ever expected to see.
>> Even briefly holding these entries on heap will cause problems for you, as
>> will sending them over communication.
>>
>> I recommend splitting entries into chunks, maybe. That's what IGFS did
>> basically, we decided to ax it, but you can still use that approach.
>>
>> Having said that, if you can check the heap dump to see where are these
>> stuck byte arrays referenced from, I may check it.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> вт, 3 нояб. 2020 г. в 11:48, Kalin Katev :
>>
>>> Hi,
>>>
>>> not too long ago I tested apache ignite for my use case on Open JDK 11.
>>> The use case consists of writing cache entries with values going up to
>>> 800MB in size, the data itself being a simple string. After writing 5
>>> caches entries, 800 MB each, I noticed my Heap space exploding up to 11GB,
>>> while the entries themselves were written off-heap. The histogram of the
>>> heap dump shows that there are 5 tuples of byte[] arrays with size 800MB
>>> and 1000MB that are left dangling on heap.  I am very curious if I did
>>> something wrong or if there indeed is an issue in Ignite. All details can
>>> be seen on
>>> https://stackoverflow.com/questions/64550479/possible-memory-leak-in-apache-ignite
>>>
>>> Should I create a jira ticket for this issue?
>>>
>>> Best regards,
>>> Kalin Katev
>>>
>>> Resonance GmbH
>>>
>>


Re: SQL and Key Value usage in C++

2020-11-13 Thread Ilya Kasnacheev
Hello!

The key_type and value_type are the names of the classes which may be put and
got in this cache once you configure serialization properly.

You can use INSERT/Put and SELECT/Get in any combination.

Please see
https://ignite.apache.org/docs/latest/cpp-specific/cpp-serialization as an
example. You also need to switch to the simple name mapper for platform
interoperability (this is done in ignite.xml).
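
For illustration, a rough Java sketch of the same idea (the C++ API is
analogous; Foo and Bar are hypothetical classes whose field names match the
columns above):

class Foo { double a, b, c; Foo(double a, double b, double c) { this.a = a; this.b = b; this.c = c; } }
class Bar { boolean res; Bar(boolean res) { this.res = res; } }

IgniteCache<Foo, Bar> cache = ignite.cache("Test");
cache.put(new Foo(1, 2, 3), new Bar(true)); // key-value write...
System.out.println(cache.query(
    new SqlFieldsQuery("SELECT res FROM Test WHERE a = ?").setArgs(1.0)).getAll()); // ...visible to SQL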

Regards,
-- 
Ilya Kasnacheev


пт, 13 нояб. 2020 г. в 15:57, Wolfgang Meyerle <
wolfgang.meye...@googlemail.com>:

> Hi,
>
> I have a question where I'm currently struggling to find the answer in
> the Ignite Documentation and hopefully somebody of you can guide me in
> the right direction.
>
> According to the Ignite DDL Documentation of SQL I'm able upon Table
> creation to provide a cache name and I have a keyname and valuename
> property which I can set with for example:
>
> CREATE TABLE Test (a double, b double, c double, res boolean, primary
> key (a,b,c)) with "CACHE_NAME=Test, Key_type=FOO, Value_type=bar";
>
> Can somebody explain to me what key_type and value_type are
> used for?
>
> My intention is to use in code the key value cache principle in c++ and
> later on for complex queries standard sql.
>
> My hope is that the cpp key-value based approach on the one hand looks
> prettier in code than daft long sql statements, and on the other performs
> better.
>
> Does anybody have experience in this matter?
>
> How can I make a table SQL-accessible in Ignite by using the cpp api
> without SQL statements?
>
> How would I use this in code? Are there some samples available that I
> can study?
>
> Regards,
>
> Wolfgang
>


Re: Proper shutdown from C++ environment

2020-11-13 Thread Ilya Kasnacheev
Hello!

I assume that JVM will install handlers of its own, which will trigger Java
shutdown procedures and Ignite node will be stopped.

Regards,
-- 
Ilya Kasnacheev


пт, 13 нояб. 2020 г. в 16:42, Lieuwe :

> You would only get SIGINT if you run your application in a shell and ctrl-c
> it, right? Can you not 'wait' on a key press and stop ignite properly
> instead?
>
> I looked at the same issue recently, and I wouldn't expect ignite to do
> anything that isn't documented - even if it does it now, it may not do it in
> the next release. I run mine using systemctl, and a signal handler that
> does
>
> ignite::Ignition::StopAll(false);
>
> works well.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IgniteC++ throughput

2020-11-13 Thread Ilya Kasnacheev
Hello!

Can you please provide all rows of "EXPLAIN SELECT ... WHERE A=?"
query?
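
For reference, the plan can be obtained from Java (or any SQL tool) like this;
the cache variable and the argument are placeholders:

List<List<?>> plan = cache.query(
    new SqlFieldsQuery("EXPLAIN SELECT A, B, C, D FROM MyCache WHERE A = ?").setArgs(42)).getAll();
plan.forEach(row -> System.out.println(row.get(0)));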

Regards,
-- 
Ilya Kasnacheev


чт, 12 нояб. 2020 г. в 20:04, Lieuwe :

> I wonder if anyone can shed some light on the Apache Ignite performance I
> am
> seeing.
>
> I am running a single node & have a very simple CacheConfiguration
> consisting of 4 fields.
>
> The program is very much like the put-get-example code shipped with Ignite
> &
> I am doing a few tests to see how fast (how many transactions per second) I
> can read & write data to the cache.
>
> 1: Just incrementing the key and doing ignite::cache::Cache::Put(key,
> dataObject) I can push 100K entries in the cache at about 12K TPS
>
> 2: Doing the same for ignite::cache::Cache::Get(key) yields 150K TPS
>
> 3: I then use a ignite::cache::query::SqlFieldsQuery &
> ignite::cache::query::QueryFieldsCursor to do "SELECT A, B, C, D FROM
> MyCache WHERE _key = ?"
> Only doing cursor.isValid() && cursor.hasNext() yields 26K TPS
>
> 4: The last test I do is as above, but instead of the where clause being
> '_key = ?' .. I change this to 'A=?'. In other words I use one of the
> fields
> as a select criteria. I only get a shocking 20 TPS.
>
> Having an index on field A makes no difference. The size of the cache does
> -
> when I reduce that to a handful of entries that last rate will go up to
> about 2K TPS.
>
>
> My questions:
> - There seems to be a big difference between Put & Get .. is that normal?
> - There is also big difference between scenario 2 & 3 whilst they are
> essentially doing the same thing .. why is SQL having so much overhead? And
> example 3 doesn't even parse the columns out of the cursor whereas example
> 2
> gives me all 4 columns for the key.
> - And most importantly - why the shocking performance in scenario 4?
>
> Thanks
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: partition-exchanger system-critical thread blocked

2020-11-13 Thread Ilya Kasnacheev
Hello!

We need to see all the lines after the following (but before the next
Thread):

Line 41312: [2020-11-09T06:44:13,606][WARN
][tcp-disco-msg-worker-#2%EDIFCustomerCC%][G] Thread
[name="exchange-worker-#344%EDIFCustomerCC%", id=391, state=TIMED_WAITING,
blockCnt=16, waitCnt=2469710]
Line 43302: Thread
[name="exchange-worker-#344%EDIFCustomerCC%", id=391, state=RUNNABLE,
blockCnt=16, waitCnt=2469710]
Line 47326: [2020-11-09T10:55:18,888][WARN
][sys-stripe-118-#119%EDIFCustomerCC%][G] Thread
[name="exchange-worker-#344%EDIFCustomerCC%", id=391, state=TIMED_WAITING,
blockCnt=16, waitCnt=2469961]
Line 49473: Thread
[name="exchange-worker-#344%EDIFCustomerCC%", id=391, state=TIMED_WAITING,
blockCnt=16, waitCnt=2469961]

Regards,
-- 
Ilya Kasnacheev


чт, 12 нояб. 2020 г. в 19:57, Gangaiah Gundeboina :

> HI Ilya Kasnacheev,
>
> Below are the log entries with the thread name 'partition-exchanger':
>
> 
>
> Line 41311:
>
> [2020-11-09T06:44:13,605][ERROR][tcp-disco-msg-worker-#2%EDIFCustomerCC%][G]
> Blocked system-critical thread has been detected. This can lead to
> cluster-wide undefined behaviour [threadName=partition-exchanger,
> blockedFor=60s]
> Line 41315:
> [2020-11-09T06:44:13,606][ERROR][tcp-disco-msg-worker-#2%EDIFCustomerCC%][]
> Critical system error detected. Will be handled accordingly to configured
> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
> [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
> [name=partition-exchanger, igniteInstanceName=EDIFCustomerCC,
> finished=false, heartbeatTs=1604884393601]]]
> Line 41316: org.apache.ignite.IgniteException: GridWorker
> [name=partition-exchanger, igniteInstanceName=EDIFCustomerCC,
> finished=false, heartbeatTs=1604884393601]
> Line 47325:
> [2020-11-09T10:55:18,888][ERROR][sys-stripe-118-#119%EDIFCustomerCC%][G]
> Blocked system-critical thread has been detected. This can lead to
> cluster-wide undefined behaviour [threadName=partition-exchanger,
> blockedFor=60s]
> Line 47329:
> [2020-11-09T10:55:18,889][ERROR][sys-stripe-118-#119%EDIFCustomerCC%][]
> Critical system error detected. Will be handled accordingly to configured
> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
> [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
> [name=partition-exchanger, igniteInstanceName=EDIFCustomerCC,
> finished=false, heartbeatTs=1604899458881]]]
> Line 47330: org.apache.ignite.IgniteException: GridWorker
> [name=partition-exchanger, igniteInstanceName=EDIFCustomerCC,
> finished=false, heartbeatTs=1604899458881]
> #
>
>
> Below is the stack trace for 'exchange-worker-#344' worker thread,
>
> #
> Line 41109: [2020-11-09T04:11:42,068][INFO
> ][exchange-worker-#344%EDIFCustomerCC%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=2791, minorTopVer=0], force=false, evt=NODE_JOINED,
> node=f11cfea2-5ece-4867-b276-e94fa8458f47]
> Line 41116: [2020-11-09T04:11:52,060][INFO
> ][exchange-worker-#344%EDIFCustomerCC%][time] Started exchange init
> [topVer=AffinityTopologyVersion [topVer=2792, minorTopVer=0],
> mvccCrd=MvccCoordinator [nodeId=08260e5f-ae8d-44f2-b10a-dc3490133ee8,
> crdVer=1602946449957, topVer=AffinityTopologyVersion [topVer=6,
> minorTopVer=0]], mvccCrdChange=false, crd=false, evt=NODE_JOINED,
> evtNode=5c548b34-defa-4fcd-9bc7-364a4fbec8da, customEvt=null,
> allowMerge=true]
> Line 41117: [2020-11-09T04:11:52,062][INFO
> ][exchange-worker-#344%EDIFCustomerCC%][GridDhtPartitionsExchangeFuture]
> Finish exchange future [startVer=AffinityTopologyVersion [topVer=2792,
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=2792,
> minorTopVer=0],
> err=null]
> Line 41118: [2020-11-09T04:11:52,066][INFO
> ][exchange-worker-#344%EDIFCustomerCC%][GridDhtPartitionsExchangeFuture]
> Completed partition exchange
> [localNode=30b55ea5-18a4-4c15-b45f-7fe420ac00bd,
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
> [topVer=2792, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode
> [id=5c548b34-defa-4fcd-9bc7-364a4fbec8da, add

Re: Too long JVM pause out of nowhere leading into shutdowns of ignite-servers

2020-11-13 Thread Ilya Kasnacheev
Hello!

I'm afraid you're mostly on your own when it comes to ZooKeeper discovery.
The recommendations usually apply to TCP/IP Discovery.

For 3), I think it is correct to assume that the ZooKeeper timeout (probably
configurable separately) is the culprit here, not the failure detection
timeout.

Regards,
-- 
Ilya Kasnacheev


пн, 9 нояб. 2020 г. в 17:02, VincentCE :

> Hi aealexsandrov respectively igniters,
>
> I would really appreciate to get some answers to my follow-up questions in
> particular to 4).
>
> Thanks a lot!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Cluster Issue on 2.7.6

2020-11-13 Thread Ilya Kasnacheev
Hello!

Yes, you absolutely do need to add all nodes to BLT.
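
For example, after all intended server nodes have joined (a sketch, assuming
persistence is enabled):

ignite.cluster().active(true);
ignite.cluster().setBaselineTopology(ignite.cluster().forServers().nodes()); // pin the current servers as the BLT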

Do you have steps to reproduce that behavior? The exception suggests that
you have started a node, activated it, then shut it down, started other two
nodes and activated them separately. Then tried to start all 3 nodes as a
single cluster.

Regards,
-- 
Ilya Kasnacheev


ср, 4 нояб. 2020 г. в 11:23, Gurmehar Kalra :

> Hi,
>
>
>
> Below are the logs
> BaselineTopology of joining node (e6d542e7-cd73-4e57-90c1-b28da508c2c6) is
> not compatible with BaselineTopology in the cluster. Branching history of
> cluster BlT ([1016056908]) doesn't contain branching point hash of joining
> node BlT (510622971). Consider cleaning persistent storage of the node and
> adding it to the cluster again.
>
>at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(
> *TcpDiscoverySpi.java:1946*) ~[ignite-core-2.7.6.jar:2.7.6]
>
>at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(
> *ServerImpl.java:969*) ~[ignite-core-2.7.6.jar:2.7.6]
>
>at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(
> *ServerImpl.java:391*) ~[ignite-core-2.7.6.jar:2.7.6]
>
>at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(
> *TcpDiscoverySpi.java:2020*) ~[ignite-core-2.7.6.jar:2.7.6]
>
>at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(
> *GridManagerAdapter.java:297*) ~[ignite-core-2.7.6.jar:2.7.6]
>
>... 52 common frames omitted
>
>
>
> Regards,
>
> Gurmehar Singh
>
>
>
> *From:* Gurmehar Kalra 
> *Sent:* 04 November 2020 13:34
> *To:* user@ignite.apache.org
> *Cc:* Andrei Aleksandrov 
> *Subject:* RE: Ignite Cluster Issue on 2.7.6
>
>
>
> Hi,
>
>  I did not remove the code ignite.cluster().active(true); however, I added
> a condition in one application to check whether the cluster is active or
> not, and the other application activates the cluster.
>
>
>
> Regards,
>
> Gurmehar Singh
>
>
>
> *From:* Andrei Aleksandrov 
> *Sent:* 30 October 2020 20:01
> *To:* user@ignite.apache.org
> *Subject:* Re: Ignite Cluster Issue on 2.7.6
>
>
>
> Hi,
>
> Did you remove the code with ignite.cluster().active(true); ?
>
> However, yes, all of your data nodes should be in baseline topology. Could
> you collect logs from your servers?
>
> BR,
> Andrei
>
> 10/30/2020 2:28 PM, Gurmehar Kalra пишет:
>
> Hi,
>
>
>
> I tried changes suggested by you , waited for nodes  and then tried to
> start cluster , but only 1 node is  joins cluster other node  does not
> participates in cluster.
>
> Do I have to add all nodes into BLT ?
>
> Regards,
>
> Gurmehar Singh
>
>
>
> *From:* Andrei Aleksandrov 
> 
> *Sent:* 29 October 2020 20:11
> *To:* user@ignite.apache.org
> *Subject:* Re: Ignite Cluster Issue on 2.7.6
>
>
>
> Hi,
>
> Do you use cluster with persistence? After first actication all your data
> will be located on the first activated node.
>
> In this case, you also should track your baseline.
>
> https://www.gridgain.com/docs/latest/developers-guide/baseline-topology
>
> Baseline topology is a subset of nodes where you cache data located.
>
> The recommendations are the following:
>
> 1) You should activate the cluster only when all server nodes have started.
> 2) If the topology changes, you must either restore the failed nodes or
> reset the baseline topology to trigger partition reassignment and
> rebalancing.
> 3) If some new node should contain the cache data, then you should add this
> node to the baseline topology.

Re: [2.9.0]Entryprocessor cannot be hot deployed properly via UriDeploymentSpi

2020-11-12 Thread Ilya Kasnacheev
Hello!

I suggest filing a feature request ticket against the Apache Ignite JIRA. It's
best if you provide a reproducer project.

https://issues.apache.org/jira/browse/IGNITE

You can also try a hybrid approach, such as firing a compute task from
the entry processor; the task would be hot-redeployed properly.
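
A rough sketch of that hybrid (assumes peer class loading is enabled; the
cache name, key and logic are made up):

ignite.compute().affinityRun("myCache", key, () -> {
    IgniteCache<Integer, String> c = Ignition.localIgnite().cache("myCache");
    c.put(key, "Hi " + c.get(key)); // the logic that used to live in the EntryProcessor
});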

Regards,
-- 
Ilya Kasnacheev


чт, 12 нояб. 2020 г. в 15:46, 18624049226 <18624049...@163.com>:

> Hi Ilya,
>
> Updating the user version does not affect this issue.
>
> Adjusting the deploymentMode parameter also has no effect on this issue.
> 在 2020/11/12 下午7:39, Ilya Kasnacheev 写道:
>
> Hello!
>
> Did you try changing user version between deployments?
>
>
> https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading#un-deployment-and-user-versions
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> чт, 12 нояб. 2020 г. в 12:07, 18624049226 <18624049...@163.com>:
>
>> Hi Ilya,
>>
>> This issue exists in both versions 2.8 and 2.8.1.
>> 在 2020/11/11 下午10:05, Ilya Kasnacheev 写道:
>>
>> Hello!
>>
>> Did that work under 2.8? Can you check
>>
>> If it wasn't, then maybe it is not implemented in the first place. If it
>> is a regression, we could try to address that.
>>
>> Regards.
>> --
>> Ilya Kasnacheev
>>
>>
>> ср, 11 нояб. 2020 г. в 05:55, 18624049226 <18624049...@163.com>:
>>
>>> Any further conclusions?
>>>
>>> 在 2020/11/6 上午11:00, 18624049226 写道:
>>> > Hi community,
>>> >
>>> > Entryprocessor cannot be hot deployed properly via
>>> > UriDeploymentSpi,the operation steps are as follows:
>>> >
>>> > 1.put jar in the specified folder of uriList;
>>> >
>>> > 2.Use example-deploy.xml,start two ignite nodes;
>>> >
>>> > 3.Use the DeployClient to deploy the service named "deployService";
>>> >
>>> > 4.Execute the test through ThickClientTest, and the result is correct;
>>> >
>>> > 5.Modify the code of DeployServiceImpl and DeployEntryProcessor, for
>>> > example, change "Hello" to "Hi", then repackage it and put it into the
>>> > specified folder of uriList;
>>> >
>>> > 6.Redeploy services by RedeployClient;
>>> >
>>> > 7.Execute the test again through ThickClientTest, and the result is
>>> > incorrect,we will find that if the Entryprocessor accessed by the
>>> > service is on another node, the Entryprocessor uses the old version of
>>> > the class definition.
>>> >
>>> >
>>>
>>>


Re: Ignite 2.9 one way client to server communication

2020-11-12 Thread Ilya Kasnacheev
Hello!

This is correct: as long as you do not start any new caches, adding a thick
client should be PME-less.

Do you have logs from joining client and server node (coordinator, crd=true
if possible)?

Regards,
-- 
Ilya Kasnacheev


пн, 2 нояб. 2020 г. в 20:25, Hemambara :

> Thank you for the response. If I understand correctly, thick client
> connectivity time does not depend on the # of server nodes; it all depends on
> the PME length? Can you please elaborate or point me to resources where I
> can understand PME length? I got a few links on how PME works, but sorry, I
> did not understand what PME length is. Also, as per the below reference, an
> ignite thick client 2.8.0 does not trigger any PME. Is this right?
>
>
> https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: [2.9.0]Entryprocessor cannot be hot deployed properly via UriDeploymentSpi

2020-11-12 Thread Ilya Kasnacheev
Hello!

Did you try changing user version between deployments?

https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading#un-deployment-and-user-versions

Regards,
-- 
Ilya Kasnacheev


чт, 12 нояб. 2020 г. в 12:07, 18624049226 <18624049...@163.com>:

> Hi Ilya,
>
> This issue exists in both versions 2.8 and 2.8.1.
> 在 2020/11/11 下午10:05, Ilya Kasnacheev 写道:
>
> Hello!
>
> Did that work under 2.8? Can you check
>
> If it wasn't, then maybe it is not implemented in the first place. If it
> is a regression, we could try to address that.
>
> Regards.
> --
> Ilya Kasnacheev
>
>
> ср, 11 нояб. 2020 г. в 05:55, 18624049226 <18624049...@163.com>:
>
>> Any further conclusions?
>>
>> 在 2020/11/6 上午11:00, 18624049226 写道:
>> > Hi community,
>> >
>> > Entryprocessor cannot be hot deployed properly via
>> > UriDeploymentSpi,the operation steps are as follows:
>> >
>> > 1.put jar in the specified folder of uriList;
>> >
>> > 2.Use example-deploy.xml,start two ignite nodes;
>> >
>> > 3.Use the DeployClient to deploy the service named "deployService";
>> >
>> > 4.Execute the test through ThickClientTest, and the result is correct;
>> >
>> > 5.Modify the code of DeployServiceImpl and DeployEntryProcessor, for
>> > example, change "Hello" to "Hi", then repackage it and put it into the
>> > specified folder of uriList;
>> >
>> > 6.Redeploy services by RedeployClient;
>> >
>> > 7.Execute the test again through ThickClientTest, and the result is
>> > incorrect,we will find that if the Entryprocessor accessed by the
>> > service is on another node, the Entryprocessor uses the old version of
>> > the class definition.
>> >
>> >
>>
>>


Re: L2-cache slow/not working as intended

2020-11-12 Thread Ilya Kasnacheev
Hello!

Then it should survive a restart while keeping the cache content. I'm not an
expert in Hibernate caching, but that is what I would expect.

Regards,
-- 
Ilya Kasnacheev


чт, 12 нояб. 2020 г. в 12:58, Bastien Durel :

> Le mardi 10 novembre 2020 à 17:39 +0300, Ilya Kasnacheev a écrit :
> > Hello!
> >
> > You can make it semi-persistent by changing the internal Ignite node
> > type inside Hibernate to client (property clientMode=true) and
> > starting a few stand-alone nodes (one per each VM?)
> >
> > This way, its client will just connect to the existing cluster
> > with data already there.
> >
> > You can also enable Ignite persistence, but I assume that's not what
> > you want.
>
> Hello.
>
> Ignite is already started in client mode before initializing hibernate,
> and connected to a few stand-alone servers.
>
> Regards,
>
> --
> Bastien Durel
> DATA
> Intégration des données de l'entreprise,
> Systèmes d'information décisionnels.
>
> bastien.du...@data.fr
> tel : +33 (0) 1 57 19 59 28
> fax : +33 (0) 1 57 19 59 73
> 45 avenue Carnot, 94230 CACHAN France
> www.data.fr
>
>
>


Re: Query on IgniteApplication running on java11

2020-11-12 Thread Ilya Kasnacheev
Hello!

There may still be issues, such as showing -100% CPU load.

It's better to have all the required JVM options.

Regards,
-- 
Ilya Kasnacheev


чт, 12 нояб. 2020 г. в 05:42, vbm :

> Hi,
>
> In my machine jdk 11 is installed and I am trying to write an Ignite
> application to run on this environment.
>
> In my pom.xml, I have added below maven-compiler-plugin configuration and
> compiled the code.
>
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-compiler-plugin</artifactId>
>   <version>3.8.0</version>
>   <configuration>
>     <compilerArgs>
>       <arg>--add-exports</arg>
>       <arg>java.base/jdk.internal.misc=ALL-UNNAMED</arg>
>       <arg>--add-exports</arg>
>       <arg>java.base/sun.nio.ch=ALL-UNNAMED</arg>
>       <arg>--add-exports</arg>
>       <arg>java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED</arg>
>       <arg>--add-exports</arg>
>       <arg>jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED</arg>
>     </compilerArgs>
>   </configuration>
> </plugin>
>
>
> Now my question is: do I need to use the same JVM_OPTS when I start the
> client application, like below:
> java $JVM_OPTS -cp  
>
> Currently I am not using the JVM_OPTS and the application is running fine,
> but in the below link it is mentioned that they need to be set.
>
> https://ignite.apache.org/docs/latest/quick-start/java#running-ignite-with-java-11-or-later
>
>
> May I know, will there be any issue if I do not use the JVM_OPTS ?
>
>
> Regards,
> Vishwas
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance of indexing on varchar columns without specific length

2020-11-11 Thread Ilya Kasnacheev
Hello!

The actual VARCHAR length means less than you think. Instead, you can
supply a correct INLINE_SIZE when creating this index:
https://ignite.apache.org/docs/latest/SQL/sql-tuning#increasing-index-inline-size
https://ignite.apache.org/docs/latest/SQL/indexes#configuring-index-inline-size

By default it's 10, and some of those bytes are used for length, etc.
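
For example (a sketch; the index name and inline size are arbitrary):

cache.query(new SqlFieldsQuery(
    "CREATE INDEX idx_person_last_name ON person (last_name) INLINE_SIZE 40")).getAll();
// 40 bytes leaves room for the length prefix plus most last_name values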

Regards,
-- 
Ilya Kasnacheev


пт, 6 нояб. 2020 г. в 21:04, Shravya Nethula <
shravya.neth...@aline-consulting.com>:

> Hi,
>
> A table has two varchar columns, one column created with specific column
> length and other created without any specific length as shown below:
> CREATE TABLE person (id LONG PRIMARY KEY, name VARCHAR(64), last_name
> VARCHAR)
>
> Can we create index on varchar columns without any specific length? In the
> above scenario, can we create index on last_name column?
> And while creating index on those columns, will there be any performance
> difference on these columns?
>
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
>


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-11 Thread Ilya Kasnacheev
Hello!

I think it's OK to try.

Regards,
-- 
Ilya Kasnacheev


пн, 9 нояб. 2020 г. в 19:56, ssansoy :

> interesting! might just work. We will try it out.
> E.g. A chance of 500 V's. V has fields a, b, c, (b=foo on all records) and
> some client app wants to run a continuous query on all V where b=foo, or
> was
> =foo but now is not following the update.
>
> The writer updates 100 V's, by setting b=bar on all records, and some
> incrementing version int N
> The datastreamer transformer mutates V by adding a new field called
> "changes" which contains b=foo to denote that only the field b was changed,
> and it's old value was foo. (e.g. a set of {fieldname, oldvalue}, { )
> The writer updates the V_signal cache to denote a change was made, with
> version N.
>
> The client continuous query listens to the V_signal cache. When it receives
> an update (denoting V updates have occurred), it does a scanquery on V in
> the transformer, (scan query filters the records that were updated as part
> of version N, and either the fields we care about match our predicate, or
> the "changes" field are one of the ones we are interested in and match the
> predicate).
>
> These are batched up as a collection and returned to the client. Does this
> seem like a reasonable approach?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to get column names for a query in Ignite thin client mode

2020-11-11 Thread Ilya Kasnacheev
Hello!

You can find out the data types when using the JDBC API, by using the
ResultSetMetaData and DatabaseMetaData interfaces.
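
For example (a sketch against the thin JDBC driver; the address is a
placeholder):

try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
     ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM person")) {
    ResultSetMetaData md = rs.getMetaData();
    for (int i = 1; i <= md.getColumnCount(); i++)
        System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
}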

Regards,
-- 
Ilya Kasnacheev


пн, 9 нояб. 2020 г. в 10:42, Shravya Nethula <
shravya.neth...@aline-consulting.com>:

> Hi,
>
> Any update on this?
> Any help is much apppreciated!
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
> --
> *From:* Shravya Nethula 
> *Sent:* Thursday, November 5, 2020 4:54 PM
> *To:* user@ignite.apache.org 
> *Cc:* Bhargav Kunamneni 
> *Subject:* Re: How to get column names for a query in Ignite thin client
> mode
>
> Hi Alex,
>
> Thank you for the information.
> Is there a possibility of getting the datatypes in thick client mode?
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
> --
> *From:* Alex Plehanov 
> *Sent:* Thursday, November 5, 2020 12:03 PM
> *To:* user@ignite.apache.org 
> *Cc:* Bhargav Kunamneni 
> *Subject:* Re: How to get column names for a query in Ignite thin client
> mode
>
> Currently, only field names can be obtained, there is no information about
> field data types in thin client protocol.
>
> ср, 4 нояб. 2020 г. в 13:58, Shravya Nethula <
> shravya.neth...@aline-consulting.com>:
>
> Ilya and Alex,
>
> Thank you for information.
> Can you please also suggest how to get the datatypes of those columns
> obtained from the query?
>
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
>
> --
> *From:* Alex Plehanov 
> *Sent:* Tuesday, November 3, 2020 12:13 PM
> *To:* user@ignite.apache.org 
> *Subject:* Re: How to get column names for a query in Ignite thin client
> mode
>
> Columns information is read by thin-client only after the first data
> request, so you need to read at least one row to get columns.
>
> вт, 3 нояб. 2020 г. в 09:31, Ilya Kazakov :
>
> Hello, Shravya! It is very interesting! I am trying to reproduce your
> case, and what I see. I can see column names in the thin client only after
> query execution.
>
> For example:
>
> ClientConfiguration clientConfig = new 
> ClientConfiguration().setAddresses("127.0.0.1");
> try(IgniteClient thinClient = Ignition.startClient(clientConfig)){
> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM T1");
> FieldsQueryCursor cursor = thinClient.query(sql);
> cursor.getAll();
> int count = cursor.getColumnsCount();
> System.out.println(count);
> List columnNames = new ArrayList<>();
> for (int i = 0; i < count; i++) {
> String columnName = cursor.getFieldName(i);
> columnNames.add(columnName);
> }
> System.out.println("columnNames:::"+columnNames);
> }
>
>
> But if this is the correct behavior I do not know yet, I will try to find
> out.
>
> 
> Ilya Kazakov
>
> вт, 3 нояб. 2020 г. в 12:51, Shravya Nethula <
> shravya.neth...@aline-consulting.com>:
>
> Hi,
>
> *For Ignite thick client, the column names for a given sql query are
> coming up as expected with the following code:*
> public class ClientNode {
>
> public static void main(String[] args) {
> IgniteConfiguration igniteCfg = new IgniteConfiguration();
> igniteCfg.setClientMode(true);
>
> Ignite ignite = Ignition.start(igniteCfg);
> IgniteCache foo = ignite.getOrCreateCache("foo");
>
> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM person");
> FieldsQueryCursor cursor = foo.query(sql);
> int count = cursor.getColumnsCount();
> List columnNames = new ArrayList<>();
>
> for (int i = 0; i < count; i++) {
>   String columnName = cursor.getFieldName(i);
>   columnNames.add(columnName);
> }
> System.out.println("columnNames:::"+columnNames);
>
>  } }
>  Output:
>  columnNames:::[ID, NAME, LAST_NAME, AGE, CITY_ID, EMAIL_ID]
> On the other hand, for the thin client, the column names are coming up as an
> empty list.
> The following is the code:
> public class ClientNode {
>
> public static void main(String[] args) {
> ClientConfiguration clientConfig = new ClientConfiguration();
> clientConfig.setUserName("username");
> clientConfig.setUserPassword("password");
>
> IgniteClient thinClient = Ignition.startClient(clientConfig);
>
> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * 

Re: partition-exchanger system-critical thread blocked

2020-11-11 Thread Ilya Kasnacheev
Hello!

Can you please find the actual stack trace from the partition-exchanger thread
in that log?

One that starts with Thread [name="partition-exchanger" ?

Regards,
-- 
Ilya Kasnacheev


вт, 10 нояб. 2020 г. в 15:54, Kamlesh Joshi :

> Hi Igniters,
>
>
>
> Sometimes 'Blocked system-critical thread' errors are getting printed in
> prod cluster logs. As per the below logs, it's saying the exchange-worker was
> blocked for 60s. There is no node join/leave, cluster activation or cache
> creation, so why is partition exchange triggered? And even if it is
> triggered, why is it blocked for 60s, which is a huge time from a prod
> perspective?
>
>
>
> Below are error details,
>
> [2020-11-09T10:55:18,888][ERROR][sys-stripe-118-#119%EDIFCustomerCC%][G]
> Blocked system-critical thread has been detected. This can lead to
> cluster-wide undefined behaviour [threadName=partition-exchanger,
> blockedFor=60s]
>
> [2020-11-09T10:55:18,888][WARN ][sys-stripe-118-#119%EDIFCustomerCC%][G]
> Thread [name="exchange-worker-#344%EDIFCustomerCC%", id=391,
> state=TIMED_WAITING, blockCnt=16, waitCnt=2469961]
>
>
>
> The cluster is responding but these errors keep printing; we do not
> understand what the cause is. Could you please help us?
>
>
>
> Below is log snippet,
>
>
>
> [2020-11-09T10:51:21,458][INFO
> ][db-checkpoint-thread-#384%EDIFCustomerCC%][GridCacheDatabaseSharedManager]
> Checkpoint started [checkpointId=3bd28c8f-d9c9-4110-ab8c-24cf3a5c44a3,
> startPtr=FileWALPointer [idx=1279664, fileOff=28148165, len=49557],
> checkpointLockWait=0ms, checkpointLockHoldTime=34ms,
> walCpRecordFsyncDuration=7ms, pages=84152, reason='timeout']
>
> [2020-11-09T10:51:23,499][INFO
> ][db-checkpoint-thread-#384%EDIFCustomerCC%][GridCacheDatabaseSharedManager]
> Checkpoint finished [cpId=3bd28c8f-d9c9-4110-ab8c-24cf3a5c44a3,
> pages=84152, markPos=FileWALPointer [idx=1279664, fileOff=28148165,
> len=49557], walSegmentsCleared=5, walSegmentsCovered=[1279658 - 1279663],
> markDuration=79ms, pagesWrite=1195ms, fsync=845ms, total=2119ms]
>
> [2020-11-09T10:51:36,788][INFO
> ][wal-file-archiver%EDIFCustomerCC-#345%EDIFCustomerCC%][FileWriteAheadLogManager]
> Starting to copy WAL segment [absIdx=1279664, segIdx=4,
> origFile=/datastore1/wal/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/0004.wal,
> dstFile=/datastore1/archive/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/01279664.wal]
>
> [2020-11-09T10:51:36,954][INFO
> ][wal-file-archiver%EDIFCustomerCC-#345%EDIFCustomerCC%][FileWriteAheadLogManager]
> Copied file
> [src=/datastore1/wal/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/0004.wal,
> dst=/datastore1/archive/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/01279664.wal]
>
> [2020-11-09T10:52:02,018][INFO
> ][wal-file-archiver%EDIFCustomerCC-#345%EDIFCustomerCC%][FileWriteAheadLogManager]
> Starting to copy WAL segment [absIdx=1279665, segIdx=5,
> origFile=/datastore1/wal/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/0005.wal,
> dstFile=/datastore1/archive/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/01279665.wal]
>
> [2020-11-09T10:52:02,200][INFO
> ][wal-file-archiver%EDIFCustomerCC-#345%EDIFCustomerCC%][FileWriteAheadLogManager]
> Copied file
> [src=/datastore1/wal/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/0005.wal,
> dst=/datastore1/archive/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/01279665.wal]
>
> [2020-11-09T10:52:36,541][INFO
> ][wal-file-archiver%EDIFCustomerCC-#345%EDIFCustomerCC%][FileWriteAheadLogManager]
> Starting to copy WAL segment [absIdx=1279666, segIdx=6,
> origFile=/datastore1/wal/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/0006.wal,
> dstFile=/datastore1/archive/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/01279666.wal]
>
> [2020-11-09T10:52:36,703][INFO
> ][wal-file-archiver%EDIFCustomerCC-#345%EDIFCustomerCC%][FileWriteAheadLogManager]
> Copied file
> [src=/datastore1/wal/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/0006.wal,
> dst=/datastore1/archive/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/01279666.wal]
>
> [2020-11-09T10:53:11,068][INFO
> ][wal-file-archiver%EDIFCustomerCC-#345%EDIFCustomerCC%][FileWriteAheadLogManager]
> Starting to copy WAL segment [absIdx=1279667, segIdx=7,
> origFile=/datastore1/wal/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/0007.wal,
> dstFile=/datastore1/archive/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/01279667.wal]
>
> [2020-11-09T10:53:11,239][INFO
> ][wal-file-archiver%EDIFCustomerCC-#345%EDIFCustomerCC%][FileWriteAheadLogManager]
> Copied file
> [src=/datastore1/wal/node00-651e4920-2fb4-4dd5-9258-52a9f359ac35/0007.wal,
> dst=/datastore1/archive/node00-65

Re: Limit ignite-rest-http threads

2020-11-11 Thread Ilya Kasnacheev
Hello!

I guess you can supply your own Jetty XML configuration (via
ConnectorConfiguration.setJettyPath()) and there you can limit the number of
its threads.

Please refer to Jetty's docs.
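
A sketch of wiring it up (the path is a placeholder; the thread pool limits
themselves go into the Jetty XML):

ConnectorConfiguration connCfg = new ConnectorConfiguration();
connCfg.setJettyPath("config/rest-jetty.xml");
igniteCfg.setConnectorConfiguration(connCfg);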

Regards,
-- 
Ilya Kasnacheev


пн, 9 нояб. 2020 г. в 14:37, ashishb888 :

> Hi Vladimir,
>
> I want to limit those threads. I want to control the thread pool size for
> ignite-rest-http.
>
> BR,
> Ashish
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there any chance to integrate ignite into Android environment?

2020-11-11 Thread Ilya Kasnacheev
Hello!

Did you actually try to do that? I recommend starting with some flavor of
thin client since it is much simpler.

Regards,
-- 
Ilya Kasnacheev


вт, 10 нояб. 2020 г. в 10:37, xingjl6280 :

> Hi Denis,
>
> It's an Android Pad, with an 8-core CPU and 6GB RAM.
>
> We use the Pad to connect to servers and some devices running Ubuntu. It's
> only working in a stable LAN environment.
>
> Is it possible?
>
> thank you
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: [2.9.0]Entryprocessor cannot be hot deployed properly via UriDeploymentSpi

2020-11-11 Thread Ilya Kasnacheev
Hello!

Did that work under 2.8? Can you check?

If it didn't, then maybe it is not implemented in the first place. If it is
a regression, we could try to address that.

Regards.
-- 
Ilya Kasnacheev


ср, 11 нояб. 2020 г. в 05:55, 18624049226 <18624049...@163.com>:

> Any further conclusions?
>
> 在 2020/11/6 上午11:00, 18624049226 写道:
> > Hi community,
> >
> > Entryprocessor cannot be hot deployed properly via
> > UriDeploymentSpi,the operation steps are as follows:
> >
> > 1.put jar in the specified folder of uriList;
> >
> > 2.Use example-deploy.xml,start two ignite nodes;
> >
> > 3.Use the DeployClient to deploy the service named "deployService";
> >
> > 4.Execute the test through ThickClientTest, and the result is correct;
> >
> > 5.Modify the code of DeployServiceImpl and DeployEntryProcessor, for
> > example, change "Hello" to "Hi", then repackage it and put it into the
> > specified folder of uriList;
> >
> > 6.Redeploy services by RedeployClient;
> >
> > 7.Execute the test again through ThickClientTest, and the result is
> > incorrect,we will find that if the Entryprocessor accessed by the
> > service is on another node, the Entryprocessor uses the old version of
> > the class definition.
> >
> >
>
>


Re: Client App Object Allocation Rate

2020-11-10 Thread Ilya Kasnacheev
Hello!

Good question. Did anything at all change after you set it? I'm not sure
why the message is so large in your case; it's tens of KB.

Regards,
-- 
Ilya Kasnacheev


вт, 10 нояб. 2020 г. в 20:27, ssansoy :

> Yep, they're the ones we'd like to turn off... is that possible with
> IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE=true? it doesn't seem to have
> an effect
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Client App Object Allocation Rate

2020-11-10 Thread Ilya Kasnacheev
Hello!

Yes, it's not cache statistics but node statistics
(org.apache.ignite.cluster.ClusterMetrics)

Regards,
-- 
Ilya Kasnacheev


пн, 9 нояб. 2020 г. в 21:09, ssansoy :

> Also according to the heap dump they aren't cache statistic messages, but
> rather, TcpDiscoveryClientMetricsUpdateMessage
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: L2-cache slow/not working as intended

2020-11-10 Thread Ilya Kasnacheev
Hello!

You can make it semi-persistent by changing the internal Ignite node type
inside Hibernate to client (property clientMode=true) and starting a few
stand-alone nodes (one per each VM?)

This way, its client will just connect to the existing cluster with data
already there.

You can also enable Ignite persistence, but I assume that's not what you
want.

Regards,
-- 
Ilya Kasnacheev


пн, 9 нояб. 2020 г. в 20:05, Bastien Durel :

> Le lundi 09 novembre 2020 à 19:11 +0300, Ilya Kasnacheev a écrit :
> > Hello!
> >
> > Why Hibernate won't use it for reads of that user, I don't know, it's
> > outside of scope of Ignite.
> >
> > Putting 1,000,000 records in 5 minutes sounds reasonable, especially
> > since L2 population is optimized for latency, not throughput (as
> > opposed to e.g. CacheLoader).
>
> Hello,
>
> I'm OK if the L2C make 5 minutes to load (as I said, there will
> probably never be such a query in the real application), the real
> problem here is that this cache does not persist between Sessions, and
> therefore is recreated each time.
>
> It may be a configuration problem, but reading [1], I cannot find why
>
> Regards,
>
> [1]
> https://ignite.apache.org/docs/latest/extensions-and-integrations/hibernate-l2-cache
>
> --
> Bastien Durel
> DATA
> Intégration des données de l'entreprise,
> Systèmes d'information décisionnels.
>
> bastien.du...@data.fr
> tel : +33 (0) 1 57 19 59 28
> fax : +33 (0) 1 57 19 59 73
> 45 avenue Carnot, 94230 CACHAN France
> www.data.fr
>
>
>


Re: L2-cache slow/not working as intended

2020-11-09 Thread Ilya Kasnacheev
Hello!

Why Hibernate won't use it for reads of that user, I don't know, it's
outside of scope of Ignite.

Putting 1,000,000 records in 5 minutes sounds reasonable, especially since
L2 population is optimized for latency, not throughput (as opposed to e.g.
CacheLoader).

Regards,
-- 
Ilya Kasnacheev


пн, 9 нояб. 2020 г. в 14:30, Bastien Durel :

> Le lundi 09 novembre 2020 à 14:09 +0300, Ilya Kasnacheev a écrit :
> > Hello!
> > Putting 1 million entries of a single query in L2 cache does not
> > sound like a reasonable use of L2 cache.
>
> Hello.
>
> No one will probably read the whole Event database at once with the
> product, but it was a read-speed test, so we needed some big chunk data
> to see problems ... You can see on the other post I made the L2C does
> not even caches the only one User I have in my test db.
>
> Regards,
>
> --
> Bastien Durel
> DATA
> Intégration des données de l'entreprise,
> Systèmes d'information décisionnels.
>
> bastien.du...@data.fr
> tel : +33 (0) 1 57 19 59 28
> fax : +33 (0) 1 57 19 59 73
> 45 avenue Carnot, 94230 CACHAN France
> www.data.fr
>
>
>


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-09 Thread Ilya Kasnacheev
Hello!

You can have a transformer on your data streamer to do something with the old
value (e.g. keep some of its fields in the new value V).
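
A rough sketch with StreamTransformer (the cache name and merge logic are made
up; note that allowOverwrite must be true for receivers to fire):

try (IgniteDataStreamer<Integer, BinaryObject> stmr = ignite.dataStreamer("V")) {
    stmr.allowOverwrite(true);
    stmr.receiver(StreamTransformer.from((entry, args) -> {
        BinaryObject oldVal = entry.getValue();      // may be null on first insert
        BinaryObject newVal = (BinaryObject)args[0]; // the streamed value
        // ...merge the fields you want to keep from oldVal into newVal here...
        entry.setValue(newVal);
        return null;
    }));
}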

Regards,
-- 
Ilya Kasnacheev


пн, 9 нояб. 2020 г. в 14:52, ssansoy :

> Thanks for this,
>
> We are considering this approach - writing all the entries to some table V, and
> then updating a separate token cache T with a signal, picked up by the
> continuous query, which then filters the underlying V records, transforms
> them and sends them to the client.
>
> However, one problem we ran into is that we lose the "old" values from the
> underlying table V. Normally the continuous query has access to the new
> value and the old value, so the client app can detect which entries no
> longer match the remote filter. With this technique however, the continuous
> query remote transformer has the old and new value of T, but is ultimately
> doing a ScanQuery on V to get all the "current" values.
>
> Do you have any advice on how we can still achieve that?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Client App Object Allocation Rate

2020-11-09 Thread Ilya Kasnacheev
Hello!

I'm actually not convinced that 6k/sec is a lot. Metrics update messages
are passed between nodes to calculate cluster-wide cache metrics.

Have you tried turning them off by setting
IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE=true, in the form of a system
property or env var?

Regards,
-- 
Ilya Kasnacheev

чт, 5 нояб. 2020 г. в 16:03, ssansoy :

> Hi was there any update on this? thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: L2-cache slow/not working as intended

2020-11-09 Thread Ilya Kasnacheev
Hello!

Putting 1 million entries from a single query into the L2 cache does not sound
like a reasonable use of the L2 cache.

Regards,
-- 
Ilya Kasnacheev


чт, 5 нояб. 2020 г. в 12:59, Bastien Durel :

> Hello,
>
> I'm using an ignite cluster to back an hibernate-based application. I
> configured L2-cache as explained in
>
> https://ignite.apache.org/docs/latest/extensions-and-integrations/hibernate-l2-cache
>
> (config below)
>
> I've ran a test reading a 1M-elements cache with a consumer counting
> elements. It's very slow : more than 5 minutes to run.
>
> Session metrics says it was the LC2 puts that takes most time (5
> minutes and 3 seconds of a 5:12" operation)
>
> INFO  [2020-11-05 09:51:15,694]
> org.hibernate.engine.internal.StatisticalLoggingSessionEventListener:
> Session Metrics {
> 33350 nanoseconds spent acquiring 1 JDBC connections;
> 25370 nanoseconds spent releasing 1 JDBC connections;
> 571572 nanoseconds spent preparing 1 JDBC statements;
> 1153110307 nanoseconds spent executing 1 JDBC statements;
> 0 nanoseconds spent executing 0 JDBC batches;
> 303191158712 nanoseconds spent performing 100 L2C puts;
> 23593547 nanoseconds spent performing 1 L2C hits;
> 0 nanoseconds spent performing 0 L2C misses;
> 370656057 nanoseconds spent executing 1 flushes (flushing a total of
> 101 entities and 2 collections);
> 4684 nanoseconds spent executing 1 partial-flushes (flushing a total
> of 0 entities and 0 collections)
> }
>
> It seems long, even for 1M puts, but OK, let's say the L2C is
> initialized now, and it will be better next time? So I ran the query
> again, but it took 5+ minutes again ...
>
> INFO  [2020-11-05 09:58:02,538]
> org.hibernate.engine.internal.StatisticalLoggingSessionEventListener:
> Session Metrics {
> 28982 nanoseconds spent acquiring 1 JDBC connections;
> 25974 nanoseconds spent releasing 1 JDBC connections;
> 52468 nanoseconds spent preparing 1 JDBC statements;
> 1145821128 nanoseconds spent executing 1 JDBC statements;
> 0 nanoseconds spent executing 0 JDBC batches;
> 303763054228 nanoseconds spent performing 100 L2C puts;
> 1096985 nanoseconds spent performing 1 L2C hits;
> 0 nanoseconds spent performing 0 L2C misses;
> 317558122 nanoseconds spent executing 1 flushes (flushing a total of
> 101 entities and 2 collections);
> 5500 nanoseconds spent executing 1 partial-flushes (flushing a total
> of 0 entities and 0 collections)
> }
>
> Why did the L2 cache have to be filled again? Isn't its purpose to
> share data between Sessions?
>
> Actually, disabling it makes the test run in less than 6 seconds.
>
> Why is the L2C working that way?
>
> Regards,
>
>
> **
>
> I'm running 2.9.0 from Debian package
>
> Hibernate properties :
> hibernate.cache.use_second_level_cache: true
> hibernate.generate_statistics: true
> hibernate.cache.region.factory_class:
> org.apache.ignite.cache.hibernate.HibernateRegionFactory
> org.apache.ignite.hibernate.ignite_instance_name: ClusterWA
> org.apache.ignite.hibernate.default_access_type: READ_ONLY
>
> Method code:
> @GET
> @Timed
> @UnitOfWork
> @Path("/events/speed")
> public Response getAllEvents(@Auth AuthenticatedUser auth) {
> AtomicLong id = new AtomicLong();
> StopWatch watch = new StopWatch();
> watch.start();
> evtDao.findAll().forEach(new Consumer<Event>() {
>
> @Override
> public void accept(Event t) {
> long cur = id.incrementAndGet();
> if (cur % 65536 == 0)
> logger.debug("got element#{}",
> cur);
> }
> });
> watch.stop();
> return Response.ok().header("X-Count",
> Long.toString(id.longValue())).entity(new Time(watch)).build();
> }
>
> Event cache config:
>
>
> <bean class="org.apache.ignite.configuration.CacheConfiguration">
>     <property name="name" value="EventCache" />
>     <property name="cacheMode" value="PARTITIONED" />
>     <property name="atomicityMode" value="TRANSACTIONAL" />
>     ...
> </bean>
>

Re: Workaround for getting ContinuousQuery to support transactions

2020-11-09 Thread Ilya Kasnacheev
Hello!

After you flush the data streamer, you may put a token entry into a small
cache to be picked up by the continuous query. When the query handler is
triggered, all the data will already be available from the caches.

The difference with transactional behavior is that transactions promise
(and fail to deliver) "at the same time" guarantee, whilst data streamer
will deliver on "after" guarantee.
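
A sketch of the token pattern (the cache names and the version value are made
up):

try (IgniteDataStreamer<Integer, BinaryObject> stmr = ignite.dataStreamer("V")) {
    batch.forEach(stmr::addData); // batch is a hypothetical Map<Integer, BinaryObject>
    stmr.flush();                 // once this returns, all entries are readable in cache "V"
}
ignite.cache("V_signal").put("token", batchVersion); // the continuous query listens on V_signal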

Regards,
-- 
Ilya Kasnacheev


пт, 6 нояб. 2020 г. в 19:16, ssansoy :

> Yeah, the key thing is to be able to be notified when all records have been
> updated in the cache.
> We've tried using IgniteLock for this too, by the way (e.g. the writer locks,
> writes the records, unlocks).
>
> Then the client app, internally queues all updates as they arrive from the
> continuous query. If it can acquire the lock, then it knows all updates
> have
> arrived (because the writer has finished). However this doesn't work
> either,
> because even though the writer has unlocked, the written records are still
> in transit to the continuous query (e.g. they don't exist in the internal
> client side queue yet)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to start SPI using custom class loader

2020-11-09 Thread Ilya Kasnacheev
Hello!

Do you have a reproducer for this issue for us to try?

Regards,
-- 
Ilya Kasnacheev


сб, 7 нояб. 2020 г. в 21:27, Paolo Di Tommaso :

> Hello,
>
> I'm experiencing a weird error while launching Ignite with a custom
> classloader.
>
> It fails to join the cluster with the error reported below.
>
> It reports that it cannot load the class `DiscoveryDataClusterState` using
> the classloader `sun.misc.Launcher$AppClassLoader@27716f4`.
>
> I think this is the problem because it should use the classloader that
> I've specified via the IgniteConfiguration.setClassLoader method, however,
> it seems that it still tries to use the default one.
>
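> For reference, a minimal sketch of this kind of setup (the JAR path and the
> loader parent here are illustrative):
>
> URLClassLoader customLdr = new URLClassLoader(
>         new URL[] { new URL("file:/path/to/app.jar") },
>         Thread.currentThread().getContextClassLoader());
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setClassLoader(customLdr); // expected to be used for unmarshalling
>
> Ignite ignite = Ignition.start(cfg);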
>
> Caused by: org.apache.ignite.IgniteCheckedException: Failed to start SPI:
> TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000,
> marsh=JdkMarshaller 
> [clsFilter=org.apache.ignite.internal.IgniteKernal$5@5c059a68],
> reconCnt=10, reconDelay=2000, maxAckTimeout=60, forceSrvMode=false,
> clientReconnectDisabled=false]
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:892)
> at
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1669)
> ... 57 common frames omitted
> Caused by: org.apache.ignite.spi.IgniteSpiException: Error on
> unmarshalling discovery data from node
> 0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.1.129:47501: Failed to find class
> with given class loader for unmarshalling (make sure same versions of all
> classes are available on all nodes or enable peer-class-loading)
> [clsLdr=sun.misc.Launcher$AppClassLoader@27716f4,
> cls=org.apache.ignite.internal.processors.cluster.DiscoveryDataClusterState];
> node is not allowed to join
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1856)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:932)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:364)
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1930)
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> ... 59 common frames omitted
>
>
> Any clue what's wrong?
>
>
> Paolo
>
>
>


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-06 Thread Ilya Kasnacheev
Hello!

It may handle your use case (doing something when all records are in cache).
But it will not fix the tool that you're using for it (continuous query
with expectation of batched handling).

Regards,
-- 
Ilya Kasnacheev


пт, 6 нояб. 2020 г. в 16:41, ssansoy :

> Ah ok so this wouldn't help solve our problem?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-06 Thread Ilya Kasnacheev
Hello!

No, it does not mean anything about the continuous query listener.

But it means that once a data streamer is flushed, all data is available in
caches.

Regards,
-- 
Ilya Kasnacheev


вт, 3 нояб. 2020 г. в 16:28, ssansoy :

> Thanks,
> How is this different to multiple puts inside a transaction?
>
> By using the data streamer to write the records, does that mean the
> continuous query will receive all 10,000 records in one go in the local
> listen?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to Resolve NodeTopology - ignite 2.8.1

2020-11-03 Thread Ilya Kasnacheev
Hello!

This is such a small value for volatile topologies such as yours. I
recommend switching to 2.9.0 and then changing this number to 200.
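
It can be passed as a JVM option (-DIGNITE_EXCHANGE_HISTORY_SIZE=200) or set
programmatically before the node starts; a minimal sketch, with the config
path being illustrative:

import org.apache.ignite.IgniteSystemProperties;
import org.apache.ignite.Ignition;

public class StartNode {
    public static void main(String[] args) {
        // Must be set on every node before Ignition.start() is called;
        // equivalent to -DIGNITE_EXCHANGE_HISTORY_SIZE=200 on the JVM command line.
        System.setProperty(IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE, "200");

        Ignition.start("config/ignite-config.xml");
    }
}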

Regards,
-- 
Ilya Kasnacheev


вт, 3 нояб. 2020 г. в 21:40, Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com>:

> Yes. I have IGNITE_EXCHANGE_HISTORY_SIZE set to 10.
>
> *should I set IGNITE_EXCHANGE_HISTORY_SIZE = 0 ???*
>
> *will file a bug shortly*
>
>
>
> ------
> *From:* Ilya Kasnacheev 
> *Sent:* Tuesday, November 3, 2020 8:57 PM
> *To:* user@ignite.apache.org 
> *Subject:* Re: Failed to Resolve NodeTopology - ignite 2.8.1
>
> Hello!
>
> You seemed to have a very old transaction which tried to access a topology
> version which was not in the history. Do you have
> IGNITE_EXCHANGE_HISTORY_SIZE set?
>
> I think this is a bug that it causes node failure. I would expect that
> transaction is killed, that's all. Can you please file a ticket about this
> issue against Apache Ignite JIRA?
>
> https://issues.apache.org/jira/projects/IGNITE
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вт, 3 нояб. 2020 г. в 05:59, Mahesh Renduchintala <
> mahesh.renduchint...@aline-consulting.com>:
>
> Hi,
>
> We saw all the ignite nodes crash this morning. Below are the error logs.
> Why would "Failed to Resolve Node Topology" occur?
> what would cause this?
> If there is a network disturbance, should I not get some sort of
> segmentation error?
>
>
>
>
> [14:30:10,095][INFO][grid-timeout-worker-#43][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=1ec1e3f9, uptime=1 day, 03:54:03.143]
> ^-- H/N/C [hosts=11, nodes=11, CPUs=46]
> ^-- CPU [cur=0.93%, avg=7.03%, GC=0%]
> ^-- PageMemory [pages=11582153]
> ^-- Heap [used=25392MB, free=48.34%, comm=49152MB]
> ^-- Off-heap [used=45772MB, free=30.47%, comm=65736MB]
> ^--   sysMemPlc region [used=0MB, free=99.98%, comm=100MB]
> ^--   default region [used=45771MB, free=30.16%, comm=65536MB]
> ^--   metastoreMemPlc region [used=0MB, free=99.03%, comm=0MB]
> ^--   TxLog region [used=0MB, free=100%, comm=100MB]
> ^-- Ignite persistence [used=53690MB]
> ^--   sysMemPlc region [used=0MB]
> ^--   default region [used=53689MB]
> ^--   metastoreMemPlc region [used=0MB]
> ^--   TxLog region [used=0MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=2, qSize=0]
> ^-- System thread pool [active=0, idle=32, qSize=0]
> [14:30:10,976][SEVERE][sys-stripe-17-#18][GridCacheIoManager] Failed
> processing message [senderId=8f0d3c00-7b18-456c-9066-3852abca7254,
> msg=GridNearTxPrepareRequest
> [futId=196a4f48571-8d97cb24-fd81-45b2-a60f-60c26db22c90, miniId=1,
> topVer=AffinityTopologyVersion [topVer=158, minorTopVer=2],
> subjId=8f0d3c00-7b18-456c-9066-3852abca7254, taskNameHash=0, txLbl=null,
> flags=[firstClientReq][implicitSingle],
> super=GridDistributedTxPrepareRequest [threadId=2651,
> concurrency=OPTIMISTIC, isolation=READ_COMMITTED, writeVer=GridCacheVersion
> [topVer=215702708, order=1604374451397, nodeOrder=21], timeout=42,
> reads=null, writes=ArrayList [IgniteTxEntry [txKey=null,
> val=CacheObjectImpl [val=null, hasValBytes=true][op=CREATE, val=],
> prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null],
> entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null,
> explicitVer=null, dhtVer=null, filters=CacheEntryPredicate[] [],
> filtersPassed=false, filtersSet=false, entry=null, prepared=0,
> locked=false, nodeId=null, locMapped=false, expiryPlc=null,
> transferExpiryPlc=false, flags=0, partUpdateCntr=0, serReadVer=null,
> xidVer=null]], dhtVers=null, txSize=0, plc=2, txState=null,
> flags=onePhase|last, super=GridDistributedBaseMessage [ver=GridCacheVersion
> [topVer=215702708, order=1604374451397, nodeOrder=21], committedVers=null,
> rolledbackVers=null, cnt=0, super=GridCacheIdMessage [cacheId=0,
> super=GridCacheMessage [msgId=2767960, depInfo=null,
> lastAffChangedTopVer=AffinityTopologyVersion [topVer=158, minorTopVer=2],
> err=null, skipPrepare=false]]
> class org.apache.ignite.IgniteException: Failed to resolve nodes topology
> [cacheGrp=DataStructure_DisHashMap, topVer=AffinityTopologyVersion
> [topVer=158, minorTopVer=2], history=[AffinityTopologyVersion [topVer=218,
> minorTopVer=0], AffinityTopologyVersion [topVer=219, minorTopVer=0],
> AffinityTopologyVersion [topVer=220, minorTopVer=0],
> AffinityTopologyVersion [topVer=221, minorTopVer=0],
> AffinityTopologyVersion [topVer=222, minorTopVer=0],
> AffinityTopologyVersion [topVer=223, minorTopVer=0],
> AffinityTopologyVersion [topVer=224, minorTopVer=0],
> Aff

Re: Failed to Resolve NodeTopology - ignite 2.8.1

2020-11-03 Thread Ilya Kasnacheev
Hello!

You seemed to have a very old transaction which tried to access a topology
version which was not in the history. Do you have
IGNITE_EXCHANGE_HISTORY_SIZE set?

I think this is a bug that it causes node failure. I would expect that
transaction is killed, that's all. Can you please file a ticket about this
issue against Apache Ignite JIRA?

https://issues.apache.org/jira/projects/IGNITE

Regards,
-- 
Ilya Kasnacheev


вт, 3 нояб. 2020 г. в 05:59, Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com>:

> Hi,
>
> We saw all the ignite nodes crash this morning. Below are the error logs.
> Why would "Failed to Resolve Node Topology" occur?
> what would cause this?
> If there is a network disturbance, should I not get some sort of
> segmentation error?
>
>
>
>
> [14:30:10,095][INFO][grid-timeout-worker-#43][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=1ec1e3f9, uptime=1 day, 03:54:03.143]
> ^-- H/N/C [hosts=11, nodes=11, CPUs=46]
> ^-- CPU [cur=0.93%, avg=7.03%, GC=0%]
> ^-- PageMemory [pages=11582153]
> ^-- Heap [used=25392MB, free=48.34%, comm=49152MB]
> ^-- Off-heap [used=45772MB, free=30.47%, comm=65736MB]
> ^--   sysMemPlc region [used=0MB, free=99.98%, comm=100MB]
> ^--   default region [used=45771MB, free=30.16%, comm=65536MB]
> ^--   metastoreMemPlc region [used=0MB, free=99.03%, comm=0MB]
> ^--   TxLog region [used=0MB, free=100%, comm=100MB]
> ^-- Ignite persistence [used=53690MB]
> ^--   sysMemPlc region [used=0MB]
> ^--   default region [used=53689MB]
> ^--   metastoreMemPlc region [used=0MB]
> ^--   TxLog region [used=0MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=2, qSize=0]
> ^-- System thread pool [active=0, idle=32, qSize=0]
> [14:30:10,976][SEVERE][sys-stripe-17-#18][GridCacheIoManager] Failed
> processing message [senderId=8f0d3c00-7b18-456c-9066-3852abca7254,
> msg=GridNearTxPrepareRequest
> [futId=196a4f48571-8d97cb24-fd81-45b2-a60f-60c26db22c90, miniId=1,
> topVer=AffinityTopologyVersion [topVer=158, minorTopVer=2],
> subjId=8f0d3c00-7b18-456c-9066-3852abca7254, taskNameHash=0, txLbl=null,
> flags=[firstClientReq][implicitSingle],
> super=GridDistributedTxPrepareRequest [threadId=2651,
> concurrency=OPTIMISTIC, isolation=READ_COMMITTED, writeVer=GridCacheVersion
> [topVer=215702708, order=1604374451397, nodeOrder=21], timeout=42,
> reads=null, writes=ArrayList [IgniteTxEntry [txKey=null,
> val=CacheObjectImpl [val=null, hasValBytes=true][op=CREATE, val=],
> prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null],
> entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null,
> explicitVer=null, dhtVer=null, filters=CacheEntryPredicate[] [],
> filtersPassed=false, filtersSet=false, entry=null, prepared=0,
> locked=false, nodeId=null, locMapped=false, expiryPlc=null,
> transferExpiryPlc=false, flags=0, partUpdateCntr=0, serReadVer=null,
> xidVer=null]], dhtVers=null, txSize=0, plc=2, txState=null,
> flags=onePhase|last, super=GridDistributedBaseMessage [ver=GridCacheVersion
> [topVer=215702708, order=1604374451397, nodeOrder=21], committedVers=null,
> rolledbackVers=null, cnt=0, super=GridCacheIdMessage [cacheId=0,
> super=GridCacheMessage [msgId=2767960, depInfo=null,
> lastAffChangedTopVer=AffinityTopologyVersion [topVer=158, minorTopVer=2],
> err=null, skipPrepare=false]]
> class org.apache.ignite.IgniteException: Failed to resolve nodes topology
> [cacheGrp=DataStructure_DisHashMap, topVer=AffinityTopologyVersion
> [topVer=158, minorTopVer=2], history=[AffinityTopologyVersion [topVer=218,
> minorTopVer=0], AffinityTopologyVersion [topVer=219, minorTopVer=0],
> AffinityTopologyVersion [topVer=220, minorTopVer=0],
> AffinityTopologyVersion [topVer=221, minorTopVer=0],
> AffinityTopologyVersion [topVer=222, minorTopVer=0],
> AffinityTopologyVersion [topVer=223, minorTopVer=0],
> AffinityTopologyVersion [topVer=224, minorTopVer=0],
> AffinityTopologyVersion [topVer=225, minorTopVer=0],
> AffinityTopologyVersion [topVer=226, minorTopVer=0],
> AffinityTopologyVersion [topVer=227, minorTopVer=0]], snap=Snapshot
> [topVer=AffinityTopologyVersion [topVer=227, minorTopVer=0]],
> locNode=TcpDiscoveryNode [id=1ec1e3f9-4c61-4316-9db3-5b35379570ab,
> consistentId=afc388a3-aa34-4553-9b62-1ea36657feb0, addrs=ArrayList
> [192.168.1.6], sockAddrs=HashSet [/192.168.1.6:47500], discPort=47500,
> order=2, intOrder=2, lastExchangeTime=1604224232725, loc=true,
> ver=2.8.1#20200521-sha1:86422096, isClient=false]]
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.resolveDiscoCache(GridDiscoveryManager.java:1999)

Re: Ignite JDBC connection pooling mechanism

2020-11-03 Thread Ilya Kasnacheev
Hello!

Are you sure that the Ignite cluster is in fact up? :)

If it is, maybe your usage patterns of this pool somehow assign the
connection to two different threads, which try to do queries in parallel.
In theory, this is what connection pools are explicitly created to avoid,
but maybe there's some knob you have to turn to actually make them
thread-exclusive.

Also, does it happen every time? How soon would it happen?

Regards,
-- 
Ilya Kasnacheev


пн, 2 нояб. 2020 г. в 12:31, Sanjaya :

> Hi All,
>
> we are trying to use HIkari connection pooling with ignite JdbcThinDriver.
> we are facing issue as
>
>
> Any idea what is the supported connection pooling mechanism work with
> IgniteThinDriver
>
>
> ERROR LOG
> ==
>
> WARN  com.zaxxer.hikari.pool.ProxyConnection.157 sm-event-consumer prod
> sm-event-consumer-v1-55f4db767d-2kskt - HikariPool-1 - Connection
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection@68f0e2a1 marked as
> broken because of SQLSTATE(08006), ErrorCode(0)
>
> java.sql.SQLException: Failed to communicate with Ignite cluster.
>
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:760)
>
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeBatch(JdbcThinStatement.java:651)
>
> at
> com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)
>
> at
>
> com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
>
> at
>
> org.springframework.jdbc.core.JdbcTemplate.lambda$batchUpdate$2(JdbcTemplate.java:950)
>
> at
> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:617)
>
> at
> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:647)
>
> at
>
> org.springframework.jdbc.core.JdbcTemplate.batchUpdate(JdbcTemplate.java:936)
>
> at
>
> org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.batchUpdate(NamedParameterJdbcTemplate.java:366)
>
> at
>
> com.ecoenergy.cortix.sm.event.cache.SMIgniteCacheManager.updateObjectStates(SMIgniteCacheManager.java:118)
>
> at
>
> com.ecoenergy.cortix.sm.event.notifcator.SMIgniteNotificator.notify(SMIgniteNotificator.java:69)
>
> at
>
> com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.notify(ObjectEventHandler.java:100)
>
> at
>
> com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.receiveEvents(ObjectEventHandler.java:86)
>
> at
>
> com.ecoenergy.cortix.sm.event.consumer.ObjectEventConsumer.processObjectEvents(ObjectEventConsumer.java:60)
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Possible memory leak in Ignite 2.9

2020-11-03 Thread Ilya Kasnacheev
Hello!

An 800MB entry is far above the entry size that we ever expected to see.
Even briefly holding such entries on heap will cause problems for you, as
will sending them over communication.

I recommend splitting entries into chunks, maybe. That's what IGFS did
basically, we decided to ax it, but you can still use that approach.
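
A rough sketch of the chunking approach, assuming a cache named "blobs" and
a 1 MiB chunk size (both made up for illustration):

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class ChunkedPut {
    private static final int CHUNK = 1024 * 1024; // 1 MiB per entry

    static void putChunked(Ignite ignite, String key, byte[] data) {
        IgniteCache<String, Object> cache = ignite.cache("blobs");

        int parts = (data.length + CHUNK - 1) / CHUNK;
        cache.put(key + ":parts", parts); // metadata entry used for reassembly

        for (int i = 0; i < parts; i++) {
            int from = i * CHUNK;
            int to = Math.min(data.length, from + CHUNK);
            cache.put(key + ":" + i, Arrays.copyOfRange(data, from, to));
        }
    }
}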

Having said that, if you can check the heap dump to see where are these
stuck byte arrays referenced from, I may check it.

Regards,
-- 
Ilya Kasnacheev


вт, 3 нояб. 2020 г. в 11:48, Kalin Katev :

> Hi,
>
> not too long ago I tested apache ignite for my use case on Open JDK 11.
> The use case consists of writing cache entries with values going up to
> 800MB in size, the data itself being a simple string. After writing 5
> caches entries, 800 MB each, I noticed my Heap space exploding up to 11GB,
> while the entries themselves were written off-heap. The histogram of the
> heap dump shows that there are 5 tuples of byte[] arrays with size 800MB
> and 1000MB that are left dangling on heap.  I am very curious if I did
> something wrong or if there indeed is an issue in Ignite. All details can
> be seen on
> https://stackoverflow.com/questions/64550479/possible-memory-leak-in-apache-ignite
>
> Should I create a jira ticket for this issue?
>
> Best regards,
> Kalin Katev
>
> Resonance GmbH
>


Re: 2.8.1 : INFO org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [] - Accepted incoming communication connection

2020-11-03 Thread Ilya Kasnacheev
Hello!

We have a lot of tests which do exactly that, and they don't seem to
exhibit that behavior. Please provide a reproducer.

Regards,
-- 
Ilya Kasnacheev


вт, 3 нояб. 2020 г. в 11:13, VeenaMithare :

> Hi Ilya,
>
> This is easy to reproduce. Have a server node and a client node in a
> cluster. Stop and start the client immediately so that the start happens
> within the failure detection timeout ( 10 sec typically ). You will see
> these messages in the client log as it is starting up the second time.
>
> Let me know if you still need a reproducer.
>
> regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Client Node Not Responding - Hung

2020-11-02 Thread Ilya Kasnacheev
Hello!

What I have noticed in your logs is a huge number of network errors:
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to connect
to address [addr=vrnv02ax04705.INT.CARLSONWAGONLIT.COM/10.212.120.67:51605,
err=Failed to read remote node recovery handshake (connection closed).]
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3462)
   ... 14 more
   Caused by: class org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$HandshakeException:
Failed to read remote node recovery handshake (connection closed).
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeTcpHandshake(TcpCommunicationSpi.java:3803)
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3313)
   ... 14 more
   Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to connect
to address [addr=vrnv02ax04705.INT.CARLSONWAGONLIT.COM/10.212.120.67:51605,
err=Failed to read remote node recovery handshake (connection closed).]
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3462)
   ... 14 more
   Caused by: class org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$HandshakeException:
Failed to read remote node recovery handshake (connection closed).
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeTcpHandshake(TcpCommunicationSpi.java:3803)
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3313)
   ... 14 more
   Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to connect
to address [addr=vrnv02ax04705.INT.CARLSONWAGONLIT.COM/10.212.120.67:51605,
err=Failed to read remote node recovery handshake (connection closed).]
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3462)
   ... 14 more
   Caused by: class org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$HandshakeException:
Failed to read remote node recovery handshake (connection closed).
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeTcpHandshake(TcpCommunicationSpi.java:3803)
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3313)
   ... 14 more
   Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to connect
to address [addr=vrnv02ax04705.INT.CARLSONWAGONLIT.COM/10.212.120.67:51605,
err=Failed to read remote node recovery handshake (connection closed).]
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3462)
   ... 14 more
   Caused by: class org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$HandshakeException:
Failed to read remote node recovery handshake (connection closed).
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeTcpHandshake(TcpCommunicationSpi.java:3803)
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3313)
   ... 14 more
   Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to connect
to address [addr=vrnv02ax04705.INT.CARLSONWAGONLIT.COM/10.212.120.67:51605,
err=Failed to read remote node recovery handshake (connection closed).]
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3462)
   ... 14 more
   Caused by: class org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$HandshakeException:
Failed to read remote node recovery handshake (connection closed).
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeTcpHandshake(TcpCommunicationSpi.java:3803)
   at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3313)
   ... 14 more


Have you tried making sure that the nodes are in fact connectable by
Communication?

Regards,
-- 
Ilya Kasnacheev


пн, 2 нояб. 2020 г. в 13:17, Ravi Makwana :

> Hi,
>
> We are using Apache Ignite 2.7.0 binary and servers are using Linux OS &
> app servers are using Windows OS.We are using Apache Ignite .Net APIs.
>
> Recently we have noticed that our application is not responding and all
> the calls  failed.
>
> App server has 32 GB RAM & We are specifying JVM Heap = 8 GB
>
> I am sharing a client log and could you please suggest why the client node
> is not responding & what is the possible c

Re: Ignite 2.9 one way client to server communication

2020-11-02 Thread Ilya Kasnacheev
Hello!

I don't think this is an issue of configuration; it's more about how many
nodes you have and the length of your transactions/operations.

It's all about PME length.

Regards,
-- 
Ilya Kasnacheev


пн, 2 нояб. 2020 г. в 16:05, Hemambara :

> Thank you Stephen and Ilya for your response.
>
> Please find my server and client config and let me know if I a missing
> anything which is causing these delays. We have 8 jvm server nodes running
> on Ignite 2.8.0 with 4gig each and each node is on separate host. We have
> 60
> client nodes (Ignite 2.8.0) using this same client configuration. We are
> not
> using persistence, it is purely in-memory. Both server and clients are in
> same data center. Initially first few clients are able to connect in 1-2
> minutes, but as it grows, last clients are taking 5-6 minutes. We really
> want to reduce the connection time so that we can use map listeners. Right
> now, this is a roadblock to using any other thick-client
> functionality.
>
> Can you please clarify the below queries:
> 1) If the client is not part of the ring, is there any reason why it
> has to trigger PME?
> 2) If it is just connecting to one server node, like a thin client,
> does it transfer/wait for any additional events before it successfully
> establishes a connection, which is causing delays?
> 3) Does a client node wait until all the other server and client
> nodes get notified that it joined?
> 4) I am defining the cache on the client config as well; I think it is
> not required. Will it cause any delays during connectivity?
> 5) Is there any plan / any other way to get map listener functionality
> in a thin client (other than continuous query)?
>
>
> Server config:
> ---
>
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
>  class="org.apache.ignite.configuration.ClientConnectorConfiguration">
> "/>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.binary.BinaryTypeConfiguration">
> 
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>
>  value=#Myport>"/>
> 
> 
> 
> 
> 
> "/>
> 
> 
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> 
> 
> localhost
> host1:port#
> host2:port#
> host3:port#
> host4:port#
> host5:port#
> host6:port#
> host7:port#
> host8:port#
> 
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
> 
> 
> 
>
> 
>
>
>
> Client config
> -
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.configuration.BinaryConfiguration">
> 
> 
> 
>  class="org.apache.ignite.binary.BinaryTypeConfiguration">
>  value="a.b.c.MyClass"/>
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>  value=MyPort#>"/>
> 
> 
> 
> 
> 
> "/>
> 
> 
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> 
> 
> localhost
> host1:port#
> host2:port#
> host3:port#
> host4:port#
> host5:port#
> host6:port#
> host7:port#
> host8:port#
> 
> 
> 
> 
> 
> 
> 
>
>
>
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
> 
> 
> 
>
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Client App Object Allocation Rate

2020-11-02 Thread Ilya Kasnacheev
Hello!

Okay, that's not very cool. I hope to get some response from development
side at this point.

Sans reaction, I will file a ticket.

Regards,
-- 
Ilya Kasnacheev


пн, 2 нояб. 2020 г. в 15:03, ssansoy :

> Apologies I may have spoken to soon (I was looking at the wrong process).
>
> It looks like we can't turn EVT_NODE_METRICS_UPDATED off, as it is
> designated as an internal event: GridEventStorageManager.disableEvents,
> line 441 (ignite 2.8.1), checks whether the event being disabled is part
> of EVTS_DISCOVERY_ALL, which it is, so it isn't set to false...
>
> Are there any workarounds to this?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-02 Thread Ilya Kasnacheev
Hello!

You may actually use our data streamer (with allowOverwrite false).

Once you call flush() on it and it returns, you should be confident that
all 10,000 entries are readable from cache. Of course it has to be 1 cache.
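
A minimal sketch (the cache name and key/value types are made up):

import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class BatchWrite {
    // When flush() returns, every entry of the batch is readable from the cache.
    static void writeAll(Ignite ignite, Map<Long, String> batch) {
        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
            streamer.allowOverwrite(false); // initial-load mode, as suggested above
            streamer.addData(batch);
            streamer.flush();
        }
    }
}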

Regards,
-- 
Ilya Kasnacheev


вт, 27 окт. 2020 г. в 13:18, ssansoy :

> Hi thanks for the reply. Appreciate the suggestion - and if creating a new
> solution around this, we would likely take that tact. Unfortunately the
> entire platform we are looking to migrate over to ignite has dependencies
> in
> places for updates to come in as a complete batch (e.g. whatever was in an
> update transaction).
>
> We've experimented with putting a queue in the client as you say, with a
> timeout which gathers all sequentially arriving updates from the continuous
> query and grouping them together after a timeout of e.g. 50ms. However this
> is quite fragile/timing sensitive and not something we can comfortably put
> into production.
>
> Are there any locking or signaling mechanisms (or anything else really)
> that
> might help us here? E.g. we buffer the updates in the client, and await
> some
> signal that the updates are complete. This signal would need to be fired
> after the continuous query has seen all the updates. E.g. the writer will:
>
> Write 10,000 records to the cache
> Notify something
>
> The client app will:
> Receive 10,000 updates, 1 at a time, queueing them up locally
> Upon that notification, drain this queue and process the records.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite 2.9 one way client to server communication

2020-11-02 Thread Ilya Kasnacheev
Hello!

No, it is not supposed to reduce the time taken. Instead, it is solving the
problem that client may be behind NAT and not reachable from outside. Or
server nodes may be inside K8S with similar results.

Thick client will still take time to connect since it triggers PME. Clients
were never a part of ring (unless forceServerMode is set) - they only talk
via discovery to one server node and only switch if it becomes unavailable

Regards,
-- 
Ilya Kasnacheev


сб, 31 окт. 2020 г. в 08:03, Hemambara :

> I see that ignite 2.9 has added support for one-way thick-client to server
> connections. Does it reduce the time taken to connect thick client to
> server
> and provides all thick client functionalities? Does client still be in
> ring?
> Right now we are facing issues with the thick client, where it is taking more
> time to connect, especially when we have 60 clients. We switched to thin
> clients for now, but we need map listeners. Does upgrading to 2.9 help reduce
> long connection times? Also, is there any plan to provide map listeners on thin
> clients?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IgniteSpiOperationTimeoutException: Operation timed out [timeoutStrategy= ExponentialBackoffTimeoutStrategy

2020-10-30 Thread Ilya Kasnacheev
Hello!

Do you have a reproducer for this behaviour that I could run and see it
failing?

Regards,
-- 
Ilya Kasnacheev


вт, 27 окт. 2020 г. в 22:02, VeenaMithare :

> Hi Ilya,
>
> The node communication issue is because one of the nodes is being
> restarted - and not due to network failure. The original issue is as below:
>
> Our setup:
> Servers - 3 node cluster
> Reader clients: wait for an update on an entry of a cache (around 20 of them)
> Writer client: 1
>
> If one of the reader clients restarts while the writer is writing into the
> entry of the cache, the server attempts to send the update to the failed
> client's local listener. It keeps attempting to communicate with the failed
> client (the client's continuous query local listener?) till it times out as
> per connTimeoutStrategy=ExponentialBackoffTimeoutStrategy. (Please find the
> snippet of the exception below. The complete log is attached as an
> attachment.) This delays the completion of the transaction that was started
> by the writer client.
>
> Is there any way the writer client could complete the transaction without
> getting impacted by the reader client restarts?
>
> regards,
> Veena.
> --
> Sent from the Apache Ignite Users mailing list archive
> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>


Re: Client App Object Allocation Rate

2020-10-30 Thread Ilya Kasnacheev
Hello!

I guess that you have EVT_NODE_METRICS_UPDATED event enabled on client
nodes (but maybe not on server nodes)

It will indeed produce a lot of garbage so I recommend disabling the
recording of this event by calling
ignite.events().disableLocal(EVT_NODE_METRICS_UPDATED);

+ dev@

Why do we record EVT_NODE_METRICS_UPDATED by default? Sounds like a bad
idea yet we enable recording of all internal events in
GridEventStorageManager.

-- 
Ilya Kasnacheev


пн, 26 окт. 2020 г. в 19:37, ssansoy :

> Hi, here's an example (using YourKit rather than JFR).
>
> Apologies, I had to obfuscate some of the company specific information.
> This
> shows a window of about 10 seconds of allocations
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2797/MetricsUpdated.png>
>
>
> Looks like these come from GridDiscoveryManager - creating a new string
> every time. This happens several times per second it seems. Some of these
> mention other client nodes - so some other production app in our firm, that
> uses the cluster, has an impact on a different production app. Is there any
> way to turn this off? Each of our clients need to be isolated such that
> other client apps do not interfere in any way
>
> Also
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2797/TcpDiscoveryClientMetricsUpdateMessage.png>
>
>
> These update messages seem to come in even though metricsEnabled is turned
> off on the client (not specified).
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Execution of local SqlFieldsQuery on client node disallowed

2020-10-26 Thread Ilya Kasnacheev
Hello!

You are using an Ignite Thick Client driver. As its name implies, it will
start a local client node and then connect to it, without the option of
doing local queries.

You need to use Ignite Thin JDBC driver: jdbc:ignite:thin://
Then you can do local queries.
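
For reference, a minimal thin-driver sketch (the host, port and table name
are illustrative; 10800 is the default thin protocol port):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinJdbcExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM Person")) {
            while (rs.next())
                System.out.println(rs.getLong(1));
        }
    }
}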

Regards,
-- 
Ilya Kasnacheev


сб, 24 окт. 2020 г. в 16:04, narges saleh :

> Hello Ilya
> Yes, it happens all the time. It seems ignite forces the "client"
> establishing the jdbc connection into a client mode, even if I set
> client=false.  The sample code and config are attached. The question is how
> do I force JDBC connections from a server node.
> thanks.
>
> On Fri, Oct 23, 2020 at 10:31 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Does this happen every time? If so, do you have a reproducer for the
>> issue?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пт, 23 окт. 2020 г. в 13:06, narges saleh :
>>
>>> Denis -- Just checked. I do specify my services to be deployed on server
>>> nodes only. Why would ignite think that I am running my code on a client
>>> node?
>>>
>>> On Fri, Oct 23, 2020 at 3:50 AM narges saleh 
>>> wrote:
>>>
>>>> Hi Denis
>>>> What would make an ignite node a client node? The code is invoked via
>>>> an ignite service deployed on each node and I am not setting the client
>>>> mode anywhere. The code sets the jdbc connection to local and tries to
>>>> execute a sql code on the node in some interval. By the way, I didn't know
>>>> one could deploy a service on client nodes. Do I need to explicitly mark a
>>>> node as a server node when deploying a service?
>>>> thanks
>>>>
>>>> On Thu, Oct 22, 2020 at 9:42 PM Denis Magda  wrote:
>>>>
>>>>> The error message says you're attempting to run the query on a client
>>>>> node. If that's the case (if the service is deployed on the client node),
>>>>> then the local flag has no effect because client nodes don't keep your 
>>>>> data
>>>>> locally but rather consume it from servers.
>>>>>
>>>>> -
>>>>> Denis
>>>>>
>>>>>
>>>>> On Thu, Oct 22, 2020 at 6:26 PM narges saleh 
>>>>> wrote:
>>>>>
>>>>>> Hi All,
>>>>>> I am trying to execute a sql query via a JDBC  connection on the
>>>>>> service node (the query is run via a service), but I am getting 
>>>>>> *Execution
>>>>>> of local SqlFieldsQuery on client node disallowed.*
>>>>>> *The JDBC connection has the option local=true as I want to run the
>>>>>> query on the data on the local node only.*
>>>>>> *Any idea why I am getting this error?*
>>>>>>
>>>>>> *thanks.*
>>>>>>
>>>>>


Re: alter table logging slow

2020-10-26 Thread Ilya Kasnacheev
Hello!

Usually it is I/O bound, meaning, you can only speed it up by having more,
faster disks.

However, in some cases, tuning persistence settings such as checkpoint
frequency, checkpoint page buffer size and checkpoint thread pool size may
help. You can also try the direct-io module, but you may see mixed results.
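
For illustration, a sketch of those knobs on the Java side (all values are
illustrative, not recommendations):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceTuning {
    public static IgniteConfiguration configure() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("default")
            .setPersistenceEnabled(true)
            .setCheckpointPageBufferSize(2L * 1024 * 1024 * 1024); // 2 GiB

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region)
            .setCheckpointFrequency(180_000)  // ms between checkpoints
            .setCheckpointThreads(8)          // checkpoint thread pool size
            .setWriteThrottlingEnabled(true); // smooth out checkpoint write bursts

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}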

Regards,
-- 
Ilya Kasnacheev


пн, 26 окт. 2020 г. в 16:00, Matteo Durighetto :

> Hello Ilya,
> thank you for your answer, it makes sense. I see the class being called
> is changewal and, as you wrote, it probably calls a flush of the cache.
> Is it possible to speed up the flush to disk with more threads? I think
> they are controlled by the system pool, or is it another parameter?
>
> Kind regards
>
> Matteo Durighetto
>
>
>
>
>
> Il giorno lun 26 ott 2020 alle ore 13:05 Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> ha scritto:
>
>> Hello!
>>
>> I guess it has to write all the data to disk. After 'alter table logging'
>> returns, you are guaranteed consistency on this table, meaning all of its
>> pages have to be persisted to disk. Obviously, it may take a lot of time if
>> you have many gigabytes to flush.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пн, 26 окт. 2020 г. в 11:59, Matteo Durighetto :
>>
>>> Hello,
>>>I found an unexpected behaviour on apache ignite.
>>> If you do ( TABLE partitioned with 1 backup and ATOMIC mode ):
>>>
>>> alter table ..  nologging;
>>> set stream on ;
>>> .. loading a lot of data with jdbc ..
>>> .. close connection to flush data ..
>>> .. reopen connection..
>>> alter table .. logging
>>>
>>> The last step "alter table logging" writes a lot of data and it's
>>> "slow": if we load the table with 26 threads in about 10 minutes, with
>>> every thread using a dedicated partition, the nologging phase is around
>>> 11 minutes.
>>>
>>> I would like to understand what it is doing and how to speed up the
>>> process, I try to understand from SqlAlterTableCommand.java what it's
>>> doing, but it's not straightforward.
>>>
>>> Kind Regards
>>>
>>> Matteo Durighetto
>>>
>>>
>>>
>>>

Re: Client App Object Allocation Rate

2020-10-26 Thread Ilya Kasnacheev
Hello!

Can you please run your app with JFR configured to record object
allocation, to see where it actually happens, and share some results?

Thanks,
-- 
Ilya Kasnacheev


пт, 23 окт. 2020 г. в 17:40, ssansoy :

> This doesn't seem to help unfortunately.
> Re-examining the allocation stats, it seems the app is actually allocating
> around 1.5mb per second with ignite (vs only 0.15mb per second without
> ignite in the app).
> I've read about past issues with IGNITE_EXCHANGE_HISTORY_SIZE causing a lot
> of allocations, but thought this had been fixed prior to 2.8 (we are on
> 2.8.1).
>
> Is there anything else we can tweak around this? Cache metrics etc are off
> on the server and client config. What kind of other objects might be being
> created at this rate?
>
> We can change the GC settings etc which we have done appropriately for our
> app, but we'd like to understand what is being created and why rather than
> change our GC settings to work around this.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: alter table logging slow

2020-10-26 Thread Ilya Kasnacheev
Hello!

I guess it has to write all the data to disk. After 'alter table logging'
returns, you are guaranteed consistency on this table, meaning all of its
pages have to be persisted to disk. Obviously, it may take a lot of time if
you have many gigabytes to flush.

Regards,
-- 
Ilya Kasnacheev


пн, 26 окт. 2020 г. в 11:59, Matteo Durighetto :

> Hello,
>I found an unexpected behaviour on apache ignite.
> If you do ( TABLE partitioned with 1 backup and ATOMIC mode ):
>
> alter table ..  nologging;
> set stream on ;
> .. loading a lot of data with jdbc ..
> .. close connection to flush data ..
> .. reopen connection..
> alter table .. logging
>
> The last step "alter table logging" writes a lot of data and it's "slow":
> if we load the table with 26 threads in about 10 minutes, with every thread
> using a dedicated partition, the nologging phase is around 11 minutes.
>
> I would like to understand what it is doing and how to speed up the
> process, I try to understand from SqlAlterTableCommand.java what it's
> doing, but it's not straightforward.
>
> Kind Regards
>
> Matteo Durighetto
>
>
>
>
>


Re: IgniteSpiOperationTimeoutException: Operation timed out [timeoutStrategy= ExponentialBackoffTimeoutStrategy

2020-10-23 Thread Ilya Kasnacheev
Hello!

Looks like a network timeout, probably caused by a firewall between the two
nodes, impairing their communication.

You can try updating to 2.9 and enabling communication via discovery.

Regards,
-- 
Ilya Kasnacheev


чт, 8 окт. 2020 г. в 18:17, VeenaMithare :

> Hi ,
>
> Our setup :
> Servers - 3 node cluster
> Reader clients : wait for an update on an entry of a cache ( around 20 of
> them )
> Writer Client : 1
>
> If one of the reader clients restarts while the writer is writing into the
> entry of the cache, the server attempts to send the update to the failed
> client's local listener. It keeps attempting to communicate with the failed
> client (the client's continuous query local listener?) till it times out as
> per connTimeoutStrategy=ExponentialBackoffTimeoutStrategy. (Please find the
> snippet of the exception below. The complete log is attached as an
> attachment.) This delays the completion of the transaction that was started
> by the writer client.
>
> Is there any way the writer client could complete the transaction without
> getting impacted by the reader client restarts ?
>
>
>
>
>
> 2020-10-08 14:35:21,465 [sys-stripe-26-#27] WARN
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [] - Handshake
> timed out (will stop attempts to perform the handshake)
> [node=3311a67b-bfcb-41af-8c09-b2e8f2fbde9b,
> connTimeoutStrategy=ExponentialBackoffTimeoutStrategy [maxTimeout=60,
> totalTimeout=3, startNanos=223772180706400, currTimeout=60],
> err=Operation timed out [timeoutStrategy= ExponentialBackoffTimeoutStrategy
> [maxTimeout=60, totalTimeout=3, startNanos=223772180706400,
> currTimeout=60]], addr=MACHINENAME.COMPANY.LOCAL/1.2.3.4:47103,
> failureDetectionTimeoutEnabled=true, timeout=0]
> 2020-10-08 14:35:21,465 [sys-stripe-26-#27] ERROR
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [] - Failed to
> send message to remote node [node=TcpDiscoveryNode
> [id=3311a67b-bfcb-41af-8c09-b2e8f2fbde9b,
> consistentId=3311a67b-bfcb-41af-8c09-b2e8f2fbde9b, addrs=ArrayList
> [0:0:0:0:0:0:0:1, 1.2.3.4, 127.0.0.1], sockAddrs=HashSet
> [MACHINENAME.COMPANY.LOCAL/1.2.3.4:0, /0:0:0:0:0:0:0:1:0, /127.0.0.1:0],
> discPort=0, order=12, intOrder=8, lastExchangeTime=1602163619453,
> loc=false,
> ver=2.8.1#20200521-sha1:86422096, isClient=true], msg=GridIoMessage [plc=2,
> topic=T4 [topic=TOPIC_CACHE, id1=94370aa1-e970-37ae-9471-fd583d923522,
> id2=3311a67b-bfcb-41af-8c09-b2e8f2fbde9b, id3=0], topicOrd=-1,
> ordered=true,
> timeout=0, skipOnTimeout=true, msg=GridContinuousMessage
> [type=MSG_EVT_NOTIFICATION, routineId=7dac3ff4-3460-4dc4-8324-a1ebe4561854,
> data=null, futId=null]]]
> org.apache.ignite.IgniteCheckedException: Failed to connect to node (is
> node
> still alive?). Make sure that each ComputeTask and cache Transaction has a
> timeout set in order to prevent parties from waiting forever in case of
> network issues [nodeId=3311a67b-bfcb-41af-8c09-b2e8f2fbde9b,
> addrs=[/127.0.0.1:47103, /0:0:0:0:0:0:0:1:47103,
> MACHINENAME.COMPANY.LOCAL/1.2.3.4:47103]]
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioSession(TcpCommunicationSpi.java:3698)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3458)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createCommunicationClient(TcpCommunicationSpi.java:3198)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:3078)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2918)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2877)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:2035)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.sendOrderedMessage(GridIoManager.java:2280)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1963)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1934)
> ~[ignite-core-2.8.1.jar:2.8.1]
> at
>
> org.apache.ignite.

Re: Isolating server nodes to a fixed virtual IP or interface

2020-10-23 Thread Ilya Kasnacheev
Hello!

Have you also tried setting the IgniteConfiguration.localHost property?
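
A minimal sketch, assuming the VPN address of the node is 10.0.0.3 (taken
from the example network in your question):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BindToVpnAddress {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Bind discovery and communication to the VPN interface only.
        cfg.setLocalHost("10.0.0.3");

        Ignite ignite = Ignition.start(cfg);
    }
}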

Regards,
-- 
Ilya Kasnacheev


чт, 22 окт. 2020 г. в 20:54, Gilles :

> Hello,
>
> I'm currently moving a project from development stage to production. The
> aim is that my cluster server nodes are running on multiple virtual private
> servers, inside a VPN (10.0.0.0/24).
>
> But how do I make sure that I lock any communication of a node to either a
> specific network interface, or a static virtual IP (eg 10.0.0.3)?
>
> Some googling got me to this answer from old documentation.
>
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>   
> 
>  
>
> However the nodes are still accessible on their public IP addresses. So
> the question is, what is the correct way to isolate them from the public?
>
> I will be using a software firewall on these servers too, but I like to
> have the peace of mind from the extra layer of security.
>
>
> Thanks in advance,
> Gilles
>
> And to the creators, maintainers and contributors, thank you so much for
> this great piece of software! Never had so much fun doing "cumbersome"
> database work.
>
>
>
>


Re: Execution of local SqlFieldsQuery on client node disallowed

2020-10-23 Thread Ilya Kasnacheev
Hello!

Does this happen every time? If so, do you have a reproducer for the issue?

Regards,
-- 
Ilya Kasnacheev


пт, 23 окт. 2020 г. в 13:06, narges saleh :

> Denis -- Just checked. I do specify my services to be deployed on server
> nodes only. Why would ignite think that I am running my code on a client
> node?
>
> On Fri, Oct 23, 2020 at 3:50 AM narges saleh  wrote:
>
>> Hi Denis
>> What would make an ignite node a client node? The code is invoked via an
>> ignite service deployed on each node and I am not setting the client mode
>> anywhere. The code sets the jdbc connection to local and tries to execute a
>> sql code on the node in some interval. By the way, I didn't know one could
>> deploy a service on client nodes. Do I need to explicitly mark a node as a
>> server node when deploying a service?
>> thanks
>>
>> On Thu, Oct 22, 2020 at 9:42 PM Denis Magda  wrote:
>>
>>> The error message says you're attempting to run the query on a client
>>> node. If that's the case (if the service is deployed on the client node),
>>> then the local flag has no effect because client nodes don't keep your data
>>> locally but rather consume it from servers.
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Thu, Oct 22, 2020 at 6:26 PM narges saleh 
>>> wrote:
>>>
>>>> Hi All,
>>>> I am trying to execute a sql query via a JDBC  connection on the
>>>> service node (the query is run via a service), but I am getting *Execution
>>>> of local SqlFieldsQuery on client node disallowed.*
>>>> *The JDBC connection has the option local=true as I want to run the
>>>> query on the data on the local node only.*
>>>> *Any idea why I am getting this error?*
>>>>
>>>> *thanks.*
>>>>
>>>


Re: Workaround for getting ContinuousQuery to support transactions

2020-10-23 Thread Ilya Kasnacheev
Hello!

Nothing obviously wrong but it still seems to me that you're stretching it
too far.

Maybe you need to rethink your cache entry granularity (put more stuff in
the entry to make it self-contained) or use some kind of message queue.

Regards,
-- 
Ilya Kasnacheev


вт, 20 окт. 2020 г. в 19:37, ssansoy :

> Following on from:
>
> http://apache-ignite-users.70518.x6.nabble.com/ContinuousQuery-Batch-updates-td34198.html
>
> The takeaway from there is that the continuous queries do not honour
> transactions, so if a writer writes 100 records (e.g.
> CalculationParameters)
> in a transaction, the continuous query will see the updates before the
> entire batch of 100 has been committed.
> This is a show stopper issue for us for using ignite unfortunately so we
> are
> trying to think of some work arounds.
>
> We are considering updating the writer app so, when writing the 100
> CalculationParameters records, it:
>
> 1. Writes the 100 CalculationParameter records to the cluster (e.g. with
> some incremented version e.g. 2)
> 2. It writes a separate entry into a special "Modifications" cache, with
> the
> number of rows written (numRows), and the version id of those records
> (versionId).
>
> Client apps don't subscribe to the CalculationParameter cache. Instead apps
> subscribe to the Modifications cache.
>
> The remote filter will:
>
> 1. Do a cache.get or a scan query on all the records in
> CalculationParameter, filtering on versionId=2. The query has to keep
> repeating until all rows are visible (e.g. the numRows records are seen).
> Then these records are subjected to another filter (e.g. the usual filter
> criteria that the client app would have applied to CalculationParameter). If
> there are any records, then the filter returns true.
>
> 2. The transformer does a similar process to the above, and groups all the
> numRows records into a collection, and returns this collection to the
> localListen in the client. The client then has access to all the records it
> is interested in, in one batch.
>
> We would have liked to avoid waiting/retrying for all the
> CalculationParameter records to appear after the Modifications update is
> seen, but since this is happening on the cluster and not on the client we
> can probably live with it.
>
> Do any ignite developers in here see any fundamental flaws with this
> approach? We ultimately just want the localListen to be called with a
> collection of records that were all updated at the same time - so if there
> are any other things worth trying please shout.
> Thanks!
> Sham
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Cluster got frozen/unreponsive

2020-10-23 Thread Ilya Kasnacheev
Hello!

This looks like a transactional deadlock. Is this operation part of a
transaction? Are you locking keys in the same order?
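
As a sketch, one common way to impose a deterministic lock order on
putAll/removeAll (assuming Comparable keys; types are illustrative):

import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

import org.apache.ignite.IgniteCache;

public class OrderedBulkOps {
    // Sorting the keys makes every bulk operation acquire its locks in the
    // same (ascending) order, which prevents this kind of deadlock.
    static void putAllOrdered(IgniteCache<Long, String> cache, Map<Long, String> updates) {
        cache.putAll(new TreeMap<>(updates));
    }

    static void removeAllOrdered(IgniteCache<Long, String> cache, Set<Long> keys) {
        cache.removeAll(new TreeSet<>(keys));
    }
}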

Regards,
-- 
Ilya Kasnacheev


вс, 18 окт. 2020 г. в 09:35, Kamlesh Joshi :

> Hi Igniters,
>
>
>
> Our PROD cluster was frozen due to a deadlock as per the below logs. Couldn't
> find further info on this: is it due to putall/removeall (and if so, for
> which cache and keys), or is it due to some other issue?
>
> We are using putall and removeall but not doing any ordering while using
> it.
>
>
>
> Could you please check logs, tell us if you find any reason for this
> deadlock. If it is due to putall/removeall, how to get info for which cache
> and keys. Please find cluster configuration below,
>
>
>
> *##IgniteConfiguration###*
>
> * value="${IGNITE_INSTANCE_NAME}" />*
> ...
>
> *### Cache Configuration *
> ...
>
> *Below are logs,*
>
>
>
> *[2020-10-17T16:12:17,158][INFO
> ][db-checkpoint-thread-#385%EDIFCustomerCC%][GridCacheDatabaseSharedManager]
> Checkpoint started [checkpointId=53afef47-ff6e-494d-a448-e35cc45b4eff,
> startPtr=FileWALPointer [idx=1928027, fileOff=3575743, len=63453],
> checkpointLockWait=0ms, checkpointLockHoldTime=33ms,
> walCpRecordFsyncDuration=5ms, pages=76968, reason='timeout']*
>
> *[2020-10-17T16:12:17,224][INFO
> ][wal-file-archiver%EDIFCustomerCC-#360%EDIFCustomerCC%][FileWriteAheadLogManager]
> Copied file
> [src=/datastore1/wal/node00-34dda35b-606f-4124-936e-7b1a01531043/0006.wal,
> dst=/datastore1/archive/node00-34dda35b-606f-4124-936e-7b1a01531043/01928026.wal]*
>
> *[2020-10-17T16:12:18,318][INFO
> ][db-checkpoint-thread-#385%EDIFCustomerCC%][GridCacheDatabaseSharedManager]
> Checkpoint finished [cpId=53afef47-ff6e-494d-a448-e35cc45b4eff,
> pages=76968, markPos=FileWALPointer [idx=1928027, fileOff=3575743,
> len=63453], walSegmentsCleared=5, walSegmentsCovered=[1928022 - 1928026],
> markDuration=67ms, pagesWrite=706ms, fsync=454ms, total=1227ms]*
>
> *[2020-10-17T16:12:48,251][INFO
> ][wal-file-archiver%EDIFCustomerCC-#360%EDIFCustomerCC%][FileWriteAheadLogManager]
> Starting to copy WAL segment [absIdx=1928027, segIdx=7,
> origFile=/datastore1/wal/node00-34dda35b-606f-4124-936e-7b1a01531043/0007.wal,
> dstFile=/datastore1/archive/node00-34dda35b-606f-4124-936e-7b1a01531043/01928027.wal]*
>
> *[2020-10-17T16:12:48,355][INFO
> ][wal-file-archiver%EDIFCustomerCC-#360%EDIFCustomerCC%][FileWriteAheadLogManager]
> Copied file
> [src=/datastore1/wal/node00-34dda35b-606f-4124-936e-7b1a01531043/0007.wal,
> dst=/datastore1/archive/node00-34dda35b-606f-4124-936e-7b1a01531043/01928027.wal]*
>
> *[2020-10-17T16:13:27,146][INFO
> ][wal-file-archiver%EDIFCustomerCC-#360%EDIFCustomerCC%][FileWriteAheadLogManager]
> Starting to copy WAL segment [absIdx=1928028, segIdx=8,
> origFile=/datastore1/wal/node00-34dda35b-606f-4124-936e-7b1a01531043/0008.wal,
> dstFile=/datastore1/archive/node00-34dda35b-606f-4124-936e-7b1a01531043/01928028.wal]*
>
> *[2020-10-17T16:13:27,265][INFO
> ][wal-file-archiver%EDIFCustomerCC-#360%EDIFCustomerCC%][FileWriteAheadLogManager]
> Copied file
> [src=/datastore1/wal/node00-34dda35b-606f-4124-936e-7b1a01531043/0008.wal,
> dst=/datastore1/archive/node00-34dda35b-606f-4124-936e-7b1a01531043/01928028.wal]*
>
> *[2020-10-17T16:14:06,135][INFO
> ][wal-file-archiver%EDIFCustomerCC-#360%EDIFCustomerCC%][FileWriteAheadLogManager]
> Starting to copy WAL segment [absIdx=1928029, segIdx=9,
> origFile=/datastore1/wal/node00-34dda35b-606f-4124-936e-7b1a01531043/0009.wal,
> dstFile=/datastore1/archive/node00-34dda35b-606f-4124-936e-7b1a01531043/01928029.wal]*
>
> *[2020-10-17T16:14:06,250][INFO
> ][wal-file-archiver%EDIFCustomerCC-#360%EDIFCustomerCC%][FileWriteAheadLogManager]
> Copied file
> [src=/datastore1/wal/node00-34dda35b-606f-4124-936e-7b1a01531043/0009.wal,
> dst=/datastore1/archive/node00-34dda35b-606f-4124-936e-7b1a01531043/01928029.wal]*
>
> *[2020-10-17T16:15:07,128][INFO
> ][wal-file-archiver%EDIFCustomerCC-#360%EDIFCustomerCC%][FileWriteAheadLogManager]
> Starting to copy WAL segment [absIdx=1928030, segIdx=0,
> origFile=/datastore1/wal/node00-34dda35b-606f-4124-936e-7b1a01531043/.wal,
> dstFile=/datastore1/archive/node00-34dda35b-606f-4124-936e-7b1a01531043/

Re: Urgent - Using Paired Connections leading to Connection Refuse

2020-10-22 Thread Ilya Kasnacheev
Hello!

Even with unpaired connections you need the communication ports to be
reachable in both directions, unless you enable the communication-via-discovery
feature.
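
For reference, a minimal programmatic sketch (the port value is illustrative,
taken from your description):

    TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
    commSpi.setLocalPort(48100);           // must be reachable M1->M2 and M2->M1
    commSpi.setUsePairedConnections(true); // separate inbound/outbound sockets
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setCommunicationSpi(commSpi);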

Regards,
-- 
Ilya Kasnacheev


вт, 20 окт. 2020 г. в 21:31, zork :

> Hello,
>
> We are facing a critical issue with Ignite 2.8.
>
> We have two machines in same location running different ignite services.
> M1
> Ignite server (running on port 48500)
> Ignite client-1 (running on port 48100)
>
> M2
> Ignite client-2 (running on port 14050)
>
> Ports opened between the machines:
> M2 to M1 - 48500-48520, 48100-48200
> M1 to M2 - 14050
>
> Now, M2 is not able to connect to the Ignite cluster if I set
> usePairedConnections to true.
> If I have usePairedConnections to false in TCPCommunicationSPI then all
> works fine.
>
>
> Here is the related ignite config:
>
> [Spring XML stripped by the mail archiver. The surviving fragments show a
> TcpDiscoverySpi bean using a TcpDiscoveryVmIpFinder with the address range
> hkvaspsy-101:48500..48520, followed by a TcpCommunicationSpi bean whose
> property elements (including usePairedConnections) were lost.]
>
>
> Connection refused is noticed in the logs on ignite client-2.
>
> 15:04:06:443908|0327-01043:FFTracer_Ignite [class
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi]: {DEBUG} Action
> {Failed
> to join to address [addr=DELVM-M1/10.101.131.171:48500, recon=false,
> errs=[java.net.ConnectException: Connection refused (Connection
> refused)]]}
> Thread {tcp-client-disco-msg-worker-#21%WebCluster%}
> 15:04:06:443975|0296-01043:FFTracer_Ignite [class
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi]: {DEBUG} Action {Send
> join request [addr=DELVM-M1/10.101.131.171:48501, reconnect=false,
> locNodeId=de5d623c-2f6b-4c04-a589-e1812ed1e960]} Thread
> {tcp-client-disco-msg-worker-#21%WebCluster%}
> 15:04:06:444578|1556-01043:FFTracer_Ignite [class
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi]: {ERROR} Action
> {Exception on joining: Connection refused (Connection refused)} Thread
> {tcp-client-disco-msg-worker-#21%WebCluster%}
> java.net.ConnectException: Connection refused (Connection refused)
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:607)
> at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1545)
> at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1507)
> at org.apache.ignite.spi.discovery.tcp.ClientImpl.sendJoinRequest(ClientImpl.java:708)
> at org.apache.ignite.spi.discovery.tcp.ClientImpl.joinTopology(ClientImpl.java:603)
> at org.apache.ignite.spi.discovery.tcp.ClientImpl.access$1100(ClientImpl.java:141)
> at org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.tryJoin(ClientImpl.java:2027)
> at org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1683)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at org.apache.ignite.spi.discovery.tcp.ClientImpl$1.body(ClientImpl.java:302)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:61)
>
> Could you please tell us why it is failing when usePairedConnections is set
> to true?
> Do we also need ports 48100-48200 opened from M1 to M2?
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ZookeeperClusterNode memory leak(potential)

2020-10-19 Thread Ilya Kasnacheev
Hello!

If you can reproduce this issue reliably, can you provide logs and heap
dumps from one of the runs?

Regards,
-- 
Ilya Kasnacheev


ср, 14 окт. 2020 г. в 12:37, 38797715 <38797...@qq.com>:

> Hi,
>
> 2.8.1
> 在 2020/10/14 下午4:06, Stephen Darlington 写道:
>
> What version of Ignite are you using?
>
> On 14 Oct 2020, at 08:36, 38797715 <38797...@qq.com> wrote:
>
> Hi team,
>
> About 50 nodes (including client nodes) are discovered via ZooKeeper.
>
> After running for about 2 days, the number of ZookeeperClusterNode instances
> increases significantly (to about 1.6 million). After running for longer,
> this may cause an out-of-memory error.
>
> [two heap-usage screenshots attached to the original message]
>
>
>
>


Re: Continuous query not transactional ?

2020-10-16 Thread Ilya Kasnacheev
Hello!

I'm not sure, but I would assume that changes are visible after commit(),
though you can see those changes in any order - for example, you can see the
cache A update without the cache B update. This is for committed transactions.

For rolled-back transactions, I don't know. I expect you won't be able to see
the change as you have described, but I won't bet on it.

Regards,
-- 
Ilya Kasnacheev


чт, 15 окт. 2020 г. в 20:35, VeenaMithare :

> Hi ,
>
> This is in continuation of the below statement on this post :
>
> http://apache-ignite-users.70518.x6.nabble.com/Lag-before-records-are-visible-after-transaction-commit-tp33787p33861.html
>
> >>Continuous Query itself is not transactional and it looks like it can't
> be
> used for this at the moment. So, it gets notification before other entries
> were committed.
>
> Does this mean we could get dirty reads as updates in a continuous query?
> E.g. if the code is as below:
> 1. Start transaction
> 2. update records of cache a
> 3. update records of cache b
> 4. update records of cache c
> 5. commit
>
> If the update of cache a succeeds, but the update of cache b fails, will the
> local listener of the continuous query for 'cache a' get an update?
>
> regards,
> Veena.
>
>
> regards
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ContinuousQuery Batch updates

2020-10-16 Thread Ilya Kasnacheev
Hello!

Then you need to implement your own AffinityFunction by subclassing
RendezvousAffinityFunction.
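
A minimal sketch of what I mean (pinning every key to one partition, and
therefore one primary node, is an assumption based on your use case):

    // All keys map to the same partition, so one node is primary for all of them.
    public class SingleNodeAffinityFunction extends RendezvousAffinityFunction {
        @Override public int partition(Object key) {
            return 0;
        }
    }

    CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("MY_CACHE");
    ccfg.setAffinity(new SingleNodeAffinityFunction());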

Regards,
-- 
Ilya Kasnacheev


вт, 13 окт. 2020 г. в 13:15, ssansoy :

> Hi,
> RE: the custom affinity function, this is what we have:
>
> public class CacheLevelAffinityKeyMapper implements AffinityKeyMapper {
>
>     private final Logger LOGGER =
>         LoggerFactory.getLogger(CacheLevelAffinityKeyMapper.class);
>
>     @Override public Object affinityKey(Object key) {
>         if (key instanceof BinaryObject) {
>             BinaryObject binaryObjectKey = (BinaryObject) key;
>             BinaryType binaryType = binaryObjectKey.type();
>             LOGGER.trace("Key is {}, binary type is {}", key,
>                 binaryType.typeName());
>             // Group all entries of a cache by the key's binary type name.
>             return binaryType.typeName();
>         }
>         else {
>             LOGGER.trace("Key is {}, type is {}", key, key.getClass());
>             return key;
>         }
>     }
>
>     @Override public void reset() {
>         // No state to reset; required by the AffinityKeyMapper interface.
>     }
> }
>
> The issue was that the interface AffinityKeyMapper is deprecated in Ignite
> 2.8.1. Is this the way you would recommend supplying such a custom
> function?
> We can't use the @AffinityKeyMapped annotation because there is no java
> type
> to annotate as such (we use BinaryObjects only)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Different cache expiry policy each node..

2020-10-15 Thread Ilya Kasnacheev
Hello!

You can also use cache.withExpiryPolicy() to adjust that on the fly since
it's really a property of data.
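
For example (a sketch; the cache name and 10-minute duration are illustrative):

    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.Duration;
    import javax.cache.expiry.ModifiedExpiryPolicy;

    IgniteCache<Integer, String> cache = ignite.cache("myCache");
    // Entries written through this decorated view expire 10 minutes after
    // their last modification, regardless of the cache-wide default.
    cache.withExpiryPolicy(new ModifiedExpiryPolicy(
        new Duration(TimeUnit.MINUTES, 10))).put(1, "value");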

Regards,
-- 
Ilya Kasnacheev


ср, 14 окт. 2020 г. в 05:19, kay :

> Hello, I have 4 server nodes,
>
> and each node has a cache with a 4-minute Modified expiry policy.
>
> I restarted 1 node to change the expiry policy to 10 minutes.
>
> Does the new expiry policy apply to the other nodes, or do I have to restart
> the other 3 nodes as well?
>
> Thank you so much
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite instance fails. Unsure as to root cause

2020-10-15 Thread Ilya Kasnacheev
Hello!

Oct 12 17:49:41 nalrcsvridbq02 kernel: watchdog: BUG: soft lockup - CPU#0
stuck for 38s! [kworker/u256:0:3703404]

This is bad. Your system kernel says your CPU#0 was hanging for 38 seconds.

This is enough to trigger the failure detection timeout and kill an instance,
or at least for the node to be segmented from the cluster by the other nodes.
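
If the stalls cannot be fixed at the OS level, the timeouts live on
IgniteConfiguration (a sketch; the 60-second value is an illustration, not a
recommendation):

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setFailureDetectionTimeout(60_000);       // server nodes, default 10_000 ms
    cfg.setClientFailureDetectionTimeout(60_000); // client nodes, default 30_000 ms

Raising these only masks the lockups, though; the kernel message above is the
thing to investigate.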

Regards,
-- 
Ilya Kasnacheev


ср, 14 окт. 2020 г. в 16:00, bbellrose :

> Oct 12 17:47:39 nalrcsvridbq02 Ignite[2031634]: [17:47:39] Possible failure
> suppressed accordingly to a configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
> o.a.i.IgniteException: GridWorker [name=nio-acceptor-tcp-comm,
> igniteInstanceName=RailConnect Ignite QA Grid, finished=false,
> heartbeatTs=1602539219763]]]
> Oct 12 17:48:20 nalrcsvridbq02 Ignite[2031634]: [17:48:20] Possible failure
> suppressed accordingly to a configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
> o.a.i.IgniteException: GridWorker [name=grid-nio-worker-tcp-comm-1,
> igniteInstanceName=RailConnect Ignite QA Grid, finished=false,
> heartbeatTs=1602539260020]]]
> Oct 12 17:48:20 nalrcsvridbq02 Ignite[2031634]: [17:48:20] Possible failure
> suppressed accordingly to a configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
> o.a.i.IgniteException: GridWorker [name=ttl-cleanup-worker,
> igniteInstanceName=RailConnect Ignite QA Grid, finished=false,
> heartbeatTs=1602539260020]]]
> Oct 12 17:48:20 nalrcsvridbq02 Ignite[2031634]: [17:48:20] Possible failure
> suppressed accordingly to a configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
> o.a.i.IgniteException: GridWorker [name=db-checkpoint-thread,
> igniteInstanceName=RailConnect Ignite QA Grid, finished=false,
> heartbeatTs=1602539260006]]]
> Oct 12 17:49:00 nalrcsvridbq02 chronyd[1216]: Forward time jump detected!
> Oct 12 17:49:00 nalrcsvridbq02 chronyd[1216]: Can't synchronise: no
> selectable sources
> Oct 12 17:49:00 nalrcsvridbq02 process-agent[2039258]: 2020-10-12 17:49:00
> EDT | PROCESS | INFO | (collector.go:209 in func1) | Delivery queues:
> process[size=0, weight=0], pod[size=0, weight=0]
> Oct 12 17:49:00 nalrcsvridbq02 agent[2039257]: 2020-10-12 17:49:00 EDT |
> CORE | ERROR | (pkg/forwarder/worker.go:178 in process) | Error while
> processing transaction: error while sending transaction, rescheduling it:
> Post
>
> https://7-22-1-app.agent.datadoghq.com/api/v1/series?api_key=*44602
> :
> net/http: request canceled (Client.Timeout exceeded while awaiting headers)
> Oct 12 17:49:00 nalrcsvridbq02 trace-agent[2039259]: 2020-10-12 17:49:00
> EDT
> | TRACE | INFO | (pkg/trace/info/stats.go:101 in LogStats) | No data
> received
> Oct 12 17:49:00 nalrcsvridbq02 Ignite[2031634]: [17:49:00] Topology
> snapshot
> [ver=21, locNode=2eca41b3, servers=1, clients=0, state=ACTIVE, CPUs=2,
> offheap=2.0GB, heap=0.25GB]
> Oct 12 17:49:00 nalrcsvridbq02 Ignite[2031634]: [17:49:00]   ^-- Baseline
> [id=0, size=2, online=1, offline=1]
> Oct 12 17:49:00 nalrcsvridbq02 Ignite[2031634]: [17:49:00] (err) Failed to
> execute compound future reducer: GridNearTxFinishFuture
> [futId=7a28b630571-b3eac955-0171-4b45-b048-84653e88427e, tx=GridNearTxLocal
> [mappings=IgniteTxMappingsSingleImpl [mapping=GridDistributedTxMapping
> [entries=LinkedHashSet [IgniteTxEntry [txKey=IgniteTxKey
> [key=KeyCacheObject
> [hasValBytes=true], cacheId=-27866919], val=BinaryObject
> [idHash=1523169004,
> hash=1743117496][op=CREATE, val=], prevVal=[op=NOOP, val=null],
> oldVal=[op=NOOP, val=null], entryProcessorsCol=null, ttl=-1,
> conflictExpireTime=-1, conflictVer=null, explicitVer=null, dhtVer=null,
> filters=CacheEntryPredicate[] [], filtersPassed=false, filtersSet=true,
> entry=GridDhtDetachedCacheEntry [super=GridDistributedCacheEntry
> [super=GridCacheMapEntry [key=KeyCacheObject [hasValBytes=true], val=null,
> ver=GridCacheVersion [topVer=0,

Re: java.lang.IllegalStateException: Getting affinity for too old topology version that is already out of history

2020-10-15 Thread Ilya Kasnacheev
Hello!

Why not, if it fixes the issue for you. Then you can try 2.9.0 once it is
out.

Unfortunately, a small snippet of server log is not sufficient, complete
log is needed. I can already spot some communication problems, though.

Regards,
-- 
Ilya Kasnacheev


чт, 15 окт. 2020 г. в 09:50, swara :

> Hello
>
> We have been using this type of cache design for the last 3 years with
> version 2.5.6, and we never had any issue.
> Since we upgraded to 2.8.1 we are getting this type of error.
> Should we downgrade to 2.8.0 or 2.5.6? Please suggest.
>
> Thank You
> Swara
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: 2.8.1 : EVT_NODE_RECONNECTED, EVT_NODE_SEGMENTED on the client side

2020-10-13 Thread Ilya Kasnacheev
Hello!

You can just set clientReconnectEnabled to false. Then, if the node drops out
of topology, just start a new node to make sure you have a fresh start.

Not sure about other events. Maybe handle ACTIVATED/DEACTIVATED?
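
A sketch of both pieces (assuming an IgniteConfiguration cfg and a started
Ignite ignite; the restart logic is a hypothetical stub):

    TcpDiscoverySpi disco = new TcpDiscoverySpi();
    disco.setClientReconnectDisabled(true); // i.e. clientReconnectEnabled = false
    cfg.setDiscoverySpi(disco);

    // May require the event types to be enabled via cfg.setIncludeEventTypes(...).
    ignite.events().localListen(evt -> {
        restartNodeAndRedeployQueries(); // hypothetical: fresh node + re-deploy CQs
        return true;                     // keep listening
    }, EventType.EVT_NODE_SEGMENTED, EventType.EVT_CLIENT_NODE_RECONNECTED);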

Regards,
-- 
Ilya Kasnacheev


вт, 13 окт. 2020 г. в 14:49, VeenaMithare :

> Thanks Ilya,
>
> >>I don't think so. Node can reconnect to a *different* cluster, or it may
> be disconnected for a while and return to the same cluster. For client
> node,
> segmented/disconnected difference is not relevant. Neither it is for server
> node, in fact.
>
> In that case, we will have the same handler for segmented and reconnected
> events. The handler will restart ignite and deploy any continuous queries
> as
> needed.
> Thank you for this guidance.
>
> Does the server or the client need to have a handler for any other 'cluster
> specific' events ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: 2.8.1 : EVT_NODE_RECONNECTED, EVT_NODE_SEGMENTED on the client side

2020-10-13 Thread Ilya Kasnacheev
Hello!

I don't think so. Node can reconnect to a *different* cluster, or it may be
disconnected for a while and return to the same cluster. For client node,
segmented/disconnected difference is not relevant. Neither it is for server
node, in fact.

Regards,
-- 
Ilya Kasnacheev


вт, 13 окт. 2020 г. в 10:18, VeenaMithare :

> Hi Team,
>
>
> In terms of the client, do we need to have the below 2 EVENT handlers? :
> 1. EVT_NODE_SEGMENTED - to restart the node and deploy and continuous
> queries if needed.
> 2. EVT_CLIENT_NODE_RECONNECTED - to just deploy the continuous queries ( no
> restart of ignite needed, since the node is already connected )
>
> regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ContinuousQuery Batch updates

2020-10-13 Thread Ilya Kasnacheev
Hello!

I think you may need to write a custom affinity function for your use case,
which will confine every cache to a single primary node.

Regards,
-- 
Ilya Kasnacheev


вт, 13 окт. 2020 г. в 11:18, ssansoy :

> Hi, thanks for the reply again!
>
> 1. @AffinityKeyMapped is not deprecated as you mentioned, but
> AffinityKeyMapper is (it seems AffinityKeyMapper is used in places where the
> annotation cannot be - e.g. our case). If we use the AFFINITY_KEY clause on
> the table definition, we don't want to select a field of the table as the
> key - instead we want to use the cache name. Can this be a string literal
> here, e.g. AFFINITY_KEY='MY_CACHE', so the same affinity key is generated
> for every entry in the table?
>
> 2. "If it's the same node for all keys, all processing will happen on that
> node" - This may be ok in our case. Are there any issues that may affect
> "correctness" of the data, as opposed to performance of the processing?
>
> 3. "It depends on what you are trying to do." - we just want to be able to
> write e.g 2 records in a transaction via some writing process, and be able
> to read them somewhere else as soon as they are written to the cluster so
> they can be used.
> We can probably write some custom logic to wait for all entries to arrive
> at
> the client, and then batch them up - possibly by versioning them or
> maintaining some other state about the transaction in a separate cache on
> the cluster - but were hoping there would be some way of doing this out of
> the box with a distributed cache solution - e.g. 2 records are written in a
> transaction, and the client is updated with those 2 records in one
> callback.
> The docs for ContinuousQueryWithTransformer.EventListener imply this kind
> of
> thing should be possible (e.g. "called after one or more entries have been
> updated" and the onUpdated method receives an iterable:
>
> "public interface EventListener {
> /**
>  * Called after one or more entries have been updated.
>  *
>  * @param events The entries just updated that transformed with
> remote transformer of {@link ContinuousQueryWithTransformer}.
>  */
> void onUpdated(Iterable events);
> }"
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: java.lang.IllegalStateException: Getting affinity for too old topology version that is already out of history

2020-10-13 Thread Ilya Kasnacheev
Hello!

How often do you create or drop tables? Minor version of 255 suggests a lot
of operations.

Can you share logs for your nodes prior to that error?

Regards,
-- 
Ilya Kasnacheev


пн, 12 окт. 2020 г. в 16:04, swara :

> one per table, whenever the table definition/data changes
> hourly/daily/weekly.
>
> Please suggest if there is any other way to do joins without using caches.
>
> Thank You
> Swara
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ContinuousQuery Batch updates

2020-10-12 Thread Ilya Kasnacheev
Hello!

1. I don't think that AffinityKeyMapped is deprecated, but there are cases
when it is ignored :(
You can use the affinity_key clause in CREATE TABLE ... WITH (see the sketch
after this list).
2. If it's the same node for all keys, all processing will happen on that
node.
3. It depends on what you are trying to do.
4. I don't think you can since you're not supposed to.
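
A sketch of item 1 (table and column names are made up; note that
affinity_key has to name a column of the primary key, so a string literal
like 'MY_CACHE' would not work there):

    // Executed through any cache handle; rows are then colocated by CITY_ID.
    cache.query(new SqlFieldsQuery(
        "CREATE TABLE person (id INT, city_id INT, name VARCHAR, " +
        "PRIMARY KEY (id, city_id)) " +
        "WITH \"template=MY_TEMPLATE, affinity_key=city_id, " +
        "value_type=SOME_TABLE_TYPE, key_type=SOME_TABLE_KEY\"")).getAll();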

Regards,
-- 
Ilya Kasnacheev


пн, 12 окт. 2020 г. в 14:04, ssansoy :

> Thanks, this is what I have ended up doing. However, it looks like
> AffinityKeyMapper is deprecated?
> I am adding an implementation of this (which returns the binary typename of
> the key BinaryObject) - and this does seem to have the desired effect (e.g.
> all keys with the same typename are marked as primary on a single node). I
> set this implementing class on the cache configuration.
>
> I don't think I can use the suggested @AffinityKeyMapped annotation because
> we don't have a type representing the key that we can add this to. Our
> caches are created via table creation DDL with:
>
> WITH "TEMPLATE=MY_TEMPLATE,value_type=SOME_TABLE_TYPE,
> key_type=SOME_TABLE_KEY"
>
> The value and keys we operate on are all BinaryObjects.
>
> In terms of design, we have to have some sort of expectation of
> transactional safety. E.g. a user can update 2 records in a cache (e.g. some
> calculation inputs) which both need to be seen at the same time in order to
> execute logic as they are updated.
>
> Could you please advise if:
>
> 1. There is an alternative to the deprecated AffinityKeyMapper we should be
> using instead?
> 2. What side effects there might be to having all keys marked as primary on
> a single node (even though the caches are marked as REPLICATED).
> 3. If there is any other more robust way of achieving this?
> 4. How can we tune the localListen thread to be single-threaded for a
> particular query? We want to eliminate any chance of a localListen being hit
> in parallel for updates to the same cache (ideally without implementing our
> own synchronization). This seemed to happen in practice for singular updates
> on the cluster happening concurrently, but after setting the pageSize and
> timeInterval, these 3 updates from the 3 nodes seemed to come in
> concurrently.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: enable sqlOffloadingEnabled sql avg fun has error

2020-10-12 Thread Ilya Kasnacheev
Hello!

Please use Apache Ignite JIRA to file tickets:
https://issues.apache.org/jira/projects/IGNITE/issues

Does this issue reproduce on any Apache Ignite versions? Please try some
2.9 nightly build:
https://ci.ignite.apache.org/buildConfiguration/Releases_NightlyRelease_RunApacheIgniteNightlyRelease?branch=ignite-2.9=overview=builds

Regards,
-- 
Ilya Kasnacheev


пн, 12 окт. 2020 г. в 19:02, 张立鑫 :

> You can see https://github.com/gridgain/gridgain/issues/1490
>
> 2020-10-13 0:00 GMT+08:00, LixinZhang :
> > sqlOffloadingEnabled : true
> >
> > create table tests(
> > ID int,
> > CREATETIME TIMESTAMP(29,9),
> > PRIMARY KEY (ID)
> > ) WITH "template=partitioned, CACHE_NAME=tests";
> >
> > select AVG(EXTRACT(MILLISECOND from CREATETIME))
> > from tests;
> >
> > Failed to run reduce query locally. General error:
> > "java.lang.ArrayIndexOutOfBoundsException: 1"; SQL statement:
> > SELECT
> > CAST((CAST(SUM(("__C0_0" * "__C0_1")) AS BIGINT) / CAST(SUM("__C0_1") AS
> > BIGINT)) AS BIGINT) AS "__C0_0"
> > FROM "PUBLIC"."__T0" [5-199]
> > javax.cache.CacheException: Failed to run reduce query locally. General
> > error: "java.lang.ArrayIndexOutOfBoundsException: 1"; SQL statement:
> > SELECT
> > CAST((CAST(SUM(("__C0_0" * "__C0_1")) AS BIGINT) / CAST(SUM("__C0_1") AS
> > BIGINT)) AS BIGINT) AS "__C0_0"
> > FROM "PUBLIC"."__T0" [5-199]
> >
> >
> >
> > I use the GridGain Community Edition, but I believe all versions have this
> > problem.
> > I tested it on GridGain versions 8.7.15 ~ 8.7.27
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >
>


Re: Apache Ignite Rest End points

2020-10-12 Thread Ilya Kasnacheev
Hello!

1) Both the 3rd-party and the native persistent store are always flushed (not
counting writeBehind). There's no option to force a flush on either one.
2) This sounds like a task for a K8s operator (a separate tool), not the
Apache Ignite API. You can check out GridGain's operator progress.

Regards,
-- 
Ilya Kasnacheev


пн, 12 окт. 2020 г. в 15:02, Wasim Bari :

> Hi All,
>
> Kindly let me know if Ignite provides the following REST endpoints:
> 1) To force flush data from cache to persistent store.
> 2) In kubernetes environment, to scale down/scale up number of baseline
> nodes.
>
> Regards
> Wasim Bari
>
>
>
> -
> Wasim Bari
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: java.lang.IllegalStateException: Getting affinity for too old topology version that is already out of history

2020-10-12 Thread Ilya Kasnacheev
Hello!

How many caches do you have? How often are they created?

Regards,
-- 
Ilya Kasnacheev


пн, 12 окт. 2020 г. в 13:35, swara :

> We are creating caches for each table and joining tables.
>
> Thank You
> Swara
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: java.lang.IllegalStateException: Getting affinity for too old topology version that is already out of history

2020-10-12 Thread Ilya Kasnacheev
Hello!

Looks like you have an SQL query which executes for ages and ran out of
topology history.

That, or some other weird issue (you have minorTopVer of 250+ which is
unusual). Can you provide logs for your server nodes?

Do you have any short-lived caches which are created and destroyed
frequently? I advise against it, caches should be long-lived for optimal
performance and decreased risk of running into issues like this one.

Regards,
-- 
Ilya Kasnacheev


пн, 12 окт. 2020 г. в 09:15, swara :

> Hi
>
> Getting this error run time.
>
>
> ERROR IgniteInMemoryDAO:119 - [EVENT FAILURE Anonymous:null@unknown ->
> /DefaultName/com.inmemory.IgniteInMemoryDAO] Exception :
> java.lang.IllegalStateException: Getting affinity for too old topology
> version that is already out of history [locNode=TcpDiscoveryNode
> [id=cfb6e618-4a12-43db-b14a-68efdea05a8c,
> consistentId=cfb6e618-4a12-43db-b14a-68efdea05a8c, addrs=ArrayList [],
> sockAddrs=HashSet [], discPort=0, order=78, intOrder=0,
> lastExchangeTime=1601918903890, loc=true, ver=2.8.1#20200521-sha1:86422096,
> isClient=true], grp=SQL_PUBLIC_COLL_1602059401160,
> topVer=AffinityTopologyVersion [topVer=78, minorTopVer=254],
> lastAffChangeTopVer=AffinityTopologyVersion [topVer=78, minorTopVer=254],
> head=AffinityTopologyVersion [topVer=78, minorTopVer=255],
> history=[AffinityTopologyVersion [topVer=78, minorTopVer=255]]]
> at
>
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:817)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.assignment(GridCacheAffinityManager.java:255)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.assignment(GridCacheAffinityManager.java:237)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.ReducePartitionMapper.stableDataNodesMap(ReducePartitionMapper.java:268)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.ReducePartitionMapper.stableDataNodes(ReducePartitionMapper.java:212)
>
>
> Thank You
> Swara Yerukola.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ContinuousQuery Batch updates

2020-10-12 Thread Ilya Kasnacheev
Hello!

In this case you could use an affinity function which will put all these
entries on the same node, but it will mean that you no longer use any
distribution benefits.

I don't think it is a good design if you expect local listener to get a tx
worth of entries at once. Listener should ideally consider entries in
isolation.

Regards,
-- 
Ilya Kasnacheev


чт, 8 окт. 2020 г. в 19:06, ssansoy :

> Hi,
>
> We have an app that writes N records to the cluster (REPLICATED) - e.g.
> 10,000 records, in one transaction.
>
> We also have an app that issues a continuous query against the cluster,
> listening for updates to this cache.
> We'd like the app to receive all 10,000 records in one call into the
> localListener.
>
> We are observing that the continuous query only receives records one at a
> time.
> I have tried playing around with setPageSize and setTimeInterval, e.g.
> pageSize=12,000 timeInterval=10,000
>
> E.g. the query waits either 10 seconds for updates to take place, or until
> 12,000 updates have occurred. This does seem to be an improvement - but now
> rather than 10,000 calls to local listen, we now have 3 calls, e.g. for
> quantities 2000, 4500, 3500 for example. These quantities for each callback
> are consistently the same upon retries.
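>
> For reference, this is roughly the configuration just described (a sketch;
> the listener body is illustrative):
>
>     ContinuousQuery<BinaryObject, BinaryObject> qry = new ContinuousQuery<>();
>     qry.setPageSize(12_000);     // flush the server-side buffer at 12k entries...
>     qry.setTimeInterval(10_000); // ...or after 10 s, whichever comes first
>     qry.setLocalListener(evts ->
>         evts.forEach(e -> process(e.getKey(), e.getValue()))); // process() is ours
>     cache.withKeepBinary().query(qry); // keep the returned cursor open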
> Are we observing this behaviour because our cache keys are designated as
> "primary" on different nodes of the cluster? So we are effectively getting
> 1
> localListen callback per node with the number of entries that are marked as
> "primary" on that node?
>
> This is very problematic for us unfortunately, as we are migrating our apps
> to Ignite, and a large part of the app processing expects all updates to
> arrive in one go so there is a consistent view of the write that has
> occurred. Is there anything we can do here to get this behaviour? ideally
> without even having to have a timeout and introducing extra delay?
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Binary memory restoration

2020-10-09 Thread Ilya Kasnacheev
Hello!

You do not need any optimizations here since the time between checkpoints
is configurable.

More checkpoints - faster recovery but more data needs to be written to
disk.
Fewer checkpoints - recovery will take more time but overall performance is
somewhat higher.
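
The knob, for reference (a sketch, assuming an IgniteConfiguration cfg;
180 seconds is the default):

    DataStorageConfiguration ds = new DataStorageConfiguration();
    ds.setCheckpointFrequency(180_000); // ms between checkpoints; lower = faster recovery
    cfg.setDataStorageConfiguration(ds);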

Regards,
-- 
Ilya Kasnacheev


пт, 9 окт. 2020 г. в 07:05, Raymond Wilson :

> Ilya,
>
> Does 2.9 have specific optimizations for checkpoints?
>
> Thanks,
> Raymond.
>
> On Fri, Oct 9, 2020 at 1:23 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I think the real saver is in decreasing amount of time between
>> checkpoints.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> вт, 6 окт. 2020 г. в 02:10, Raymond Wilson :
>>
>>> Thanks for the thoughts Ilya and Vladimir.
>>>
>>> We'll do a comparison with 2.9 when it releases to see if that makes any
>>> difference.
>>>
>>> One of the advantages with persistent storage is that it is effectively
>>> 'instant start'. Our WAL size is around 5Gb, perhaps this should be
>>> decreased to reduce system start-up time?
>>>
>>> Thanks,
>>> Raymond.
>>>
>>> On Wed, Sep 30, 2020 at 7:31 AM Vladimir Pligin 
>>> wrote:
>>>
>>>> It's possible that it happens because of
>>>> https://issues.apache.org/jira/browse/IGNITE-13068.
>>>> We need to scan the entire SQL primary index during startup in case you
>>>> have
>>>> at least on query entity configured.
>>>> As far as I can see it's going to be a part of the Ignite 2.9 release.
>>>>
>>>>
>>>>
>>>> --
>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>
>>>
>>>
>>> --
>>> <http://www.trimble.com/>
>>> Raymond Wilson
>>> Solution Architect, Civil Construction Software Systems (CCSS)
>>> 11 Birmingham Drive | Christchurch, New Zealand
>>> +64-21-2013317 Mobile
>>> raymond_wil...@trimble.com
>>>
>>>
>>> <https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
>>>
>>
>
> --
> <http://www.trimble.com/>
> Raymond Wilson
> Solution Architect, Civil Construction Software Systems (CCSS)
> 11 Birmingham Drive | Christchurch, New Zealand
> +64-21-2013317 Mobile
> raymond_wil...@trimble.com
>
>
> <https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
>


Re: [External]Re: Failed to process selector key frequently at client side

2020-10-09 Thread Ilya Kasnacheev
Hello!

I guess you will have to set it on all nodes in the cluster, including
clients. Also, 60s may be too long - maybe your network will start closing
connections earlier than that.
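
For example (a sketch, assuming an IgniteConfiguration cfg; 25 s follows the
earlier suggestion and should stay below your network's idle cutoff):

    TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
    commSpi.setIdleConnectionTimeout(25_000); // ms; Ignite then closes idle connections first
    cfg.setCommunicationSpi(commSpi);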

Regards,
-- 
Ilya Kasnacheev


пт, 9 окт. 2020 г. в 15:55, Kamlesh Joshi :

> Hi Ilya,
>
>
>
> We can see the idleConnTimeout value is set at the server level, as below.
> Do we have to set this value on both the client and the server?
>
>
>
>
>
> TcpCommunicationSpi [connectGate=null,
> connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@560348e6,
> enableForcibleNodeKill=false, enableTroubleshootingLog=false, locAddr=null,
> locHost=null, locPort=47100, locPortRange=100, shmemPort=-1,
> directBuf=true, directSndBuf=false, idleConnTimeout=60,
> connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
> sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=null,
> shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
> tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32,
> unackedMsgsBufSize=0, sockWriteTimeout=6, boundTcpPort=-1,
> boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
> ctxInitLatch=java.util.concurrent.CountDownLatch@1df8b5b8[Count = 1],
> stopping=false],
>
>
>
>
>
>
>
> Thanks and Regards,
>
> Kamlesh Joshi
>
>
>
> From: Ilya Kasnacheev 
> Sent: 09 October 2020 17:14
> To: user@ignite.apache.org
> Subject: [External]Re: Failed to process selector key frequently at
> client side
>
>
>
>
> Hello!
>
>
>
> It seems that the network that you are using will close idle connections.
> You can try sidestepping it by specifying
> TcpCommunicationSpi.idleConnectionTimeout to some value such as 25000 ms.
>
>
>
> In this case Ignite will be first to close these connections explicitly.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> пт, 9 окт. 2020 г. в 09:37, Kamlesh Joshi :
>
> Hi Igniters,
>
>
>
> We have three server nodes; there is a firewall between the clients and servers.
>
>
>
> We opened the 47500- and 47100-series ports, but the errors below are still
> printed frequently on the client side. The client application is able to get
> data from the cluster; could you please help us avoid these errors on the
> client side?
>
>
>
> We are using Ignite 2.7.0 version.
>
>
>
> [2020-10-08T00:26:23,389][ERROR][grid-nio-worker-tcp-comm-0-#24%JBPLCustomer%][TcpCommunicationSpi]
> Failed to process selector key [ses=GridSelectorNioSessionImpl
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=0,
> bytesRcvd=84608, bytesSent=1725593, bytesRcvd0=0, bytesSent0=0,
> select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-0,
> igniteInstanceName=JBPLCustomer, finished=false, heartbeatTs=1602096980980,
> hashCode=1228592538, interrupted=false,
> runner=grid-nio-worker-tcp-comm-0-#24%JBPLCustomer%]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> inRecovery=GridNioRecoveryDescriptor [acked=48, resendCnt=0, rcvCnt=58,
> sentCnt=48, reserved=true, lastAck=58, nodeLeft=false,
> node=TcpDiscoveryNode [id=ef6723ee-f70d-4515-89b9-7a3e85ab5072,
> addrs=[0:0:0:0:0:0:0:1%lo, clientIP, 127.0.0.1,
> 2405:200:a60:fd04:20c:29ff:fe63:629f%eth0], sockAddrs=[/clientIP:0,
> 0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
> 2405:200:a60:fd04:20c:29ff:fe63:629f%eth0:0], discPort=0, order=72,
> intOrder=64, lastExchangeTime=1602089183465, loc=false,
> ver=2.7.0#20181201-sha1:256ae401, isClient=true], connected=true,
> connectCnt=0, queueLimit=4096, reserveCnt=1, pairedConnections=false],
> outRecovery=GridNioRecoveryDescriptor [acked=48, resendCnt=0, rcvCnt=58,
> sentCnt=48, reserved=true, lastAck=58, nodeLeft=false,
> node=TcpDiscoveryNode [id=ef6723ee-f70d-4515-89b9-7a3e85ab5072,
> addrs=[0:0:0:0:0:0:0:1%lo, clientIP, 127.0.0.1,
> 2405:200:a60:fd04:20c:29ff:fe63:629f%eth0], sockAddrs=[/clientIP:0,
> 0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
> 2405:200:a60:fd04:20c:29ff:fe63:629f%eth0:0], discPort=0, order=72,
> intOrder=64, lastExchangeTime=1602089183465, loc=false,
> ver=2.7.0#20181201-sha1:256ae401, isClient=true], connected=
