Re: Professional Ignite Course

2019-09-25 Thread Sudhir Patil
Hi Denis,

It's great to hear about a course on Ignite.
Will there be one addressing Ignite with .NET as well?

Regards,
Sudhir

On Thursday, September 26, 2019, Denis Magda  wrote:

> Igniters,
>
> Recently, I've come across a professionally crafted course about Apache
> Ignite. It will be interesting for some of you:
> https://www.pluralsight.com/courses/apache-ignite-getting-started
>
> Edward, thanks for the course! I've added it to Ignite Doc's menu right
> below the book.
>
> -
> Denis
>


-- 
Thanks & Regards,
Sudhir Patil,
+91 9881095647.


Re: Professional Ignite Course

2019-09-25 Thread sri hari kali charan Tummala
I went through the course and asked Pluralsight to add more. In the next
course I would like to see Ignite installation on AWS and Spark + Ignite
integration: real-time analytics with Spark + Ignite.


On Wednesday, September 25, 2019, Denis Magda  wrote:

> Igniters,
>
> Recently, I've come across a professionally crafted course about Apache
> Ignite. It will be interesting for some of you:
> https://www.pluralsight.com/courses/apache-ignite-getting-started
>
> Edward, thanks for the course! I've added it to Ignite Doc's menu right
> below the book.
>
> -
> Denis
>


-- 
Thanks & Regards
Sri Tummala


RE: Exception during exception handling

2019-09-25 Thread Andrey Davydov
I don’t know how to fix the deserialization; it seems very internal. But I can
do something about the exception handling, though only on the weekend.

Andrey.

From: Denis Magda
Sent: September 26, 2019, 0:30
To: user@ignite.apache.org
Cc: Ilya Kasnacheev
Subject: Re: Exception during exception handling

Andrey,

Would you mind sending a pull-request if you have a clear understanding of how 
this needs to be fixed?


-
Denis





Re: Exception during exception handling

2019-09-25 Thread Denis Magda
Andrey,

Would you mind sending a pull-request if you have a clear understanding of
how this needs to be fixed?

-
Denis




RE: Exception during exception handling

2019-09-25 Thread Andrey Davydov
There are two root causes.

The first one is deserialization of a binary object for “toString” when it is
impossible to deserialize (like IGNITE-12178). While googling the current
problem, I saw a Stack Overflow discussion about failures on debug logging.

The second, which may be more important, is unsafe exception handling:
information about the original error that caused the transaction to fail should
never be lost. It seems a “try/finally” block should be implemented inside the
“catch” section, because we can get an exception during exception handling
anyway.

Andrey.

From: Denis Magda
Sent: September 25, 2019, 22:36
To: user@ignite.apache.org; Ilya Kasnacheev
Subject: Re: Exception during exception handling

Hello Andrey and thanks for reporting!

This reminds me of this issue that has a similar stack trace:
https://issues.apache.org/jira/browse/IGNITE-12178

Ilya, looks like the root cause is absolutely the same, doesn't it?


-
Denis





Professional Ignite Course

2019-09-25 Thread Denis Magda
Igniters,

Recently, I've come across a professionally crafted course about Apache
Ignite. It will be interesting for some of you:
https://www.pluralsight.com/courses/apache-ignite-getting-started

Edward, thanks for the course! I've added it to Ignite Doc's menu right
below the book.

-
Denis


Re: Service Registry

2019-09-25 Thread Denis Magda
Hi,

Please check these documentation pages and let us know if you need any
clarification:
https://apacheignite.readme.io/docs/ignite-service
https://apacheignite.readme.io/docs/kubernetes-ip-finder
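For reference, the Kubernetes IP finder from those pages is wired roughly like
this — a sketch only; the namespace and service name are example values you
would replace with your own:

```xml
<!-- Sketch: discovery via the Kubernetes IP finder. Names are example values. -->
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                <!-- The Kubernetes service that Ignite pods register their IPs with. -->
                <property name="namespace" value="ignite"/>
                <property name="serviceName" value="ignite-service"/>
            </bean>
        </property>
    </bean>
</property>
```

External applications then discover the pods through that same Kubernetes
service rather than through a separate registry.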

-
Denis


On Sat, Sep 21, 2019 at 6:27 PM narges saleh  wrote:

> Hi All,
>
> I understand that I need to set up an IP finder service for a new Ignite
> POD and external applications to be able to locate Ignite PODs. How does
> this service interact with the Kubernetes service registry? How does an
> external application locate an Ignite (micro)service?
>
> thanks
> Narges
>


Re: Too many file descriptors on ignite node

2019-09-25 Thread Denis Magda
Hi,

This sounds more like a need for configuration changes. You might need to
adjust the number of WAL segments and tweak the WAL-archive-related parameters.
Please check the discussion below:
http://apache-ignite-users.70518.x6.nabble.com/WAL-size-control-td18323.html

and the WAL parameters of the DataStorageConfiguration class.
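A sketch of what those knobs look like in the Spring XML configuration — the
paths and sizes below are made-up examples, not recommendations:

```xml
<!-- Sketch: WAL-related properties of DataStorageConfiguration. -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Keep the WAL and its archive on a separate (fast) device. -->
        <property name="walPath" value="/mnt/wal/wal"/>
        <property name="walArchivePath" value="/mnt/wal/wal-archive"/>
        <!-- Number of active WAL segments and a cap on the archive size. -->
        <property name="walSegments" value="10"/>
        <property name="maxWalArchiveSize" value="#{4L * 1024 * 1024 * 1024}"/>
    </bean>
</property>
```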

-
Denis


On Mon, Sep 23, 2019 at 4:29 AM ihalilaltun 
wrote:

> Hi Igniters, it is me again :)
>
> We are having a weird behavior on some of the cluster nodes. The cluster uses
> native persistence with MMAP disabled. Some clusters have too many WAL files:
> even though they are already deleted, for some reason they are still persisted
> on the disk. I do not have any logs from the cluster or related machines, but
> I have screenshots and the file descriptor list;
> here is the ss from related node;
> Screen_Shot_2019-09-23_at_14.png
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2515/Screen_Shot_2019-09-23_at_14.png>
>
> here is the file descriptor list;
> open_files.zip
> 
>
>
> I am not sure if this is related to
> https://issues.apache.org/jira/browse/IGNITE-12127; I hope it is, since we
> are planning to upgrade all clusters before the end of this week. If it is not
> related to IGNITE-12127, then any comments on how this is possible would be
> appreciated.
>
> cheers
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Application for Ignite Contributor

2019-09-25 Thread Denis Magda
Hi Deepak and welcome!

Sorry for the late response. You sent this message to the user list, so I
forwarded it to the dev list.

I added you to the Ignite contributors list in JIRA. Feel free to take over
any ticket you like. Do you have any specific interest (SQL, ML, caching,
etc.)?

-
Denis


On Sat, Sep 21, 2019 at 2:35 AM Deepak Nigam 
wrote:

> Hello all,
>
> Please consider my application to become an Ignite Contributor. Here are
> the details:
>
> Full Name: Deepak Nigam
> ASF User Name: deepaknigam
> Email Address: deepaknigam.1...@gmail.com
>
> I am already a committer in the Apache OFBiz project, hence already signed
> ICLA.
>
> Regards,
> --
> Deepak Nigam
>


Re: Ignite transaction recovery on third-party persistence

2019-09-25 Thread Denis Magda
SQL APIs are not transactional yet. You need to use key-value calls within
Ignite transactions if ACID guarantees are required.

SQL will become fully transactional once MVCC becomes ready for the GA
release.
https://apacheignite.readme.io/docs/multiversion-concurrency-control

-
Denis


On Wed, Sep 25, 2019 at 5:28 AM bijunathg  wrote:

> Thanks Denis & Ilya!
> We found an issue when the transaction coordinator exits after successfully
> committing to the third-party store but before propagating the commit to
> the Ignite server nodes in the cluster. We observe two behaviors:
> 1. *If the committed transaction had insert() statements, those entries are
> not available in the cache.*
> 2. If the committed transaction had update() statements, those updated
> entries are correctly available in the cache.
>
> Any explanation for this behavior?
>
> Note: Our CacheAtomicityMode is set as TRANSACTIONAL and
> CacheWriteSynchronizationMode is set as FULL_SYNC. Also our application
> pre-loads all entries in cache at startup and we serve SQL queries from
> cache.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Exception during exception handling

2019-09-25 Thread Denis Magda
Hello Andrey and thanks for reporting!

This reminds me of this issue that has a similar stack trace:
https://issues.apache.org/jira/browse/IGNITE-12178

Ilya, looks like the root cause is absolutely the same, doesn't it?

-
Denis




Exception during exception handling

2019-09-25 Thread Andrey Davydov
In Ignite 2.7.5 I got the following exception:

org.apache.ignite.IgniteException: Failed to create string representation of 
binary object.
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1022)
 ~[ignite-core-2.7.5.jar:2.7.5]
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:762)
 ~[ignite-core-2.7.5.jar:2.7.5]
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:710)
 ~[ignite-core-2.7.5.jar:2.7.5]
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.toString(IgniteTxLocalAdapter.java:1645)
 ~[ignite-core-2.7.5.jar:2.7.5]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.toString(GridDhtTxLocalAdapter.java:944)
 ~[ignite-core-2.7.5.jar:2.7.5]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.toString(GridDhtTxLocal.java:650)
 ~[ignite-core-2.7.5.jar:2.7.5]
at java.lang.String.valueOf(String.java:2994) ~[?:1.8.0_222]
at java.lang.StringBuilder.append(StringBuilder.java:131) ~[?:1.8.0_222]
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:942)
 ~[ignite-core-2.7.5.jar:2.7.5]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.localFinish(GridDhtTxLocalAdapter.java:796)
 ~[ignite-core-2.7.5.jar:2.7.5]….

As I found in the sources, it was an error during the handling of another
error, so this error masks the real problem.
See org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter,
lines 934-944:

catch (Throwable ex) {
    // We are about to initiate transaction rollback when tx has started to committing.
    // Need to remove version from committed list.
    cctx.tm().removeCommittedTx(this);

    boolean isNodeStopping = X.hasCause(ex, NodeStoppingException.class);
    boolean hasInvalidEnvironmentIssue = X.hasCause(ex, InvalidEnvironmentException.class);

    IgniteCheckedException err0 = new IgniteTxHeuristicCheckedException(
        "Failed to locally write to cache (all transaction entries will be invalidated, " +
        "however there was a window when entries for this transaction were visible to " +
        "others): " + this, ex);

The exception occurs during IgniteTxHeuristicCheckedException creation, when
"this" is appended to the message. So we get an unhandled error and a server
halt instead of an IgniteTxHeuristicCheckedException and the rollback logic.

And the main issue: we lose all information about the original problem.
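The pattern Andrey describes can be sketched as follows. This is a hedged
illustration, not the actual Ignite code: all class and method names here are
hypothetical stand-ins showing how guarding the message construction keeps the
original error from being masked.

```java
// Sketch only -- not the actual Ignite fix. Demonstrates guarding message
// construction so a failure inside toString() (e.g. an undeserializable
// binary object, as in IGNITE-12178) cannot mask the original commit error.
public class SafeExceptionHandling {
    /** Stand-in for a transaction whose toString() fails, like the binary object case. */
    static class UnprintableTx {
        @Override public String toString() {
            throw new IllegalStateException("cannot deserialize binary object");
        }
    }

    /** Builds the wrapping exception; the origin error always survives as the cause. */
    static RuntimeException heuristicException(Throwable origin, Object tx) {
        String txInfo;
        try {
            txInfo = String.valueOf(tx); // may itself throw inside toString()
        }
        catch (Throwable t) {
            // Fall back to a safe description instead of propagating the new error.
            txInfo = "<failed to render tx: " + t.getClass().getSimpleName() + ">";
        }
        return new RuntimeException("Failed to locally write to cache: " + txInfo, origin);
    }

    public static void main(String[] args) {
        Throwable origin = new RuntimeException("original commit failure");
        RuntimeException e = heuristicException(origin, new UnprintableTx());

        // The original failure is preserved even though toString() blew up.
        System.out.println(e.getMessage());
        System.out.println("cause: " + e.getCause().getMessage());
    }
}
```

With the unguarded version, the IllegalStateException from toString() would
escape and the original commit failure would be lost.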

Andrey.



Re: Failed to read magic header (too few bytes received)

2019-09-25 Thread Marco Bernagozzi
Update 2:

Digging deeper into the logs, the issue seems to be:

[tcp-disco-ip-finder-cleaner-#5] ERROR
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Failed to clean IP
finder up.
class org.apache.ignite.spi.IgniteSpiException: Failed to list objects in
the bucket: ignite-configurations-production
at
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.getRegisteredAddresses(TcpDiscoveryS3IpFinder.java:192)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.cleanIpFinder(ServerImpl.java:1998)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.body(ServerImpl.java:1973)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: com.amazonaws.SdkClientException: Failed to sanitize XML
document destined for handler class
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler
at
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:214)
at
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseListBucketObjectsResponse(XmlResponsesSaxParser.java:298)
at
com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller.unmarshall(Unmarshallers.java:70)
at
com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller.unmarshall(Unmarshallers.java:59)
at
com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
at
com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
at
com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1554)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1272)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4137)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4079)
at
com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:819)
at
com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:791)
at
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.getRegisteredAddresses(TcpDiscoveryS3IpFinder.java:146)
... 4 more
Caused by: com.amazonaws.AbortedException:
at
com.amazonaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:53)
at
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:81)
at
com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.base/java.io.InputStreamReader.read(InputStreamReader.java:185)
at java.base/java.io.BufferedReader.read1(BufferedReader.java:210)
at java.base/java.io.BufferedReader.read(BufferedReader.java:287)
at java.base/java.io.Reader.read(Reader.java:229)
at
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:186)
... 24 more
[tcp-disco-msg-worker-#2] INFO
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - New next node
[newNext=TcpDiscoveryNode [id=bc657c40-27dd-4190-af04-e53068176937,
addrs=[10.0.11.210, 127.0.0.1, 172.17.0.1, 172.18.0.1],
sockAddrs=[ip-172-17-0-1.eu-west-1.compute.internal/172.17.0.1:47500, /
127.0.0.1:47500, production-algo-spot-instance-ASG/10.0.31.153:47500, /
10.0.11.210:47500, ip-172-18-0-1.eu-west-1.compute.internal/172.18.0.1:47500],
discPort=47500, order=8, intOrder=7, lastExchangeTime=1569318655381,
loc=false, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]]
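For context, the failing component in the stack trace above is the S3 IP
finder, which is typically wired like this — a sketch only; the bucket name is
taken from the log above, and the "awsCreds" credentials bean is hypothetical:

```xml
<!-- Sketch: TcpDiscoverySpi backed by the S3 IP finder. -->
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
                <property name="bucketName" value="ignite-configurations-production"/>
                <!-- "awsCreds" is a hypothetical bean holding the AWS credentials. -->
                <property name="awsCredentials" ref="awsCreds"/>
            </bean>
        </property>
    </bean>
</property>
```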

On Tue, 24 Sep 2019 at 15:59, Marco Bernagozzi 
wrote:

> It was on AWS, and I don't have the IP logs of all the instances I had up.
> My best guess is that it was just one of the slave instances.
> I have two sets of machines, masters and slaves. They are all servers. The
> masters create caches and distribute caches to a set of slaves using a node
> filter.
> Here are the options I'm using to run it
>
> CMD ["java", "-jar", "-XX:+AlwaysPreTouch", 

Re: Ignite bulk data load issue

2019-09-25 Thread Evgenii Zhuravlev
Well, yes, that could definitely be the reason; it's probably not enough. To
make the initial data load faster, you can either disable the WAL for some time
or move the WAL to a separate disk:
https://apacheignite.readme.io/docs/durable-memory-tuning#section-separate-disk-device-for-wal

Also, you can just choose the io1 disk type or give more IOPS to the disk; that
will help too.
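If the table was created through SQL and your Ignite version supports
ALTER TABLE ... NOLOGGING, the WAL can be switched off just for the load and
re-enabled afterwards. A sketch — the table, columns, and file path are made-up
examples, and note that data written while logging is off is not crash-safe
until it is checkpointed:

```sql
-- Sketch: disable WAL for the target table only for the duration of the bulk load.
ALTER TABLE city NOLOGGING;

COPY FROM '/data/city.csv' INTO city (id, name) FORMAT CSV;

ALTER TABLE city LOGGING;
```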

Best Regards,
Evgenii

On Tue, Sep 24, 2019 at 11:54, Muhammed Favas <
favas.muham...@expeedsoftware.com> wrote:

> Thanks Evgenii,
>
>
>
> My cluster is AWS EC2 and below is the disk specification on each node
>
>
>
>
>
> *Regards,*
>
> *Favas  *
>
>
>
> *From:* Evgenii Zhuravlev 
> *Sent:* Tuesday, September 24, 2019 12:44 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Ignite bulk data load issue
>
>
>
> Hi,
>
>
>
> Looks like you can have really slow disks. What kind of disk do you have
> there? I see throttling in logs, Because the write operation is really
> slow.
>
>
>
> Evgenii
>
>
>
On Mon, Sep 23, 2019 at 13:07, Muhammed Favas <
> favas.muham...@expeedsoftware.com> wrote:
>
> Hi,
>
>
>
> I need help figuring out the issues identified during the bulk load of
> data into an Ignite cluster.
>
> My cluster consists of 5 nodes, each with an 8-core CPU, 32 GB RAM and 30 GB
> of disk. Ignite native persistence is also enabled for the table.
>
> I am trying to load data into my Ignite SQL table from CSV files using the
> COPY command. Each file consists of 50 million records, and I have numerous
> files of the same size. Initially the loading was quite fast, but after some
> time the data load became very slow, and now it takes hours to load even a
> single file. Below are my observations:
>
>
>
- When I trigger the load for the first time after a pause, the CPU usage
is at a promising level, and at that time the data load rate is higher.
- After loading 2-3 files, the CPU usage starts dropping to less than 1%
and it continues in that state forever.
- Then I stop the loading process for some time and restart it; it again
performs well, and after some time the same situation happens.
>
>
>
> When I checked the log file, I saw certain thread waits happening; I believe
> that due to these waits the CPU usage is dropping down. I have attached the
> entire log file.
>
> Can someone help me figure out why this happens in Ignite? Or is it something
> I have configured wrong? Below is my configuration file content:
>
>
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
>
> 
>
> 
>
> 
>
>  class="org.apache.ignite.configuration.DataStorageConfiguration">
>
>
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
>
> 
>
> 
>
> 
>
> 
>
> 
>
>  value="#{2L * 1024 * 1024 * 1024}"/>
>
> 
>
> 
>
> 
>
> 
>
>
>
> 
>
> 
>
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
>
>
>
>
> *Regards,*
>
> *Favas  *
>
>
>
>


Re: Ignite performance - sudden drop after upgrade from 2.3.0

2019-09-25 Thread Ilya Kasnacheev
Hello!

Please collect profiling (e.g. using JFR) from both runs and search for hot
spots in both scenarios. If you find anything suspicious, please file a
ticket.

We can also look at profiling results for you.

Unfortunately, I'm not good enough with Yardstick to run your benchmarks
locally. It would be nice if you could turn it into a reproducer test.

Regards,
-- 
Ilya Kasnacheev


On Wed, Sep 11, 2019 at 12:16, Oleg Yurkivskyy (Luxoft) <
oleg.yurkivs...@tudor.com> wrote:

> Hi,
>
>
>
> I have an application running on ignite cluster with 4 nodes.
>
>
>
> /usr/bin/java -version
>
> openjdk version "1.8.0_191"
>
> OpenJDK Runtime Environment (build 1.8.0_191-b12)
>
> OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
>
>
>
> The application runs on apache-ignite-2.3.0. I need to upgrade it to the
> latest version of ignite (currently 2.7.5)
>
> I have created a benchmarking project (maven) to check what to expect
> after upgrade.
>
> It is attached in archived file, called "data-grid-benchmarks".
>
> This is a copy of standard ignite benchmarking project from 2.3.0 release
> with a custom cache benchmarks CacheBenchmarkBin and CacheBenchmarkExt.
>
> Different classes use different serialization (Binary and Externalizable).
>
> I ran it on 2.3.0 2.4.0 and 2.7.5 versions of apache ignite (3 servers and
> 1 driver) with the following command:
>
>
>
> ./bin/benchmark-run-all.sh
> config/benchmark-company.properties
>
>
>
> It shows, that starting from 2.4.0 release there is a 2 times drop in
> performance of partitioned transactional cache.
>
> Replicated cache is not affected in such a way.
>
> Also performance doesn't depend on serialization.
>
> In attached archive you can see results of benchmarks generated by
> standard yardstick tool for different version of ignite.
>
> There you can see the drop of
> RELEASE-CacheBenchmarkBinPartitionTransactional-1-backup
>
>
>
> Average test-method throughput (operations/sec) by Apache Ignite version:
>
>   Test method                                        2.3.0      2.4.0      2.7.5
>   -------------------------------------------------------------------------------
>   CacheBenchmarkBinPartitionTransactional-1-backup   8,845.84   4,101.48   3,956.17
>   CacheBenchmarkBinReplicateTransactional-1-backup   2,204.89   2,197.45   2,124.80
>
>
>
> I need to answer the question what can be fixed in the code for test
> (CacheBenchmarkBin.test or ActionCallable) to avoid the drop of performance?
>
> Does it have something to do with the cache configuration
> (ignite-cache-partition-tx-config.xml)?
>
> Also, it could be the bug in ignite, that needs to be fixed.
>
>
>
> Regards,
>
> Oleg
>
>
>
> _
>
> This communication is intended only for the addressee(s) and may contain
> confidential information. We do not waive any confidentiality by
> misdelivery. If you receive this communication in error, any use,
> dissemination, printing or copying is strictly prohibited; please destroy
> all electronic and paper copies and notify the sender immediately.
>


Re: Ignite transaction recovery on third-party persistence

2019-09-25 Thread bijunathg
Thanks Denis & Ilya! 
We found an issue if the transaction coordinator exits after successfully
committing to the third-party store, but before propagating the commit to
the Ignite server nodes in the cluster. We observe two behaviors:
1. *If the committed transaction had insert() statements, those entries are
not available in cache*
2. If the committed transaction had update() statements, those updated
entries are correctly available in cache.

Any explanation for this behavior?

Note: Our CacheAtomicityMode is set as TRANSACTIONAL and
CacheWriteSynchronizationMode is set as FULL_SYNC. Also our application
pre-loads all entries in cache at startup and we serve SQL queries from
cache.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: possible ignite bug around binary marshaller and data affinity

2019-09-25 Thread Ilya Kasnacheev
Hello!

I had trouble following your reproducer, but I think the issue is related to
the fact that you are using QueryEntity.

QueryEntity means that your annotations are not used. In this case you
should use CacheKeyConfiguration to configure affinity for such caches:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheKeyConfiguration.html

Why does it work for Externalizable? It's hard to say; overall I think it's
related to https://issues.apache.org/jira/browse/IGNITE-9964 or something similar.
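
As a rough illustration (the key class name com.example.EntityKey and its tid
field are assumptions taken from this thread, not verified against the actual
benchmark code), configuring affinity for a QueryEntity-based cache via
CacheKeyConfiguration might look like this:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheKeyConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AffinitySketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Declare "tid" as the affinity key field of EntityKey. This replaces
        // the @AffinityKeyMapped annotation, which QueryEntity-based caches ignore.
        cfg.setCacheKeyConfiguration(
            new CacheKeyConfiguration("com.example.EntityKey", "tid"));

        try (Ignite ignite = Ignition.start(cfg)) {
            // Caches started from this node now route entries by the tid field,
            // so affinityCall + setLocal(true) queries should see co-located data.
        }
    }
}
```

This is a configuration sketch only; it needs a running Ignite cluster with the
real key class on the classpath.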

Regards,



-- 
Ilya Kasnacheev


чт, 19 сент. 2019 г. в 17:26, Oleg Yurkivskyy (Luxoft) <
oleg.yurkivs...@tudor.com>:

> Hi,
>
>
>
> It appears that when using the default binary serialization, the
> @AffinityKeyMapped annotation doesn't seem to work, and queries that rely on
> affinity (using setLocal) don't find the data they would be expected to find.
>
>
>
> Please find below a project that can reproduce the issue (versions 2.7.5
> and 2.7.6 are tested).
>
> It is based on standard yardstick benchmark.
>
> It can be run using this command after build:
>
>
>
> ./bin/benchmark-run-all.sh config/benchmark-company.properties
>
>
>
> The tests run on 3 instances of ignite server and 1 instance of driver.
>
> The test application populates the cache with some data, using separate Java
> classes as key and value objects.
>
> The key class EntityKey contains the affinity key tid.
>
> Once the cache is populated with test data, the benchmark runs
> affinityCall(ActionAffinityCallable) using an affinity key.
>
> ActionAffinityCallable executes the query twice, once with .setLocal(true)
> and once with .setLocal(false).
>
> Normally both queries should return the same data, because the query
> selects data using the affinity key.
>
> It works OK when the entity and the key are Java Externalizable.
>
> With the binary marshaller these queries return different amounts of data,
> which means the affinity key is not working as expected and the data is
> spread across different nodes.
>
> It doesn't depend on the query type: SqlQuery and ScanQuery have the same
> problem, though SqlQuery is much faster.
>
> The project uses different permutations of BinarySerialization and
> standard serialization used with SqlQueries and ScanQueries. If Affinity
> DID work, then the results of both setLocal(true) and setLocal(false)
> should return the same results (as we use the affinityCall function.
> However as you can see below, that is not true for BinarySerialization
>
>
>
> It can be seen in log files like this:
>
>
>
> $  grep -r  "\*\*\* AffinityBinSql" output/ |wc -l
>
> 641153
>
> $  grep -r  "\*\*\* AffinityBinScan" output/ |wc -l
>
> 40736
>
> $  grep -r  "\*\*\* AffinityExtSql" output/ |wc -l
>
> 0
>
> $  grep -r  "\*\*\* AffinityExtScan" output/ |wc -l
>
> 0
>
>
>
> The workaround for this bug is to use standard Java serialization or
> .setLocal(false) for queries.
>
> Both workarounds result in higher CPU and network usage and slow things
> down.
>
> For example, .setLocal(false) is 3 times slower than .setLocal(true) for
> the application attached.
>
>
>
> Please recommend what can I do in this situation.
>
> Do I need to create an issue in Ignite Jira for this bug?
>
>
>
> Regards,
>
> Oleg
>
> _
>
> This communication is intended only for the addressee(s) and may contain
> confidential information. We do not waive any confidentiality by
> misdelivery. If you receive this communication in error, any use,
> dissemination, printing or copying is strictly prohibited; please destroy
> all electronic and paper copies and notify the sender immediately.
>


Re: How to update QueryEntities property of CacheConfiguration for existing/already created cache

2019-09-25 Thread siva
Thanks, Pavel, for helping. OK, I will go through the documentation for cache
groups and check it out.







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to update QueryEntities property of CacheConfiguration for existing/already created cache

2019-09-25 Thread Pavel Tupitsyn
It turns out we can't create multiple tables per cache with SQL DDL; this is a
design limitation.
The error message is misleading, so I'll file a ticket to fix it.

So again, I would advise using the one-table-per-cache approach.
To reduce the overhead of creating thousands of caches, you can leverage the
Cache Groups feature:
https://apacheignite.readme.io/docs/cache-groups

So instead of one cache per company, use one cache group per company.

Does this work for you?
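
As a sketch of that idea (the cache and group names here are illustrative, not
taken from your code):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheGroupSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // One cache per table, but both caches share the "company1" group,
            // so they reuse the same partition infrastructure and cut overhead.
            CacheConfiguration<Object, Object> students =
                new CacheConfiguration<>("company1_Student").setGroupName("company1");
            CacheConfiguration<Object, Object> lecturers =
                new CacheConfiguration<>("company1_Lecturer").setGroupName("company1");

            ignite.getOrCreateCache(students);
            ignite.getOrCreateCache(lecturers);
        }
    }
}
```

This is a configuration sketch that assumes a locally startable node; adapt the
key/value types and names to your schema.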


On Wed, Sep 25, 2019 at 10:42 AM Pavel Tupitsyn 
wrote:

> Yes, you are correct, now I see the problem too. Let me investigate it a
> bit and get back to you.
>
> On Wed, Sep 25, 2019 at 10:37 AM siva  wrote:
>
>> Hi Pavel,
>>
>> Actually, the Lecturer table does not exist, since the cache was created with
>> the Student query entity only. If we try to create another table (let's say
>> HODepartment) with the already created cache, the same exception is thrown.
>>
>>
>>
>>
>> Thanks
>> siva
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: How to update QueryEntities property of CacheConfiguration for existing/already created cache

2019-09-25 Thread Pavel Tupitsyn
Yes, you are correct, now I see the problem too. Let me investigate it a
bit and get back to you.

On Wed, Sep 25, 2019 at 10:37 AM siva  wrote:

> Hi Pavel,
>
> Actually, the Lecturer table does not exist, since the cache was created with
> the Student query entity only. If we try to create another table (let's say
> HODepartment) with the already created cache, the same exception is thrown.
>
>
>
>
> Thanks
> siva
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: nodes are restarting when i try to drop a table created with persistence enabled

2019-09-25 Thread Denis Mekhanikov
I think the issue is that Ignite can't recover from
IgniteOutOfMemory, even by removing data.
Shiva, did IgniteOutOfMemory occur for the first time when you did the
DROP TABLE, or before that?

Denis
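
For reference, the sizing hints in the "Out of memory in data region" error
quoted below map onto the data-region configuration roughly as follows (the
6 GiB figure is purely an illustration of "increase maxSize", not a
recommendation for this workload):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionSketch {
    public static IgniteConfiguration configure() {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("Default_Region");
        region.setInitialSize(500L * 1024 * 1024);   // 500 MiB, as in the log
        region.setMaxSize(6L * 1024 * 1024 * 1024);  // raise the 3 GiB cap
        region.setPersistenceEnabled(true);          // allow pages to evict to disk

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}
```

This is a configuration fragment; actual sizes must fit within the container's
memory limits.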

ср, 25 сент. 2019 г. в 02:30, Denis Magda :
>
> Shiva,
>
> Does this issue still exist? Ignite Dev how do we debug this sort of thing?
>
> -
> Denis
>
>
> On Tue, Sep 17, 2019 at 7:22 AM Shiva Kumar  wrote:
>>
>> Hi dmagda,
>>
>> I am trying to drop a table which has around 10 million records, and I am
>> seeing "Out of memory in data region" error messages in the Ignite logs, and
>> the Ignite node [Ignite pod on Kubernetes] is restarting.
>> I have configured 3 GB for the default data region, 7 GB for the JVM, and
>> 15 GB total for the Ignite container, and enabled native persistence.
>> Earlier I was under the impression that the restart was caused by
>> "SYSTEM_WORKER_BLOCKED" errors, but now I realize that
>> "SYSTEM_WORKER_BLOCKED" is in the ignored-failures list and the actual cause
>> is "CRITICAL_ERROR" due to "Out of memory in data region".
>>
>> This is the error messages in logs:
>>
>> ""[2019-09-17T08:25:35,054][ERROR][sys-#773][] JVM will be halted 
>> immediately due to the failure: [failureCtx=FailureContext 
>> [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: 
>> Failed to find a page for eviction [segmentCapacity=971652, loaded=381157, 
>> maxDirtyPages=285868, dirtyPages=381157, cpPages=0, pinnedInSegment=3, 
>> failedToPrepare=381155]
>> Out of memory in data region [name=Default_Region, initSize=500.0 MiB, 
>> maxSize=3.0 GiB, persistenceEnabled=true] Try the following:
>>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>>   ^-- Enable eviction or expiration policies]]
>>
>> Could you please help me understand why the drop table operation is causing
>> "Out of memory in data region"? And how can I avoid it?
>>
>> We have a use case where application inserts records to many tables in 
>> Ignite simultaneously for some time period and other applications run a 
>> query on that time period data and update the dashboard. we need to delete 
>> the records inserted in the previous time period before inserting new 
>> records.
>>
>> even during delete from table operation, I have seen:
>>
>> "Critical system error detected. Will be handled accordingly to configured 
>> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
>> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], 
>> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
>> o.a.i.IgniteException: Checkpoint read lock acquisition has been timed 
>> out.]] class org.apache.ignite.IgniteException: Checkpoint read lock 
>> acquisition has been timed out.|
>>
>>
>>
>> On Mon, Apr 29, 2019 at 12:17 PM Denis Magda  wrote:
>>>
>>> Hi Shiva,
>>>
>>> That was designed to prevent global cluster performance degradation or
>>> other outages. Have you tried to apply my recommendation of turning off the
>>> failure handler for these system threads?
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Sun, Apr 28, 2019 at 10:28 AM shivakumar  
>>> wrote:

 HI Denis,

 is there any specific reason for the blocking of critical thread, like CPU
 is full or Heap is full ?
 We are again and again hitting this issue.
 is there any other way to drop tables/cache ?
 This looks like a critical issue.

 regards,
 shiva



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to update QueryEntities property of CacheConfiguration for existing/already created cache

2019-09-25 Thread siva
Hi Pavel,

Actually, the Lecturer table does not exist, since the cache was created with
the Student query entity only. If we try to create another table (let's say
HODepartment) with the already created cache, the same exception is thrown.




Thanks
siva



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to update QueryEntities property of CacheConfiguration for existing/already created cache

2019-09-25 Thread Pavel Tupitsyn
Ok, so Lecturer table already exists when you try to create it again.
Just run DROP TABLE IF EXISTS Lecturer before trying to create it.
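
For example, through the SQL API (a sketch; the cache used to run the DDL is
arbitrary and its name here is made up):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class DropTableSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // DDL statements can be issued through any cache's query API.
            IgniteCache<?, ?> cache = ignite.getOrCreateCache("utility");
            cache.query(new SqlFieldsQuery("DROP TABLE IF EXISTS Lecturer")).getAll();
        }
    }
}
```

This DDL fragment assumes a running node; after the drop, the CREATE TABLE for
Lecturer should succeed.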

On Tue, Sep 24, 2019 at 12:26 PM siva  wrote:

> Hi Pavel,
>
> I have attached github link for the reproducer of above exception
>
>
> https://github.com/cvakarna/ignitereproducer
> 
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>