[GitHub] ololo3000 opened a new pull request #70: IGNITE-10215 Suite critical failures registration added

2018-11-14 Thread GitBox
ololo3000 opened a new pull request #70: IGNITE-10215 Suite critical failures 
registration added
URL: https://github.com/apache/ignite-teamcity-bot/pull/70
 
 
   




[GitHub] ignite pull request #5389: Ignite 2.5.1 p160

2018-11-14 Thread antonovsergey93
Github user antonovsergey93 closed the pull request at:

https://github.com/apache/ignite/pull/5389


---


[jira] [Created] (IGNITE-10245) o.a.i.internal.util.nio.ssl.GridNioSslFilter failed with Assertion if invalid SSL Cipher suite name specified

2018-11-14 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-10245:
-

 Summary: o.a.i.internal.util.nio.ssl.GridNioSslFilter failed with 
Assertion if invalid SSL Cipher suite name specified
 Key: IGNITE-10245
 URL: https://issues.apache.org/jira/browse/IGNITE-10245
 Project: Ignite
  Issue Type: Task
Reporter: Alexey Kuznetsov
Assignee: Alexey Kuznetsov
 Fix For: 2.8


This issue is related to IGNITE-10189.

If an invalid cipher suite name is specified, GridNioSslFilter fails with an assertion error in
the org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter#sslHandler method.

 

Need to investigate and fix.
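A minimal sketch of the kind of check that could replace the assertion, assuming validation against the cipher suites supported by the JVM's SSLEngine; the helper class and the exception type below are illustrative, not the actual fix:

{noformat}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class CipherSuiteCheck {
    /** Fails with a descriptive exception (instead of an assertion) on an unknown cipher suite name. */
    public static void validate(SSLContext sslCtx, String[] configuredSuites) {
        SSLEngine engine = sslCtx.createSSLEngine();

        Set<String> supported = new HashSet<>(Arrays.asList(engine.getSupportedCipherSuites()));

        for (String suite : configuredSuites) {
            if (!supported.contains(suite))
                throw new IllegalArgumentException("Unsupported SSL cipher suite: " + suite);
        }
    }
}
{noformat}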

 





[jira] [Created] (IGNITE-10247) TX benchmarks with TRANSACTIONAL_SNAPSHOT caches do not handle write conflicts

2018-11-14 Thread Ivan Artukhov (JIRA)
Ivan Artukhov created IGNITE-10247:
--

 Summary: TX benchmarks with TRANSACTIONAL_SNAPSHOT caches do not 
handle write conflicts
 Key: IGNITE-10247
 URL: https://issues.apache.org/jira/browse/IGNITE-10247
 Project: Ignite
  Issue Type: Bug
  Components: yardstick
Affects Versions: 2.7
Reporter: Ivan Artukhov


When I run, e.g., IgnitePutGetTxBenchmark on a cache with TRANSACTIONAL_SNAPSHOT 
atomicity mode, I get the following exception and then the benchmark driver stops:

{noformat}
Finishing main test [ts=1542181024722, date=Wed Nov 14 10:37:04 MSK 2018]
ERROR: Shutting down benchmark driver to unexpected exception.
Type '--help' for usage.
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
Cannot serialize transaction due to write conflict (transaction is marked for 
rollback)
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1756)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1108)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:820)
at 
org.apache.ignite.yardstick.cache.IgnitePutGetTxBenchmark$1.call(IgnitePutGetTxBenchmark.java:56)
at 
org.apache.ignite.yardstick.cache.IgnitePutGetTxBenchmark$1.call(IgnitePutGetTxBenchmark.java:45)
at 
org.apache.ignite.yardstick.IgniteBenchmarkUtils.doInTransaction(IgniteBenchmarkUtils.java:80)
at 
org.apache.ignite.yardstick.cache.IgnitePutGetTxBenchmark.test(IgnitePutGetTxBenchmark.java:65)
at 
org.yardstickframework.impl.BenchmarkRunner$2.run(BenchmarkRunner.java:178)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: Cannot serialize 
transaction due to write conflict (transaction is marked for rollback)
at 
org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7427)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:261)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:172)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.NearTxResultHandler.createResponse(NearTxResultHandler.java:80)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.NearTxResultHandler.createResponse(NearTxResultHandler.java:67)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.NearTxResultHandler.apply(NearTxResultHandler.java:107)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.NearTxResultHandler.apply(NearTxResultHandler.java:36)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:395)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:349)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:337)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:507)
at 
org.apache.ignite.internal.processors.cache.GridCacheFutureAdapter.onDone(GridCacheFutureAdapter.java:55)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:486)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.onDone(GridDhtTxAbstractEnlistFuture.java:1054)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:474)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.continueLoop(GridDhtTxAbstractEnlistFuture.java:564)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.init(GridDhtTxAbstractEnlistFuture.java:364)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.processNearTxEnlistRequest(GridDhtTransactionalCacheAdapter.java:2061)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.access$900(GridDhtTransactionalCacheAdapter.java:112)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$14.apply(GridDhtTransactionalCacheAdapter.java:229)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$14.apply(GridDhtTransactionalCacheAdapter.java:227)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMess
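With TRANSACTIONAL_SNAPSHOT (MVCC) caches, a write conflict marks the transaction for rollback and the caller is expected to retry. A minimal retry sketch using the public transaction API; the class name, key/value types and message matching are illustrative and not the actual benchmark fix:

{noformat}
import javax.cache.CacheException;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class WriteConflictRetry {
    /** Retries the put until it commits or the retry budget is exhausted. */
    public static void putWithRetry(Ignite ignite, IgniteCache<Integer, Integer> cache,
        int key, int val, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                cache.put(key, val);

                tx.commit();

                return;
            }
            catch (CacheException e) {
                // Write conflict: the transaction is already marked for rollback, so just retry.
                if (e.getMessage() == null || !e.getMessage().contains("write conflict"))
                    throw e;
            }
        }

        throw new IllegalStateException("Failed to commit after " + maxRetries + " attempts");
    }
}
{noformat}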

[jira] [Created] (IGNITE-10246) StandaloneWALRecordIterator must throw checkBounds exception

2018-11-14 Thread Alexey Stelmak (JIRA)
Alexey Stelmak created IGNITE-10246:
---

 Summary: StandaloneWALRecordIterator must throw checkBounds 
exception
 Key: IGNITE-10246
 URL: https://issues.apache.org/jira/browse/IGNITE-10246
 Project: Ignite
  Issue Type: Bug
Reporter: Alexey Stelmak








[GitHub] ololo3000 opened a new pull request #71: IGNITE-9939 Empty failed suites displaying in blocker's count added

2018-11-14 Thread GitBox
ololo3000 opened a new pull request #71: IGNITE-9939  Empty failed suites 
displaying in blocker's count added
URL: https://github.com/apache/ignite-teamcity-bot/pull/71
 
 
   




[GitHub] ignite pull request #5391: IGNITE-5115

2018-11-14 Thread NSAmelchev
GitHub user NSAmelchev opened a pull request:

https://github.com/apache/ignite/pull/5391

IGNITE-5115

The problem was that the coordinator fails when it processes the node-failed message about
itself. A reproducer is attached to the PR.
I have fixed this issue by not removing the coordinator from the ring immediately (similar to
the node-leave handling). When the coordinator processes the message, it sends a verify message
across the ring and the other nodes remove it from their ring map. The new coordinator then
sends the discard message and completes the node-fail processing.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NSAmelchev/ignite ignite-5115

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5391.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5391


commit e8b41ba1886d736ee46be0559caa230e26e55936
Author: NSAmelchev 
Date:   2018-11-14T09:09:14Z

Fix coordinator fails




---


[GitHub] asfgit closed pull request #70: IGNITE-10215 Suite critical failures registration added

2018-11-14 Thread GitBox
asfgit closed pull request #70: IGNITE-10215 Suite critical failures 
registration added
URL: https://github.com/apache/ignite-teamcity-bot/pull/70
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/IgnitePersistentTeamcity.java b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/IgnitePersistentTeamcity.java
index e970c973..e9befbe5 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/IgnitePersistentTeamcity.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/IgnitePersistentTeamcity.java
@@ -1116,6 +1116,16 @@ public ProblemOccurrences getProblems(int buildId) {
         return teamcity.getProblems(buildId);
     }
 
+    /** {@inheritDoc} */
+    @Deprecated
+    @Override public ProblemOccurrences getProblemsAndRegisterCtiticals(BuildRef build) {
+        ProblemOccurrences problems = teamcity.getProblems(build.getId());
+
+        registerCriticalBuildProblemInStat(build, problems);
+
+        return problems;
+    }
+
     @Override
     public Statistics getStatistics(int buildId) {
         return teamcity.getStatistics(buildId);
diff --git a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/IgniteTeamcityConnection.java b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/IgniteTeamcityConnection.java
index 5f044c06..71776546 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/IgniteTeamcityConnection.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/IgniteTeamcityConnection.java
@@ -317,6 +317,13 @@ public void init(@Nullable String tcName) {
         }
     }
 
+    /** {@inheritDoc} */
+    @Deprecated
+    @AutoProfiling
+    @Override public ProblemOccurrences getProblemsAndRegisterCtiticals(BuildRef build) {
+        return getProblems(build.getId());
+    }
+
     @Override
     @AutoProfiling
     public ProblemOccurrences getProblems(int buildId) {
diff --git a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/ignited/fatbuild/ProactiveFatBuildSync.java b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/ignited/fatbuild/ProactiveFatBuildSync.java
index dd111dd4..3eab4372 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/ignited/fatbuild/ProactiveFatBuildSync.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/ignited/fatbuild/ProactiveFatBuildSync.java
@@ -287,7 +287,7 @@ public FatBuildCompacted reloadBuild(ITeamcityConn conn, int buildId, @Nullable
         }
 
         if (build.problemOccurrences != null)
-            problems = conn.getProblems(buildId).getProblemsNonNull();
+            problems = conn.getProblemsAndRegisterCtiticals(build).getProblemsNonNull();
 
         if (build.statisticsRef != null)
             statistics = conn.getStatistics(buildId);
diff --git a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/pure/ITeamcityConn.java b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/pure/ITeamcityConn.java
index 52ae38bf..27dda7ee 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/pure/ITeamcityConn.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/pure/ITeamcityConn.java
@@ -72,4 +72,8 @@
     ChangesList getChangesList(int buildId);
 
     Change getChange(int changeId);
+
+    /** */
+    @Deprecated
+    ProblemOccurrences getProblemsAndRegisterCtiticals(BuildRef build);
 }


 




[GitHub] ignite pull request #5392: IGNITE-10189 Added SSL ciphers suites support for...

2018-11-14 Thread akuznetsov-gridgain
GitHub user akuznetsov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/5392

IGNITE-10189 Added SSL ciphers suites support for utilities.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10189-2.5.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5392.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5392


commit b18f5a7158cd7d04d8d4477e77854871bd4371e3
Author: ezagumennov 
Date:   2018-07-16T14:15:05Z

IGNITE-8745 Add ability to monitor TCP discovery ring information - Fixes 
#4256.

Signed-off-by: Ivan Rakov 

commit 489acccb2408ffb1ef103e94040a1c268fba2d31
Author: Dmitriy Govorukhin 
Date:   2018-07-17T13:48:43Z

IGNITE-8684 Fixed infinite loop of partition single/full messages when 
partition state does not change - Fixes #4287.

(cherry picked from commit dd47fab)

commit 7437c1a2060393891b69463865fdd1e843715232
Author: Dmitriy Govorukhin 
Date:   2018-07-17T14:23:28Z

Merge remote-tracking branch 'professional/ignite-2.5.1-master' into 
ignite-2.5.1-master

commit 0ae822cba971687256f7b8843e5ff61b7f6dc936
Author: Ivan Daschinskiy 
Date:   2018-07-17T13:52:22Z

IGNITE-8975 Invalid initialization of compressed archived WAL segment when 
WAL compression is switched off. - Fixes #4345.

Signed-off-by: Ivan Rakov 

(cherry picked from commit 46db052)

commit b5ce4f44fd767c178daa9b05590368b0e9e648cd
Author: Eduard Shangareev 
Date:   2018-07-12T16:26:48Z

IGNITE-8955 Fix of test after Checkpoint can't get write lock if massive 
eviction on node start started

(cherry picked from commit 584a88d)
Signed-off-by: EdShangGG 

commit 4e3d700056eb83403a25ccff4eed19d540e5e54d
Author: DmitriyGovorukhin 
Date:   2018-07-18T13:16:58Z

IGNITE-8929 Do not disable WAL if node does not have MOVING partitions. 
Fixes #4372

(cherry picked from commit 2b22933)

commit 4ac50fdc42b3df41bcf2054a0e8f7264929fd2e1
Author: Eduard Shangareev 
Date:   2018-07-20T14:22:52Z

IGNITE-9039 Fixed non-releasing pinned page on throttling fallback - Fixes 
#4390.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit 0b0)

commit afeaeea37a687c7329ae98a1a1925abf7727a812
Author: Evgeny Stanilovskiy 
Date:   2018-07-23T08:56:21Z

IGNITE-8892 Fixed OOME when scan query is used for a big partition - Fixes 
#4391.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit a164296)

commit 47d299c4961143f27d1f8aaa0d8c8e26b8ade7ed
Author: Evgeny Stanilovskiy 
Date:   2018-07-23T08:56:21Z

IGNITE-8892 Fixed OOME when scan query is used for a big partition - Fixes 
#4391.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit a164296)

commit a6a09865f776453ed88c093f4a58852ecb7efb0e
Author: Andrey V. Mashenkov 
Date:   2018-07-23T09:39:17Z

Merge remote-tracking branch 'origin/ignite-2.5.1-master' into 
ignite-2.5.1-master

commit e2695cd960b7f0cbbba545ef60645ea964f3d7fa
Author: Dmitriy Govorukhin 
Date:   2018-07-23T08:25:49Z

IGNITE-9042 Fixed partial transaction state when transaction is timed out 
- Fixes #4397.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit 33f485a)

commit 702c31859251738b187e8db2a8155264f39b8b52
Author: Ivan Daschinskiy 
Date:   2018-07-23T12:29:08Z

IGNITE-8820 Fix rollback logic when tx is initiated from client. - Fixes 
#4399.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit 5794eb0)

commit 32777e77797f4cf3260d42d4ca408336247d4e55
Author: Dmitriy Govorukhin 
Date:   2018-07-23T15:01:37Z

IGNITE-9049 Fixed write of SWITCH_SEGMENT_RECORD at the end of a segment 
file - Fixes #4401.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit 713a428)

commit bbf8f83afe3b07e807912d9bab78873edf698d4d
Author: Andrey V. Mashenkov 
Date:   2018-07-24T14:38:34Z

IGNITE-8892 Fixed CacheQueryFuture usage in DataStructures processor - 
Fixes #4415.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit 281a400)

commit e0404913088b5cd97cd79de1b62e6aaa4c00b5d2
Author: Andrey V. Mashenkov 
Date:   2018-07-24T15:24:57Z

Merge remote-tracking branch 'origin/ignite-2.5.1-master' into 
ignite-2.5.1-master

commit 86c52700a007368a351ac3595a8e38bb02923080
Author: Sergey Chugunov 
Date:   2018-07-25T13:26:12Z

IGNITE-9040 StopNodeFailureHandler is not able to stop node correctly on 
node segmentation

Signed-off-by: Andrey Gura 

(cherry-picked from commit#469aaba59c0539507972f4725642b2f2f81c08a0)

commit e1c3cc9d30baeaff4e653751775ab39a347b753b
Author: Pavel Kovalenko 
Date:   2018-07-24T14:48:57Z

IGNITE-8791 Fixed missed update counter in WAL data reco

Re: proposed design for thin client SQL management and monitoring (view running queries and kill it)

2018-11-14 Thread Denis Magda
Yury,

> As I understand, you mean that the view should contain both running and
> finished queries. To be honest, for this view I was going to include just the
> queries running right now. For finished queries I thought about another view
> with another set of fields, which should include the I/O related ones. Does that work?


Got you, so if only running queries are there then your initial proposal
makes total sense. Not sure we need a view of the finished queries. It will
be possible to analyze them through the updated DetailedMetrics approach,
won't it?

For "KILL QUERY node_id query_id"  node_id required as part of unique key
> of query and help understand Ignite which node start the distributed query.
> Use both parameters will allow cheap generate unique key across all nodes.
> Node which started a query can cancel it on all nodes participate nodes.
> So, to stop any queries initially we need just send the cancel request to
> node who started the query. This mechanism is already in Ignite.


Can we locate node_id behind the scenes if the user supplies query_id only?
A query record in the view already contains query_id and node_id, and it
sounds like extra work for the user to fill in all the details for us.
Embed node_id into query_id if you'd like to avoid extra network hops for
the query_id to node_id mapping.

--
Denis

On Wed, Nov 14, 2018 at 1:04 AM Юрий  wrote:

> Denis,
>
> Under the hood 'time' will be stored as startTime, but for the system view I planned
> to use duration, which is simply calculated as now - startTime. So there
> isn't a performance issue.
> As I understand, you mean that the view should contain both running and
> finished queries. To be honest, for this view I was going to include just the
> queries running right now. For finished queries I thought about another view
> with another set of fields, which should include the I/O related ones. Does that work?
>
> For "KILL QUERY node_id query_id"  node_id required as part of unique key
> of query and help understand Ignite which node start the distributed query.
> Use both parameters will allow cheap generate unique key across all nodes.
> Node which started a query can cancel it on all nodes participate nodes.
> So, to stop any queries initially we need just send the cancel request to
> node who started the query. This mechanism is already in Ignite.
>
> The native SQL APIs will automatically support the feature after it is implemented
> for thin clients. So we are good here.
>
>
>
> Tue, Nov 13, 2018 at 18:52, Denis Magda :
>
> > Yury,
> >
> > Please consider the following:
> >
> > - If we record the duration instead of startTime, then the former has to
> > be updated frequently - sounds like a performance red flag. Should we store
> > startTime and endTime instead? This way a query record will be updated
> > twice - when the query is started and terminated.
> > - In the IEP you've mentioned I/O related fields that should help to
> > grasp why a query runs that slow. Should they be stored in this view?
> > - "KILL QUERY query_id" is more than enough. Let's not add "node_id"
> > unless it's absolutely required. Our queries are distributed and executed
> > across several nodes that's why the node_id parameter is redundant.
> > - This API needs to be supported across all our interfaces. We can start
> > with JDBC/ODBC and thin clients and then support for the native SQL APIs
> > (Java, Net, C++)
> > - Please share examples of SELECTs in the IEP that would show how to
> > find long running queries, queries that cause a lot of I/O troubles.
> >
> > --
> > Denis
> >
> > On Tue, Nov 13, 2018 at 1:15 AM Юрий 
> wrote:
> >
> > > Igniters,
> > >
> > > Some comments on my original emails.
> > >
> > > The proposal relates to part of IEP-29
> > > <https://cwiki.apache.org/confluence/display/IGNITE/IEP-29%3A+SQL+management+and+monitoring>.
> > >
> > > What purpose are we pursuing with the proposal?
> > > We want to be able to check which queries are running right now through thin
> > > clients, get some information related to the queries, and be able to cancel
> > > a query if it is required for some reason.
> > > So, we need an interface to get the running queries. For this goal we propose
> > > the running_queries system view. The view contains a unique query identifier
> > > which needs to be passed to the KILL QUERY command to cancel the query.
> > >
> > > What do you think about the fields of the running_queries view? Maybe some
> > > useful fields could easily be added to the view.
> > >
> > > Also let's discuss the syntax for cancelling a query. I propose to use MySQL-like
> > > syntax as it is easy to understand and shorter than the Oracle and Postgres
> > > syntax (detailed information in IEP-29
> > > <https://cwiki.apache.org/confluence/display/IGNITE/IEP-29%3A+SQL+management+and+monitoring>).
> > >
> > >
> > >
> > > Mon, Nov 12, 2018 at 19:28, Юрий :
> > >
> > > > Igniters,
> > > >
> > > > Below is a proposed desig
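For reference, a minimal sketch of how the proposed view and command could be used from Java once implemented; the running_queries view, its columns and the KILL QUERY syntax are taken from the proposal above and IEP-29 and are not final:

{noformat}
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class KillQueryExample {
    /** Lists currently running queries and cancels the first one (proposed, not yet implemented, syntax). */
    public static void cancelFirstRunningQuery(IgniteCache<?, ?> cache) {
        // Proposed system view of the queries running right now.
        List<List<?>> running = cache.query(
            new SqlFieldsQuery("SELECT query_id, node_id, duration FROM running_queries")).getAll();

        if (running.isEmpty())
            return;

        Object qryId = running.get(0).get(0);

        // Proposed cancellation command; the query identifier comes from the view above.
        cache.query(new SqlFieldsQuery("KILL QUERY '" + qryId + "'")).getAll();
    }
}
{noformat}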

[jira] [Created] (IGNITE-10248) MVCC TX: remove redundant partition checking from GridDhtTxAbstractEnlistFuture

2018-11-14 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-10248:
-

 Summary: MVCC TX: remove redundant partition checking from 
GridDhtTxAbstractEnlistFuture
 Key: IGNITE-10248
 URL: https://issues.apache.org/jira/browse/IGNITE-10248
 Project: Ignite
  Issue Type: Improvement
  Components: mvcc
Reporter: Igor Seliverstov
 Fix For: 2.8


We need to ensure that on unstable topology all queries (even those that 
don't require a reducer) are executed with a reducer (which supports execution 
on unstable topology).

All verifications should be done inside the 
{{*IgniteH2Indexing#prepareDistributedUpdate*}} method.





[GitHub] ignite pull request #5381: IGNITE-10235 Fix error with double cache register...

2018-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5381


---


[GitHub] ignite pull request #5284: IGNITE-7955: MVCC TX: cache peek for key-value AP...

2018-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5284


---


[GitHub] ignite pull request #5061: IGNITE-3467: Return "DATABASE" as catalog name.

2018-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5061


---


Re: Suggestion to improve deadlock detection

2018-11-14 Thread ipavlukhin

Hi Igniters,

I would like to resume the discussion about a deadlock detector. I will start 
with the motivation for further work on the subject. As I see it, the current 
implementation (entry point IgniteTxManager.detectDeadlock) starts a 
detection only after a transaction has timed out. In my mind this is not 
very good from a product usability standpoint. As you know, in a 
deadlock situation some keys become unusable for an infinite amount 
of time. Currently the only way to work around it is configuring a 
timeout, but in practice it can be rather tricky to choose a 
proper/universal value for it. So, I see the main point as:


Ability to break deadlocks without a need to configure timeouts explicitly.

I will return soon with some thoughts about the implementation. Meanwhile, 
does anybody have in mind any other usability points which I am missing? 
Or are there any alternative approaches?


On 2017/11/21 08:32:02, Dmitriy Setrakyan wrote:
> On Mon, Nov 20, 2017 at 10:15 PM, Vladimir Ozerov wrote:
>
> > It doesn’t need all txes. Instead, other nodes will send info about
> > suspicious txes to it from time to time.
>
> I see your point, I think it might work.
>


[GitHub] ignite pull request #5393: Ignite 2.5.1 p151

2018-11-14 Thread antonovsergey93
GitHub user antonovsergey93 opened a pull request:

https://github.com/apache/ignite/pull/5393

Ignite 2.5.1 p151



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.5.1-p151

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5393.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5393


commit d3b140f763433686dbdd3f156edfeec1561df531
Author: devozerov 
Date:   2018-07-04T14:11:00Z

IGNITE-8925: SQL: Limit default number of threads for CREATE INDEX to 4. 
This closes #4301.

(cherry picked from commit 5cc41df7ac33ce4664f7db1c1cf8b3cf33d71cc8)

commit 72848848268c6c0facbd96ebff7ed1f6de10de29
Author: a-polyakov 
Date:   2018-07-06T15:31:23Z

IGNITE-8620 Remove intOrder and loc keys from node info in control.sh --tx 
utility

Signed-off-by: Andrey Gura 
(cherry picked from commit 532dc79a1459befa757849487f7aef2cb8608cee)

commit f9fbf2a5b50ba623599292e4cc1bab8893e698d8
Author: ascherbakoff 
Date:   2018-07-11T14:09:05Z

IGNITE-8942 In some cases grid cannot be deactivated because of hanging CQ 
internal cleanup. - Fixes #4329.

Signed-off-by: Ivan Rakov 

(cherry picked from commit 08f98e3)

commit 3e83d6ac0457811a212bc61e82ae00bddbd3fc52
Author: Pereslegin Pavel 
Date:   2018-07-11T14:25:34Z

IGNITE-7366 Affinity assignment exception in service processor during 
multiple nodes join - Fixes #4321.

Signed-off-by: Ivan Rakov 

(cherry picked from commit efa3269)

commit bc920ccb37f3bd31e98a61fc43c89038f804d12a
Author: Ivan Daschinskiy 
Date:   2018-07-11T14:33:57Z

IGNITE-8945 Stored cache data files corruption when node stops abruptly. - 
Fixes #4319.

Signed-off-by: Ivan Rakov 

(cherry picked from commit fff979a)

commit 86b465b61591b0b00b58c387c03e2cba86eb36e3
Author: Ivan Rakov 
Date:   2018-07-11T14:57:53Z

IGNITE-8965 Add logs in SegmentReservationStorage on exchange process

(cherry picked from commit ee909a3)

commit 02c3419e7d39a0db6ae0dcc6e84c5e9e89b54863
Author: Ivan Rakov 
Date:   2018-07-11T15:45:31Z

IGNITE-8946 AssertionError can occur during release of WAL history that was 
reserved for historical rebalance

(cherry picked from commit 54055ec)

commit 3d727d316c4187c46cf7499255a328a2e265f91d
Author: Alexey Goncharuk 
Date:   2018-07-11T16:50:58Z

IGNITE-8827 Fixed failing test

commit 5c269ebfaf91ce56e8825bf36f41db88fb791cd0
Author: Eduard Shangareev 
Date:   2018-07-11T16:43:19Z

IGNITE-8955 Checkpoint can't get write lock if massive eviction on node 
start started

Signed-off-by: Ivan Rakov 

(cherry picked from commit a0fa79a)

commit 692d8871fc7707fe16aebd6acc340637b9e122f1
Author: Aleksei Scherbakov 
Date:   2018-07-12T09:17:56Z

IGNITE-8863 Tx rollback can cause remote tx hang. - Fixes #4262.

Signed-off-by: Ivan Rakov 

(cherry picked from commit 6440e0c)

commit 8ca5d55ff43db466fba0fddea747fd69bf6fce13
Author: Sergey Chugunov 
Date:   2018-07-11T16:08:56Z

IGNITE-8905 Incorrect assertion in GridDhtPartitionsExchangeFuture - Fixes 
#4288.

Signed-off-by: Ivan Rakov 

(cherry-picked from commit#6ad291d2285726858e67b6ee9b28a14c134247cf)

commit 0f9da8bd157cdb40c8ea010efa3c297182a7d56c
Author: Alexey Goncharuk 
Date:   2018-06-27T08:30:54Z

IGNITE-8768 Fixed javadoc

(cherry picked from commit 67a2aac011d12a46cbd7d16a30f4f48b60f77075)

commit c048b74898450f7bee1c4c9cef475bec17e7d82c
Author: Vitaliy Biryukov 
Date:   2018-07-04T16:00:53Z

IGNITE-8182 Fix for 
ZookeeperDiscoverySpiTest#testRandomTopologyChanges_RestartZk fails on TC. - 
Fixes #4192.

Signed-off-by: Dmitriy Pavlov 
(cherry picked from commit b8653002f5f298173a81ce8c0a829b2affd679a6)

commit 430049a4939e9e5b2a414c65f74effd3a15d251f
Author: AMedvedev 
Date:   2018-07-13T10:51:25Z

IGNITE-8957 testFailGetLock() constantly fails. Last entry checkpoint 
history can be empty - Fixes #4334.

Signed-off-by: Ivan Rakov 

commit 4f7eb97c7d59c7dc7ecb64b3def20cfbd1c9431f
Author: ezagumennov 
Date:   2018-07-13T08:37:19Z

IGNITE-8738 Improved coordinator change information - Fixes #4198.

Signed-off-by: Alexey Goncharuk 
(cherry picked from commit 324e610564637d243155368908964976a771e383)

commit 8a87ecfca2aaac247aa9ca176155f5e41f6c4c5d
Author: Andrey Gura 
Date:   2018-07-05T16:40:26Z

IGNITE-8938 Failure handling for file-decompressor thread added

commit 18584d7d158bc285072fcf82ffeb5583a7fbbada
Author: Ivan Daschinskiy 
Date:   2018-07-16T10:14:00Z

IGNITE-8995 Proper handling of exceptions from scan query filter and 
transformer

Signed-off-by: Andrey Gura 

commit 1505b240c03e9fb626a6abf0009f37b0f2c105d5
Author: Slava Koptilin 
Date: 

[GitHub] ignite pull request #5394: IGNITE-10058

2018-11-14 Thread xtern
GitHub user xtern opened a pull request:

https://github.com/apache/ignite/pull/5394

IGNITE-10058



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xtern/ignite IGNITE-10058

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5394


commit 0acea8d6473e97ef558ec30e4f112c9df2406a40
Author: pereslegin-pa 
Date:   2018-11-14T12:56:54Z

IGNITE-10058 Fix draft.




---


[jira] [Created] (IGNITE-10249) TcpDiscoveryMultiThreadedTest#testCustomEventNodeRestart: Getting affinity for topology version earlier than affinity is calculated

2018-11-14 Thread Oleg Ignatenko (JIRA)
Oleg Ignatenko created IGNITE-10249:
---

 Summary: TcpDiscoveryMultiThreadedTest#testCustomEventNodeRestart: 
Getting affinity for topology version earlier than affinity is calculated
 Key: IGNITE-10249
 URL: https://issues.apache.org/jira/browse/IGNITE-10249
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6
Reporter: Oleg Ignatenko


TcpDiscoveryMultiThreadedTest#testCustomEventNodeRestart (in the current codebase 
muted by renaming to {{_testCustomEventNodeRestart}}) fails on TeamCity with an 
unexpected exception (could not reproduce locally): {noformat}
org.apache.ignite.IgniteException: Getting affinity for topology version 
earlier than affinity is calculated [locNode=TcpDiscoveryNode 
[id=ae293efb-8b41-4b86-b32b-74707081, addrs=ArrayList [127.0.0.1], 
sockAddrs=HashSet [/127.0.0.1:47502], discPort=47502, order=14, intOrder=10, 
lastExchangeTime=1542188985763, loc=true, ver=2.7.0#20181113-sha1:b186327e, 
isClient=false], grp=default, topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], head=AffinityTopologyVersion [topVer=18, minorTopVer=0], 
history=[AffinityTopologyVersion [topVer=14, minorTopVer=0], 
AffinityTopologyVersion [topVer=15, minorTopVer=0], AffinityTopologyVersion 
[topVer=18, minorTopVer=0]]]
Caused by: javax.cache.CacheException: Getting affinity for topology version 
earlier than affinity is calculated [locNode=TcpDiscoveryNode 
[id=ae293efb-8b41-4b86-b32b-74707081, addrs=ArrayList [127.0.0.1], 
sockAddrs=HashSet
{noformat}





Re: Time to remove automated messages from the devlist?

2018-11-14 Thread Vladimir Ozerov
Igniters,

I would say that "set up a filter" is not a solution. First, it is not
always technically possible. E.g., I use GMail and my dev-list emails
already go through a rule; I cannot extract the generated emails from the
overall flow with GMail's capabilities. But the more important thing is:
why should someone have to wade through that generated nightmare in the
first place?

Git messages are spam; it looks like everyone agrees with that. As far as
JIRA ticket creation goes, this is all about importance. When someone writes an
email to the devlist, it is likely to be an important topic requiring
attention. When someone creates a ticket, most likely it is either a bug, a
piece of an already discussed issue, or the like. In other words, the average
devlist user is likely to be interested in manual messages and very unlikely
to be interested in "Ticket created" messages. Unimportant information
overshadows the important. Let's continue discussing this.

As for Git - what should be done to remove Git messages from the list?

Vladimir.

On Tue, Nov 6, 2018 at 6:49 PM Dmitriy Pavlov  wrote:

> Petr, a manual digest is probably not needed because the Apache list allows
> subscribing to a digest: dev-digest-subsr...@ignite.apache.org, if I remember
> this correctly.
>
> Tue, Nov 6, 2018 at 18:28, Petr Ivanov :
>
> > Can JIRA notifications be united into some kind of daily digest?
> > Maybe we can add a special filter (new tasks / updates during the last 24 hours)
> > with a notification scheme?
> >
> >
> > > On 6 Nov 2018, at 18:15, Dmitriy Pavlov  wrote:
> > >
> > > I should mention I disagree with removing JIRA issues as the first step. It
> > > helps everyone to understand what other people are going to do in the
> > > project. You can always comment if it is not the best approach, find a
> > > duplicate issue, and you may offer help.
> > >
> > > PR notifications more or less duplicate JIRA (as 1 JIRA has 1..* PRs), so it
> > > may be OK to move Git's messages to notificati...@ignite.apache.org
> > >
> > > But we should keep JIRA and test failures.
> > >
> > > Tue, Nov 6, 2018 at 17:49, Alexey Kuznetsov :
> > >
> > >> Hi!
> > >>
> > >> I have a filter for e-mails from JIRA (very useful, I can quickly search for an
> > >> issue there without visiting JIRA).
> > >>
> > >> And I'm just deleting tons of e-mails from GitBox & about PRs.
> > >>
> > >> I don't know what we need them for.
> > >>
> > >> Maybe we could try to move the GitBox & PR-related mails first and see how it
> > >> goes?
> > >>
> > >> --
> > >> Alexey Kuznetsov
> > >>
> >
> >
>


[jira] [Created] (IGNITE-10250) Ignite Queue hangs after several read/write operations

2018-11-14 Thread Anton Dmitriev (JIRA)
Anton Dmitriev created IGNITE-10250:
---

 Summary: Ignite Queue hangs after several read/write operations
 Key: IGNITE-10250
 URL: https://issues.apache.org/jira/browse/IGNITE-10250
 Project: Ignite
  Issue Type: Bug
  Components: data structures
Affects Versions: 2.7
Reporter: Anton Dmitriev


Ignite Queue hangs after several read/write operations. Code to reproduce:

{noformat}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;

public class QueueHangReproducer {
    public static void main(String[] args) throws InterruptedException {
        try (Ignite ignite = Ignition.start()) {
            // Bounded queue with capacity 1.
            IgniteQueue<Integer> queue = ignite.queue("TEST_QUEUE", 1, new CollectionConfiguration());

            // Producer: blocks when the queue is full.
            new Thread(() -> {
                for (int i = 0;; i++) {
                    queue.put(i);
                    System.out.println("Put: " + i);
                }
            }).start();

            // Consumer: blocks when the queue is empty.
            new Thread(() -> {
                for (int i = 0;; i++) {
                    queue.take();
                    System.out.println("Take: " + i);
                }
            }).start();

            Thread.currentThread().join();
        }
    }
}
{noformat}





[GitHub] ignite pull request #5377: IGNITE-10228 Start multiple caches in parallel ma...

2018-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5377


---


[jira] [Created] (IGNITE-10251) Get rid of the code left from times when lateAffinity=false was supported

2018-11-14 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-10251:
--

 Summary: Get rid of the code left from times when 
lateAffinity=false was supported
 Key: IGNITE-10251
 URL: https://issues.apache.org/jira/browse/IGNITE-10251
 Project: Ignite
  Issue Type: Bug
Reporter: Alexei Scherbakov


This code can hide errors and lead to inefficient processing in some scenarios.





[jira] [Created] (IGNITE-10253) Merge SqlQuery logic with SqlFieldsQuery

2018-11-14 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-10253:


 Summary: Merge SqlQuery logic with SqlFieldsQuery
 Key: IGNITE-10253
 URL: https://issues.apache.org/jira/browse/IGNITE-10253
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Vladimir Ozerov
Assignee: Vladimir Ozerov
 Fix For: 2.8


Currently execution of a {{SqlQuery}} is very non-trivial. First, it is 
complex to understand. Second, it duplicates code. Third, and most important, 
it is buggy, because when new logic is added to {{SqlFieldsQuery}} it is very 
likely not added to {{SqlQuery}}. Moreover, we even have 
discrepancies between local and non-local modes, e.g. different value 
conversion logic.

We need to do the following:
1) Remove all {{SqlQuery}}-specific logic from {{GridQueryProcessor}} and 
{{IgniteH2Indexing}}
2) Make {{SqlQuery}} work as follows (see the sketch below):
- generate a {{SqlFieldsQuery}} from the {{SqlQuery}}
- execute it
- convert the results to K-V pairs
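For illustration, a rough sketch of the intended mapping using the public query API; the WHERE-clause rewrite shown here is simplified, the Person type is hypothetical, and the exact transformation is up to the implementation:

{noformat}
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class SqlQueryViaFieldsQuery {
    /** The same request expressed via SqlQuery and via the equivalent SqlFieldsQuery. */
    public static void example(IgniteCache<Integer, Person> cache) {
        // Today: key-value oriented query.
        List<Cache.Entry<Integer, Person>> kvRows =
            cache.query(new SqlQuery<Integer, Person>(Person.class, "age > ?").setArgs(30)).getAll();

        System.out.println("SqlQuery rows: " + kvRows.size());

        // Proposed internal path: a fields query selecting _key and _val...
        List<List<?>> rows = cache.query(
            new SqlFieldsQuery("SELECT _key, _val FROM Person WHERE age > ?").setArgs(30)).getAll();

        // ...whose rows are then converted back to K-V pairs.
        for (List<?> row : rows) {
            Integer key = (Integer)row.get(0);
            Person val = (Person)row.get(1);

            System.out.println(key + " -> " + val.age);
        }
    }

    /** Hypothetical query type used only for this illustration. */
    public static class Person {
        @QuerySqlField
        public int age;
    }
}
{noformat}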





[GitHub] ignite pull request #5395: IGNITE-10253

2018-11-14 Thread devozerov
GitHub user devozerov opened a pull request:

https://github.com/apache/ignite/pull/5395

IGNITE-10253



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10253

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5395.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5395


commit f9b0ad32766da4bc93447eef9aba6a864321a276
Author: devozerov 
Date:   2018-11-14T14:26:18Z

Done.

commit 03841bc9caa57711246f435d960beb9808885e4c
Author: devozerov 
Date:   2018-11-14T14:27:13Z

Done.




---


[jira] [Created] (IGNITE-10252) Cache.get() may be mapped to the node with partition state is "MOVING"

2018-11-14 Thread Dmitriy Govorukhin (JIRA)
Dmitriy Govorukhin created IGNITE-10252:
---

 Summary: Cache.get() may be mapped to the node with partition 
state is "MOVING"
 Key: IGNITE-10252
 URL: https://issues.apache.org/jira/browse/IGNITE-10252
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitriy Govorukhin


After IGNITE-5357 was implemented, in some cases a get may be mapped to a node whose 
partition state is "MOVING" for a PARTITIONED cache, and it may lead to some 
assertion errors (we do not allow reads from moving partitions). The original 
issue was only about the REPLICATED cache; why it was also implemented for the 
PARTITIONED cache is not clear.





[jira] [Created] (IGNITE-10254) MVCC: invokeAll may hangs on unstable topology.

2018-11-14 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10254:
-

 Summary: MVCC: invokeAll may hangs on unstable topology.
 Key: IGNITE-10254
 URL: https://issues.apache.org/jira/browse/IGNITE-10254
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Andrew Mashenkov


Reproduces in 
IgniteCacheEntryProcessorNodeJoinTest.testEntryProcessorNodeLeave()
with TRANSACTIONAL_SNAPSHOT cache mode.





Re: Suggestion to improve deadlock detection

2018-11-14 Thread Павлухин Иван
Hi,

Next part as promised. A working item for me is a deadlock detector
for MVCC transactions [1]. The message is structured in 2 parts. The first
is an analysis of the current state of affairs and possible ways to
go. The second is a proposed option. The first part is going to be not so
short, so some might prefer to skip it.

ANALYSIS
The immediate question is "why can't we use the existing deadlock
detector?". The difference between the classic and MVCC transaction
implementations is the answer. Currently a collection of IgniteTxEntry
is used for detection, but such a collection is not maintained for MVCC
transactions, so it will not work out of the box.
Also, it looks like the current distributed iterative approach cannot
be low-latency in the worst case, because it may perform many
network requests sequentially.
So, what options do we have? Generally we should choose between
centralized and distributed approaches. By a centralized approach I mean
a dedicated deadlock detector located on a single node.
In the centralized approach we can face difficulties related to
failover, as the node running the deadlock detector can fail. In the
distributed approach extra network messaging overhead can strike,
because different nodes participating in a deadlock can start
detection independently and send redundant messages. I see some
aspects which matter for choosing an implementation. Here they are,
with the approach that is better (roughly speaking) in parentheses:
* Detection latency (centralized).
* Messaging overhead (centralized).
* Failover (distributed).
And having a park of deadlock detectors does not sound very good. I
hope that it is possible to develop a common solution suitable for
both kinds of transactions. I suggest piloting the new solution with MVCC
and then adopting it for classic transactions.

PROPOSAL
Actually I propose to start with the centralized algorithm described by
Vladimir at the beginning of the thread. I will try to outline its main
points.
1. A single deadlock detector exists in the cluster and maintains the
transaction wait-for graph (WFG).
2. Each cluster node sends and invalidates wait-for edges to the detector.
3. The detector periodically searches for cycles in the WFG and, if a cycle
is found, chooses and aborts a victim transaction (see the sketch below).
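A minimal sketch of the cycle search the detector could run over a snapshot of its WFG; transaction IDs are plain longs for brevity, and victim selection policy and edge invalidation are left out:

{noformat}
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WaitForGraph {
    /**
     * Returns a transaction participating in some wait-for cycle, or null if the graph
     * is cycle-free. Plain DFS with a recursion stack; wfg maps a transaction to the
     * set of transactions it waits for.
     */
    public static Long findDeadlockedTx(Map<Long, Set<Long>> wfg) {
        Set<Long> visited = new HashSet<>();
        Set<Long> onStack = new HashSet<>();

        for (Long tx : wfg.keySet()) {
            Long victim = dfs(tx, wfg, visited, onStack);

            if (victim != null)
                return victim;
        }

        return null;
    }

    private static Long dfs(Long tx, Map<Long, Set<Long>> wfg, Set<Long> visited, Set<Long> onStack) {
        if (onStack.contains(tx))
            return tx; // Found a cycle; this transaction is a candidate victim.

        if (!visited.add(tx))
            return null; // Already explored from this transaction.

        onStack.add(tx);

        for (Long next : wfg.getOrDefault(tx, Collections.emptySet())) {
            Long victim = dfs(next, wfg, visited, onStack);

            if (victim != null)
                return victim;
        }

        onStack.remove(tx);

        return null;
    }
}
{noformat}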

Currently I have one fundamental question: is there a possibility of
falsely detected deadlocks because of concurrent WFG updates?
Of course there are many possible improvements and optimizations, but
I would like to start by discussing the key points.

Please share your thoughts!

[1] https://issues.apache.org/jira/browse/IGNITE-9322
Wed, Nov 14, 2018 at 15:47, ipavlukhin :
>
> Hi Igniters,
>
> I would like to resume the discussion about a deadlock detector. I start
> with a motivation for a further work on a subject. As I see current
> implementation (entry point IgniteTxManager.detectDeadlock) starts a
> detection only after a transaction was timed out. In my mind it is not
> very good from a product usability standpoint. As you know, in a
> situation of deadlock some keys become non-usable for an infinite amount
> of time. Currently the only way to work around it is configuring a
> timeout, but it could be rather tricky in practice to choose a
> proper/universal value for it. So, I see the main point as:
>
> Ability to break deadlocks without a need to configure timeouts explicitly.
>
> I will return soon with some thoughts about implementation. Meanwhile,
> does anybody have in mind any other usability points which I am missing?
> Or is there any alternative approaches?
>
> On 2017/11/21 08:32:02, Dmitriy Setrakyan wrote:
>  > On Mon, Nov 20, 2017 at 10:15 PM, Vladimir Ozerov wrote:
>  >
>  > > It doesn’t need all txes. Instead, other nodes will send info about
>  > > suspicious txes to it from time to time.
>  >
>  > I see your point, I think it might work.
>  >



-- 
Best regards,
Ivan Pavlukhin


[GitHub] ignite pull request #5384: IGNITE-10154 setIgnoreFailureTypes method removed...

2018-11-14 Thread agura
Github user agura closed the pull request at:

https://github.com/apache/ignite/pull/5384


---


[jira] [Created] (IGNITE-10255) Avoid history reservation on affinity change.

2018-11-14 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-10255:
--

 Summary: Avoid history reservation on affinity change.
 Key: IGNITE-10255
 URL: https://issues.apache.org/jira/browse/IGNITE-10255
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexei Scherbakov


Currently WAL history is reserved even if the exchange is triggered by an affinity 
change message, which means rebalance has completed and the assignment is ideal.

Reservation is not needed in such a case.





[GitHub] ignite pull request #5396: IGNITE-10226 Fixed wrong partition state recovery

2018-11-14 Thread Jokser
GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/5396

IGNITE-10226 Fixed wrong partition state recovery



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10226

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5396.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5396


commit 6fdc692458a63ea79bbfb5d554e6ec46ec8a2b90
Author: Pavel Kovalenko 
Date:   2018-11-14T16:07:04Z

IGNITE-10226 Partition shouldn't rewrite own state to WAL during crash 
recovery.

commit d930bd6185cd0ccb01a9089cb5e384b1797bd36a
Author: Pavel Kovalenko 
Date:   2018-11-14T16:40:03Z

IGNITE-10226 Partition should log to WAL current state on first update




---


[jira] [Created] (IGNITE-10256) Yardstick: output benchmark parameters to HTML report

2018-11-14 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-10256:


 Summary: Yardstick: output benchmark parameters to HTML report
 Key: IGNITE-10256
 URL: https://issues.apache.org/jira/browse/IGNITE-10256
 Project: Ignite
  Issue Type: Bug
  Components: yardstick
Affects Versions: 2.6
Reporter: Nikolay Izhikov
 Fix For: 2.8


For now, Yardstick doesn't output benchmark parameters to the resulting HTML report.
It would be useful to see these parameters in the report:

* benchmark parameters
* JVM parameters
* node (server, client) counts
* thread count
* etc.





[jira] [Created] (IGNITE-10257) Control.sh utility should request a SSL keystore password and SSL truststore password if necessary

2018-11-14 Thread Sergey Antonov (JIRA)
Sergey Antonov created IGNITE-10257:
---

 Summary: Control.sh utility should request a SSL keystore password 
and SSL truststore password if necessary
 Key: IGNITE-10257
 URL: https://issues.apache.org/jira/browse/IGNITE-10257
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Antonov








[jira] [Created] (IGNITE-10258) control.sh: make optional parameters order irrelevant

2018-11-14 Thread Sergey Antonov (JIRA)
Sergey Antonov created IGNITE-10258:
---

 Summary: control.sh: make optional parameters order irrelevant
 Key: IGNITE-10258
 URL: https://issues.apache.org/jira/browse/IGNITE-10258
 Project: Ignite
  Issue Type: Improvement
Reporter: Sergey Antonov
Assignee: Sergey Antonov
 Fix For: 2.8


{noformat}
IGNITE_HOME=`pwd` bin/control.sh --cache idle_verify --host 172.25.1.14 --dump
Control utility [ver. 2.5.1-p160#20181113-sha1:5f845ca7]
2018 Copyright(C) Apache Software Foundation
User: mshonichev

Check arguments.
Error: Unexpected argument: --dump{noformat}
{noformat}
IGNITE_HOME=`pwd` bin/control.sh --host 172.25.1.14 --cache idle_verify --dump 
Control utility [ver. 2.5.1-p160#20181113-sha1:5f845ca7] 2018 Copyright(C) 
Apache Software Foundation User: mshonichev 

 VisorIdleVerifyDumpTask successfully written output to 
'/storage/ssd/mshonichev/tiden/pme-181114-163416/test_pme_bench/ignite.server.1/work/idle-dump-2018-11-14T16-46-34_246.txt'
{noformat}
It is quite unexpected and unpleasant that re-ordering the optional arguments to 
the --cache idle_verify command makes it throw an error.
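For illustration, a sketch of order-independent option parsing of the kind the utility could use; this is generic Java, not the actual control.sh argument parser, and the option names are only examples:

{noformat}
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ArgParser {
    /** Options that take a value; any other "--" option is treated as a boolean flag such as --dump. */
    private static final Set<String> VALUED = new HashSet<>(Arrays.asList("--host", "--port", "--cache"));

    /**
     * Collects options into a map regardless of their order, so
     * "--cache idle_verify --host 172.25.1.14 --dump" and
     * "--host 172.25.1.14 --cache idle_verify --dump" produce the same result.
     */
    public static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();

        for (int i = 0; i < args.length; i++) {
            String arg = args[i];

            if (!arg.startsWith("--"))
                throw new IllegalArgumentException("Unexpected argument: " + arg);

            if (VALUED.contains(arg) && i + 1 < args.length)
                opts.put(arg, args[++i]);
            else
                opts.put(arg, "true");
        }

        return opts;
    }
}
{noformat}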





[jira] [Created] (IGNITE-10259) Javadoc online links are not visible

2018-11-14 Thread Cameron Steffen (JIRA)
Cameron Steffen created IGNITE-10259:


 Summary: Javadoc online links are not visible
 Key: IGNITE-10259
 URL: https://issues.apache.org/jira/browse/IGNITE-10259
 Project: Ignite
  Issue Type: Bug
  Components: documentation
Reporter: Cameron Steffen
 Attachments: image-2018-11-14-11-50-24-269.png

Some of the links at the top of the online Javadoc pages are not visible. This seems 
to be caused by some images failing to download with a 404.

!image-2018-11-14-11-49-49-990.png!





Re: Suggestion to improve deadlock detection

2018-11-14 Thread Vladimir Ozerov
Ivan,

This is an interesting question. I think we should spend some time on formal
verification of whether this algorithm works or not. Several articles you may
use as a starting point: [1], [2]. From what I understand, Ignite falls
into the "AND" model, and the currently implemented algorithm is a variation of
the "edge-chasing" approach as per Chandy, Misra and Haas [3], which is *proven
to be correct* in that it both detects deadlocks when they are present and
does not produce false positives. But it might be too heavy for a system
under contention.

We need to search for a formal proof of correctness of the proposed
algorithm. This area is already researched thoroughly enough, so we should
be able to find an answer quickly.

Vladimir.

[1] http://www.cse.scu.edu/~jholliday/dd_9_16.htm
[2] https://www.cs.uic.edu/~ajayk/Chapter10.pdf
[3]
https://www.cs.utexas.edu/users/misra/scannedPdf.dir/DistrDeadlockDetection.pdf

On Wed, Nov 14, 2018 at 6:55 PM Павлухин Иван  wrote:

> Hi,
>
> Next part as promised. A working item for me is a deadlock detector
> for MVCC transactions [1]. The message is structured in 2 parts. First
> is an analysis of the current state of affairs and possible options to
> go. Second is a proposed option. First part is going to be not so
> short so some might prefer to skip it.
>
> ANALYSIS
> The immediate question is "why we cannot use an existing deadlock
> detector?". The differences between classic and MVCC transactions
> implementation is the answer. Currently a collection of IgniteTxEntry
> is used for detection. But such collection is not maintained for MVCC
> transactions. So, it will not work out of box.
> Also it looks like that current distributed iterative approach cannot
> be low latency it the worst case because of doing possibly many
> network requests sequentially.
> So, what options do we have? Generally we should choose between
> centralized and distributed approaches. By centralized approach I mean
> existence of a dedicated deadlock detector located on a single node.
> In the centralized approach we can face difficulties related to
> failover as a node running deadlock detector can fail. In the
> distributed approach extra network messaging overhead can strike
> because different nodes participating in a deadlock can start
> detection independently and send redundant messages. I see some
> aspects which make sense for choosing implementation. Here they are
> with an approach that is better (roughly speaking) in parentheses:
> * Detection latency (centralized).
> * Messaging overhead (centralized).
> * Failover (distributed).
> And also having a park of deadlock detectors sounds not very good. I
> hope that it is possible to develop a common solution suitable for
> both kinds of transactions. I suggest to pilot new solution with MVCC
> and then adopt it for classic transactions.
>
> PROPOSAL
> Actually I propose to start with an centralized algorithm described by
> Vladimir in the beginning of the thread. I will try to outline main
> points of it.
> 1. Single deadlock detector exists in the cluster which maintains
> transaction wait-for graph (WFG).
> 2. Each cluster node sends and invalidates wait-for edges to the detector.
> 3. The detector periodically searches cycles in WFG and chooses and
> aborts a victim transaction if cycle is found.
>
> Currently I have one fundamental question. Is there a possibility of
> false detected deadlocks because of concurrent WFG updates?
> Of course there are many points of improvements and optimizations. But
> I would like to start from discussing key points.
>
> Please share your thoughts!
>
> [1] https://issues.apache.org/jira/browse/IGNITE-9322
> Wed, Nov 14, 2018 at 15:47, ipavlukhin :
> >
> > Hi Igniters,
> >
> > I would like to resume the discussion about a deadlock detector. I start
> > with a motivation for a further work on a subject. As I see current
> > implementation (entry point IgniteTxManager.detectDeadlock) starts a
> > detection only after a transaction was timed out. In my mind it is not
> > very good from a product usability standpoint. As you know, in a
> > situation of deadlock some keys become non-usable for an infinite amount
> > of time. Currently the only way to work around it is configuring a
> > timeout, but it could be rather tricky in practice to choose a
> > proper/universal value for it. So, I see the main point as:
> >
> > Ability to break deadlocks without a need to configure timeouts
> explicitly.
> >
> > I will return soon with some thoughts about implementation. Meanwhile,
> > does anybody have in mind any other usability points which I am missing?
> > Or is there any alternative approaches?
> >
> > On 2017/11/21 08:32:02, Dmitriy Setrakyan wrote:
> >  > On Mon, Nov 20, 2017 at 10:15 PM, Vladimir Ozerov wrote:
> >  >
> >  > > It doesn’t need all txes. Instead, other nodes will send info about
> >  > > suspicious txes to it from time to time.
> >  >
> >  > I see your point, I think it 

[jira] [Created] (IGNITE-10260) MVCC: GetAndPutIfAbsent operation result no result.

2018-11-14 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10260:
-

 Summary: MVCC: GetAndPutIfAbsent operation result no result.
 Key: IGNITE-10260
 URL: https://issues.apache.org/jira/browse/IGNITE-10260
 Project: Ignite
  Issue Type: Bug
  Components: cache, mvcc
Reporter: Andrew Mashenkov


The following tests (but maybe not the only ones) fail in MVCC cache mode:

CachePutIfAbsentTest.testTxConflictGetAndPutIfAbsent()
CacheEnumOperationsSingleNodeTest.testMvccTx()





[jira] [Created] (IGNITE-10261) MVCC: Put opearation hangs if rebalance disabled.

2018-11-14 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10261:
-

 Summary: MVCC: Put opearation hangs if rebalance disabled.
 Key: IGNITE-10261
 URL: https://issues.apache.org/jira/browse/IGNITE-10261
 Project: Ignite
  Issue Type: Bug
Reporter: Andrew Mashenkov


ForceKey response processing fails with a ClassCastException in MVCC mode, which 
causes the test to hang. See GridCacheDhtPreloadPutGetSelfTest.testPutGetNone1().

 

java.lang.ClassCastException: 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry 
cannot be cast to 
org.apache.ignite.internal.processors.cache.mvcc.MvccVersionAware
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture$MiniFuture.onResult(GridDhtForceKeysFuture.java:545)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture.onResult(GridDhtForceKeysFuture.java:202)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processForceKeyResponse(GridDhtCacheAdapter.java:180)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$11.onMessage(GridDhtTransactionalCacheAdapter.java:208)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$11.onMessage(GridDhtTransactionalCacheAdapter.java:206)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$MessageHandler.apply(GridDhtCacheAdapter.java:1434)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$MessageHandler.apply(GridDhtCacheAdapter.java:1416)
 at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)





[jira] [Created] (IGNITE-10262) MVCC: Some client operation may hangs if all data nodes leaft the grid.

2018-11-14 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10262:
-

 Summary: MVCC: Some client operations may hang if all data nodes 
left the grid.
 Key: IGNITE-10262
 URL: https://issues.apache.org/jira/browse/IGNITE-10262
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7
Reporter: Andrew Mashenkov
 Fix For: 2.8


IgniteClientCacheStartFailoverTest.testClientStartLastServerFailsMvccTx() hangs 
forever.

Client put/remove operations should throw CacheServerNotFoundException if there 
are no data servers in the grid, but they can hang in some cases.
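
A minimal sketch of the expected behavior, with placeholder class and method names (only
CacheServerNotFoundException and the put call come from the ticket):

{noformat}
import javax.cache.CacheException;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheServerNotFoundException;

public class ClientPutWithoutServersSketch {
    /** Expected outcome when all data nodes have left: fail fast instead of hanging. */
    static void putOrReport(IgniteCache<Integer, Integer> cache) {
        try {
            cache.put(1, 1);
        }
        catch (CacheServerNotFoundException e) {
            // The behavior the test expects.
            System.out.println("No data nodes for cache: " + e.getMessage());
        }
        catch (CacheException e) {
            // Other topology-related failures may arrive wrapped in a generic CacheException.
            System.out.println("Cache operation failed: " + e.getMessage());
        }
    }
}
{noformat}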



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10263) MVCC: Concurrent cache stop can cause vacuum failure.

2018-11-14 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10263:
-

 Summary: MVCC: Concurrent cache stop can cause vacuum failure.
 Key: IGNITE-10263
 URL: https://issues.apache.org/jira/browse/IGNITE-10263
 Project: Ignite
  Issue Type: Bug
  Components: cache, mvcc
Reporter: Andrew Mashenkov


The issue can be easily reproduced with IgniteCacheIncrementTxTest in Mvcc mode.

 

Vacuum.cleanup() fails on cctx.gate().enter() if the cache is stopped concurrently.

The ctx.gate().enter() method fails with IllegalStateException right after the readLock 
has been taken, so this lock is never released and prevents the writeLock from being 
taken on node stop.

Replacing enter() with enterIfNotStopped() resolves the issue, but most likely 
we should release the readLock on failure inside gateway.enter().
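
A minimal sketch of the suggested fix direction, using a plain ReentrantReadWriteLock
instead of the actual GridCacheGateway internals (class and field names are illustrative):

{noformat}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class GatewaySketch {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    private volatile boolean stopped;

    public void enter() {
        rwLock.readLock().lock();

        if (stopped) {
            // Release on failure so a failed enter() does not leak the read lock
            // and block the write lock taken on cache/node stop.
            rwLock.readLock().unlock();

            throw new IllegalStateException("Cache has been stopped.");
        }
    }

    public void leave() {
        rwLock.readLock().unlock();
    }

    public void onStop() {
        stopped = true;

        // Waits until every successful enter() has been matched by leave().
        rwLock.writeLock().lock();

        try {
            // Stop-time cleanup would go here.
        }
        finally {
            rwLock.writeLock().unlock();
        }
    }
}
{noformat}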



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10264) MVCC: Enlist request failure on backup can cause grid hanging.

2018-11-14 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10264:
-

 Summary: MVCC: Enlist request failure on backup can cause grid 
hanging.
 Key: IGNITE-10264
 URL: https://issues.apache.org/jira/browse/IGNITE-10264
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Andrew Mashenkov


See the stack trace below: the runtime ClusterTopologyException is not caught and 
causes the transaction to hang.

It seems we should throw some meaningful checked exception and propagate it to the 
primary node.

 
{noformat}
[2018-11-14 
22:26:37,099][ERROR][sys-stripe-3-#10280%cache.IgniteCacheIncrementTxTest7%][GridCacheIoManager]
 Failed to process message [senderId=3774798b-3cbc-4ae1-95d1-745dd371, 
messageType=class 
o.a.i.i.processors.cache.distributed.dht.GridDhtTxQueryFirstEnlistRequest]
 class org.apache.ignite.cluster.ClusterTopologyException: Can not reserve 
partition. Please retry on stable topology.
 at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.mvccEnlistBatch(IgniteTxHandler.java:1865)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.processDhtTxQueryEnlistRequest(GridDhtTransactionalCacheAdapter.java:2301)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.access$1100(GridDhtTransactionalCacheAdapter.java:112)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$17.apply(GridDhtTransactionalCacheAdapter.java:250)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$17.apply(GridDhtTransactionalCacheAdapter.java:248)
 at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
 at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
 at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
 at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
 at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:100)
 at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1$2$1.run(GridCacheIoManager.java:274)
 at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
 at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
 at java.lang.Thread.run(Thread.java:748){noformat}
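
The suggestion above could look roughly like the sketch below; EnlistResponse and
EnlistStep are placeholder types, not Ignite classes, and only IgniteCheckedException
and the wrapped runtime exception come from the ticket:

{noformat}
import org.apache.ignite.IgniteCheckedException;

public class BackupEnlistSketch {
    /** Placeholder for the response message sent back to the primary node. */
    static class EnlistResponse {
        IgniteCheckedException error;
    }

    /** Placeholder for the enlist step that may fail while reserving a partition. */
    interface EnlistStep {
        void run();
    }

    static EnlistResponse process(EnlistStep step) {
        EnlistResponse res = new EnlistResponse();

        try {
            step.run();
        }
        catch (RuntimeException e) {
            // E.g. the ClusterTopologyException from the trace above: convert it into a
            // checked error carried by the response, so the primary can fail the
            // transaction instead of waiting forever.
            res.error = new IgniteCheckedException("Failed to enlist on backup node.", e);
        }

        return res;
    }
}
{noformat}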
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10265) PDOStatement::rowCount returns 0

2018-11-14 Thread Roman Shtykh (JIRA)
Roman Shtykh created IGNITE-10265:
-

 Summary: PDOStatement::rowCount returns 0
 Key: IGNITE-10265
 URL: https://issues.apache.org/jira/browse/IGNITE-10265
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 2.6
 Environment: CentOS, unixODBC
Reporter: Roman Shtykh


How to reproduce:
{noformat}
$ cat ~/odbc.php
<?php
try {
    // NOTE: the connection setup was garbled in the original report; the DSN below
    // is a placeholder, not the value actually used.
    $dbh = new PDO('odbc:...');
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $sql = 'SELECT * FROM "Person".Person';
    $statement = $dbh->prepare($sql);
    $statement->execute();

    $data = $statement->fetchAll();

    foreach ($data as $row) {
        var_dump($row);
    }

    echo "array Count : " . count($data) . "\n";
    echo "rowCount : " . $statement->rowCount() . "\n";
} catch (PDOException $e) {
    print "Error!: " . $e->getMessage() . "\n";
    die();
}

$ php ~/odbc.php

# Using PDO
array(10) {
  ["ORGID"]=>
  string(1) "1"
  [0]=>
  string(1) "1"
  ["FIRSTNAME"]=>
  string(4) "John"
  [1]=>
  string(4) "John"
  ["LASTNAME"]=>
  string(3) "Doe"
  [2]=>
  string(3) "Doe"
  ["RESUME"]=>
  string(14) "Master Degree."
  [3]=>
  string(14) "Master Degree."
  ["SALARY"]=>
  string(4) "2200"
  [4]=>
  string(4) "2200"
}
・
・
・
array(10) {
  ["ORGID"]=>
  string(1) "2"
  [0]=>
  string(1) "2"
  ["FIRSTNAME"]=>
  string(4) "Mary"
  [1]=>
  string(4) "Mary"
  ["LASTNAME"]=>
  string(5) "Major"
  [2]=>
  string(5) "Major"
  ["RESUME"]=>
  string(16) "Bachelor Degree."
  [3]=>
  string(16) "Bachelor Degree."
  ["SALARY"]=>
  string(4) "1200"
  [4]=>
  string(4) "1200"
}
array Count : 6
rowCount : 0
{noformat}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5397: IGNITE-9517: Replace uses of ConcurrentHashSet wi...

2018-11-14 Thread shroman
GitHub user shroman opened a pull request:

https://github.com/apache/ignite/pull/5397

IGNITE-9517: Replace uses of ConcurrentHashSet with GridConcurrentHas…

…hSet in tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shroman/ignite IGNITE-9517

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5397.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5397


commit 981afa9ccd53bcd3db5e6e31eac7c63e44aebafb
Author: shroman 
Date:   2018-11-15T06:34:11Z

IGNITE-9517: Replace uses of ConcurrentHashSet with GridConcurrentHashSet 
in tests.




---


[jira] [Created] (IGNITE-10266) VisorCMD: Do not allow executing collect and reset lost partitions for the system cache.

2018-11-14 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-10266:
--

 Summary: VisorCMD: Do not allow executing collect and reset lost 
partitions for the system cache.
 Key: IGNITE-10266
 URL: https://issues.apache.org/jira/browse/IGNITE-10266
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Vasiliy Sisko
Assignee: Vasiliy Sisko


VisorCMD should show a valid error when trying to show or reset lost partitions of the system cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)