Re: IGNITE-7285 Add default query timeout

2019-04-29 Thread Saikat Maitra
Hi Ivan,

Yes, I checked this CacheQuery default value
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/CacheQuery.java#L200

Also, Andrew recommended the same in review feedback.

https://github.com/apache/ignite/pull/6490#discussion_r277730394

Regards,
Saikat
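A minimal sketch of the timeout semantics discussed in this thread (a hypothetical helper, not the PR code): a per-query timeout overrides the configuration-wide default, and a value of 0 keeps the current behaviour of executing indefinitely.

```java
// Hypothetical sketch of the proposed semantics (not the actual PR code):
// a per-query timeout overrides the configuration-wide default, and a
// value of 0 means "no timeout", i.e. the query may run indefinitely.
public class QueryTimeoutResolver {
    /** 0 keeps the pre-existing behaviour: no timeout at all. */
    public static final long NO_TIMEOUT = 0L;

    /** Resolves the effective timeout (ms) for a single query execution. */
    public static long effectiveTimeout(long queryTimeout, long dfltQueryTimeout) {
        // An explicit per-query timeout always wins over the global default.
        if (queryTimeout != NO_TIMEOUT)
            return queryTimeout;

        return dfltQueryTimeout; // May itself be 0, i.e. indefinite.
    }

    public static void main(String[] args) {
        System.out.println(effectiveTimeout(0, 5_000));     // 5000
        System.out.println(effectiveTimeout(1_000, 5_000)); // 1000
        System.out.println(effectiveTimeout(0, 0));         // 0
    }
}
```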


On Mon, Apr 29, 2019 at 3:18 AM Павлухин Иван  wrote:

> Hi Saikat,
>
> Is compatibility with previous versions the reason for an indefinite
> timeout by default?
>
> Sat, Apr 27, 2019 at 16:58, Saikat Maitra :
> >
> > Hi Alexey, Ivan, Andrew
> >
> > I think we can provide an option to configure defaultQueryTimeout in
> > IgniteConfiguration with a default value of 0, meaning that if it is not
> > set, queries will execute indefinitely. The user can then set this value
> > based on the application's preferences and use it in addition to the SQL
> > query timeout.
> >
> > I have updated the PR accordingly.
> >
> > Please review and let me know if any changes are required.
> >
> > Regards,
> > Saikat
> >
> > On Wed, Apr 24, 2019 at 4:33 AM Alexey Kuznetsov 
> > wrote:
> >
> > > Hi Saikat and Ivan,
> > >
> > > I think that properties related to SQL should not be configured on
> > > caches.
> > > We have already put a lot of effort into decoupling SQL from caches.
> > >
> > > I think we should develop some kind of "Queries" options on Ignite
> > > configuration.
> > >
> > > What do you think?
> > >
> > >
> > > On Wed, Apr 24, 2019 at 3:22 PM Павлухин Иван 
> wrote:
> > >
> > > > Hi Saikat,
> > > >
> > > > I think that we should have a discussion and choose a place where a
> > > > "default query timeout" property will be configured.
> > > >
> > > > Generally, I think that the real (user) problem is the possibility for
> > > > queries to execute indefinitely. And I have no doubts that we can
> > > > improve there. There could be several implementation strategies. One
> > > > is adding a property to CacheConfiguration. But it opens various
> > > > questions. E.g. how should it work if we execute an SQL JOIN spanning
> > > > multiple tables (caches)? Also, I am concerned about queries executed
> > > > not via the cache.query() method. We have multiple alternative options,
> > > > e.g. thin clients (IgniteClient.query) or JDBC. I believe that
> > > > introducing a default timeout for all of them is not a bad idea.
> > > >
> > > > What do you think?
> > > >
> > > > Tue, Apr 23, 2019 at 03:01, Saikat Maitra  >:
> > > > >
> > > > > Hi Ivan,
> > > > >
> > > > > Thank you for your email. My understanding from the Jira issue was
> > > > > that it will be a cache-level configuration for the query default
> > > > > timeout.
> > > > >
> > > > > I need more info on the usage of this config property. If it is
> > > > > shared in this Jira issue, I can make the changes; if there is a
> > > > > separate Jira issue, I can assign it to myself.
> > > > >
> > > > >
> > > > > Regards,
> > > > > Saikat
> > > > >
> > > > > On Mon, Apr 22, 2019 at 5:31 AM Павлухин Иван  >
> > > > wrote:
> > > > >
> > > > > > Hi Saikat,
> > > > > >
> > > > > > I see that a configuration property is added in PR but I do not
> see
> > > > > > how the property is used. Was it done intentionally?
> > > > > >
> > > > > > Also, we need to decide whether such timeout should be
> configured per
> > > > > > cache or for all caches in one place. For example, we have
> already
> > > > > > TransactionConfiguration#setDefaultTxTimeout which is a global
> one.
> > > > > >
> > > > > > Share your thoughts.
> > > > > >
> > > > > > Sun, Apr 21, 2019 at 00:38, Saikat Maitra <
> saikat.mai...@gmail.com
> > > >:
> > > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > I have raised a PR for the below issue.
> > > > > > >
> > > > > > > IGNITE-7285 Add default query timeout
> > > > > > >
> > > > > > > PR - https://github.com/apache/ignite/pull/6490
> > > > > > >
> > > > > > > Please take a look and share feedback.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Saikat
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Best regards,
> > > > > > Ivan Pavlukhin
> > > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > > Ivan Pavlukhin
> > > >
> > >
> > >
> > > --
> > > Alexey Kuznetsov
> > >
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


[MTCGA]: new failures in builds [3707580] needs to be handled

2019-04-29 Thread dpavlov . tasks
Hi Igniters,

 I've detected a new issue on TeamCity to be handled. You are more than 
welcome to help.

 If your changes can lead to this failure(s): We're grateful that you 
volunteered to contribute to this project, but things change and you 
may no longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue and 
fix the test failures, or step down so a committer may revert your commit. 

 *New stable failure of a flaky test in master 
ConsoleRedirectTest.TestMultipleDomains 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5435174673577554141&branch=%3Cdefault%3E&tab=testDetails
 No changes in the build

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 21:37:39 29-04-2019 


Re: "Idle verify" to "Online verify"

2019-04-29 Thread Ivan Rakov

But how to keep this hash?

I think, we can just adopt way of storing partition update counters.
Update counters are:
1) Kept and updated in heap, see 
IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#pCntr (accessed during 
regular cache operations, no page replacement latency issues)
2) Synchronized with page memory (and with disk) on every checkpoint, 
see GridCacheOffheapManager#saveStoreMetadata

3) Stored in partition meta page, see PagePartitionMetaIO#setUpdateCounter
4) On node restart, we init the on-heap counter with the value from disk (as 
of the last checkpoint) and update it to the latest value during WAL 
logical records replay


2) PME is a rare operation on a production cluster, but it seems we have 
to check consistency on a regular basis.
Since we have to finish all operations before the check, should we 
have a fake PME for maintenance checks in this case?
From my experience, PME happens on prod clusters from time to time 
(several times per week), which can be enough. In case it's needed to 
check consistency more often than regular PMEs occur, we can implement 
a command that will trigger a fake PME for consistency checking.


Best Regards,
Ivan Rakov

On 29.04.2019 18:53, Anton Vinogradov wrote:

Ivan, thanks for the analysis!

>> With a pre-calculated partition hash value, we can 
automatically detect inconsistent partitions on every PME.

Great idea, seems this covers all broken sync cases.

It will check alive nodes in case the primary failed immediately
and will check a rejoining node once it has finished rebalancing (PME on 
becoming an owner).
A recovered cluster will be checked on the activation PME (or even before 
that?).

Also, a warmed cluster will still be warm after the check.

Have I missed any cases leading to broken sync, other than bugs?

1) But how to keep this hash?
- It should be automatically persisted on each checkpoint (it should 
not require recalculation on restore; snapshots should be covered too) 
(and covered by WAL?).
- It should always be available in RAM for every partition (even for 
cold partitions never updated/read on this node) to be immediately 
used once all operations are done on PME.


Can we have special pages to keep such hashes and never allow their 
eviction?


2) PME is a rare operation on a production cluster, but it seems we have 
to check consistency on a regular basis.
Since we have to finish all operations before the check, should we 
have a fake PME for maintenance checks in this case?


On Mon, Apr 29, 2019 at 4:59 PM Ivan Rakov > wrote:


Hi Anton,

Thanks for sharing your ideas.
I think your approach should work in general. I'll just share my
concerns about possible issues that may come up.

1) Equality of update counters doesn't imply equality of
partitions content under load.
For every update, primary node generates update counter and then
update is delivered to backup node and gets applied with the
corresponding update counter. For example, there are two
transactions (A and B) that update partition X by the following
scenario:
- A updates key1 in partition X on primary node and increments
counter to 10
- B updates key2 in partition X on primary node and increments
counter to 11
- While A is still updating other keys, B is finally committed
- Update of key2 arrives to backup node and sets update counter to 11
Observer will see equal update counters (11), but update of key 1
is still missing in the backup partition.
This is a fundamental problem which is being solved here:
https://issues.apache.org/jira/browse/IGNITE-10078
"Online verify" should operate with new complex update counters
which take such "update holes" into account. Otherwise, online
verify may provide false-positive inconsistency reports.
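The two-transaction scenario above can be reduced to a few lines of plain Java (a toy model, not Ignite code): only B's update, carrying counter 11, has reached the backup, so the counters match while the partition contents differ.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the race described above (all names hypothetical).
public class CounterRace {
    /** Returns {countersEqual, contentsEqual} after the scenario above. */
    static boolean[] simulate() {
        Map<String, String> primary = new HashMap<>();
        Map<String, String> backup = new HashMap<>();

        // Tx A updates key1 on primary, counter -> 10; delivery to backup is delayed.
        primary.put("key1", "a");

        // Tx B updates key2 on primary, counter -> 11, and commits first.
        primary.put("key2", "b");
        long primaryCntr = 11;

        // Only B's update has arrived at the backup; it carries counter 11.
        backup.put("key2", "b");
        long backupCntr = 11;

        return new boolean[] {primaryCntr == backupCntr, primary.equals(backup)};
    }

    public static void main(String[] args) {
        boolean[] res = simulate();
        System.out.println(res[0]); // true: both counters are 11
        System.out.println(res[1]); // false: key1 is missing on the backup
    }
}
```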

2) Acquisition and comparison of update counters is fast, but
partition hash calculation is long. We should check that update
counter remains unchanged after every K keys handled.

3)


Another hope is that we'll be able to pause/continue scan, for
example, we'll check 1/3 partitions today, 1/3 tomorrow, and in
three days we'll check the whole cluster.

Totally makes sense.
We may find ourselves in a situation where some "hot" partitions
are still unprocessed, and every next attempt to calculate the
partition hash fails due to another concurrent update. We should
be able to track the progress of validation (% of calculation time
wasted due to concurrent operations may be a good metric, 100% is
the worst case) and provide an option to stop/pause the activity.
I think pause should return an "intermediate results report" with
information about which partitions have been successfully checked.
With such a report, we can resume the activity later: partitions from
the report will be just skipped.

4)


Since "Idle verify" uses regular page memory, I assume it replaces hot
data with persisted data.

Re: IgniteDataFrame SparkSQL OR clause return incorrect result

2019-04-29 Thread Nikolay Izhikov
Hello, alex.

Thanks for reporting this.

1. You can file a bug in the Ignite Jira.
2. It would be even better if you could write a simple, self-contained
reproducer for your problem.
3. Is the issue still reproducible if you turn off Ignite query optimization?

Mon, Apr 29, 2019, 19:29 alexcwyu :

> I am using IgniteDataFrame and Spark SQL to query the dataframe.
> spark: 2.3.2
> ignite: 2.7.0
>
> I found a bug in SparkSQL while using Ignite.
>
> select count(*) from risk where val_date = '2019-04-26' and portf_id =
> 27315
> -- correctly returns 11 rows
>
> select count(*) from risk where val_date = '2019-04-26' and portf_id =
> 27315 or portf_id = 14041
> -- correctly returns 494 rows
>
> select count(*) from risk where val_date = '2019-04-26' and (portf_id =
> 27315 or portf_id = 14041)
> -- expected to return 505 rows but it returns >7000 rows
>
> If I turn off Ignite, the row count with the OR clause is correct.
>
> Is there anything I can do to further debug / pinpoint the issue?
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


IgniteDataFrame SparkSQL OR clause return incorrect result

2019-04-29 Thread alexcwyu
I am using IgniteDataFrame and Spark SQL to query the dataframe.
spark: 2.3.2
ignite: 2.7.0

I found a bug in SparkSQL while using Ignite.

select count(*) from risk where val_date = '2019-04-26' and portf_id =
27315
-- correctly returns 11 rows

select count(*) from risk where val_date = '2019-04-26' and portf_id =
27315 or portf_id = 14041
-- correctly returns 494 rows

select count(*) from risk where val_date = '2019-04-26' and (portf_id =
27315 or portf_id = 14041)
-- expected to return 505 rows but it returns >7000 rows

If I turn off Ignite, the row count with the OR clause is correct.

Is there anything I can do to further debug / pinpoint the issue?



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: "Idle verify" to "Online verify"

2019-04-29 Thread Anton Vinogradov
Ivan, thanks for the analysis!

>> With a pre-calculated partition hash value, we can automatically
detect inconsistent partitions on every PME.
Great idea, seems this covers all broken sync cases.

It will check alive nodes in case the primary failed immediately
and will check a rejoining node once it has finished rebalancing (PME on
becoming an owner).
A recovered cluster will be checked on the activation PME (or even before that?).
Also, a warmed cluster will still be warm after the check.

Have I missed any cases leading to broken sync, other than bugs?

1) But how to keep this hash?
- It should be automatically persisted on each checkpoint (it should not
require recalculation on restore; snapshots should be covered too) (and
covered by WAL?).
- It should always be available in RAM for every partition (even for cold
partitions never updated/read on this node) to be immediately used once
all operations are done on PME.

Can we have special pages to keep such hashes and never allow their
eviction?

2) PME is a rare operation on a production cluster, but it seems we have to
check consistency on a regular basis.
Since we have to finish all operations before the check, should we have a
fake PME for maintenance checks in this case?

On Mon, Apr 29, 2019 at 4:59 PM Ivan Rakov  wrote:

> Hi Anton,
>
> Thanks for sharing your ideas.
> I think your approach should work in general. I'll just share my concerns
> about possible issues that may come up.
>
> 1) Equality of update counters doesn't imply equality of partitions
> content under load.
> For every update, primary node generates update counter and then update is
> delivered to backup node and gets applied with the corresponding update
> counter. For example, there are two transactions (A and B) that update
> partition X by the following scenario:
> - A updates key1 in partition X on primary node and increments counter to
> 10
> - B updates key2 in partition X on primary node and increments counter to
> 11
> - While A is still updating other keys, B is finally committed
> - Update of key2 arrives to backup node and sets update counter to 11
> Observer will see equal update counters (11), but update of key 1 is still
> missing in the backup partition.
> This is a fundamental problem which is being solved here:
> https://issues.apache.org/jira/browse/IGNITE-10078
> "Online verify" should operate with new complex update counters which take
> such "update holes" into account. Otherwise, online verify may provide
> false-positive inconsistency reports.
>
> 2) Acquisition and comparison of update counters is fast, but partition
> hash calculation is long. We should check that update counter remains
> unchanged after every K keys handled.
>
> 3)
>
> Another hope is that we'll be able to pause/continue scan, for example,
> we'll check 1/3 partitions today, 1/3 tomorrow, and in three days we'll
> check the whole cluster.
>
> Totally makes sense.
> We may find ourselves in a situation where some "hot" partitions are
> still unprocessed, and every next attempt to calculate the partition hash
> fails due to another concurrent update. We should be able to track the
> progress of validation (% of calculation time wasted due to concurrent
> operations may be a good metric, 100% is the worst case) and provide an
> option to stop/pause the activity.
> I think pause should return an "intermediate results report" with
> information about which partitions have been successfully checked. With
> such a report, we can resume the activity later: partitions from the
> report will be just skipped.
>
> 4)
>
> Since "Idle verify" uses regular page memory, I assume it replaces hot data
> with persisted data.
> So, we have to warm up the cluster after each check.
> Are there any chances to check without cooling the cluster?
>
> I don't see an easy way to achieve it with our page memory architecture.
> We definitely can't just read pages from disk directly: we need to
> synchronize page access with concurrent update operations and checkpoints.
> From my point of view, the correct way to solve this issue is improving
> our page replacement [1] mechanics by making it truly scan-resistant.
>
> P. S. There's another possible way of achieving online verify: instead of
> on-demand hash calculation, we can always keep an up-to-date hash value for
> every partition. We'll need to update the hash on every insert/update/remove
> operation, but there will be no reordering issues, since the function that we
> use for aggregating hash results (+) is commutative. With a
> pre-calculated partition hash value, we can automatically detect
> inconsistent partitions on every PME. What do you think?
>
> [1] -
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Durable+Memory+-+under+the+hood#IgniteDurableMemory-underthehood-Pagereplacement(rotationwithdisk)
>
> Best Regards,
> Ivan Rakov
>
> On 29.04.2019 12:20, Anton Vinogradov wrote:
>
> Igniters and especially Ivan Rakov,
>
"Idle verify" [1] is a really cool tool to make sure that the cluster is
consistent.

[jira] [Created] (IGNITE-11823) Client Heap Memory in Cluster

2019-04-29 Thread Venkatesh D J (JIRA)
Venkatesh D J created IGNITE-11823:
--

 Summary: Client Heap Memory in Cluster
 Key: IGNITE-11823
 URL: https://issues.apache.org/jira/browse/IGNITE-11823
 Project: Ignite
  Issue Type: Test
Reporter: Venkatesh D J


Hello,

I've created a cluster with four server nodes (12 GB memory each), 48 GB in 
total, each with a 1 GB heap.
It contains only one cache (partitioned) with a single backup, and I created a 
client node with a 14 GB heap size.
When I tried to fill the cache fully, I noticed that the server nodes started 
to crash due to out of memory when usage reached around 33-35 GB.

Server nodes crashed even though 13-15 GB of free space was left.
Did the crash occur due to the heap size of the client node?
Can you please clarify?

Please let me know in case of any additional details.
Thanks in Advance

Regards,
Venkatesh D J



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: "Idle verify" to "Online verify"

2019-04-29 Thread Ivan Rakov

Hi Anton,

Thanks for sharing your ideas.
I think your approach should work in general. I'll just share my 
concerns about possible issues that may come up.


1) Equality of update counters doesn't imply equality of partitions 
content under load.
For every update, primary node generates update counter and then update 
is delivered to backup node and gets applied with the corresponding 
update counter. For example, there are two transactions (A and B) that 
update partition X by the following scenario:

- A updates key1 in partition X on primary node and increments counter to 10
- B updates key2 in partition X on primary node and increments counter to 11
- While A is still updating other keys, B is finally committed
- Update of key2 arrives to backup node and sets update counter to 11
Observer will see equal update counters (11), but update of key 1 is 
still missing in the backup partition.
This is a fundamental problem which is being solved here: 
https://issues.apache.org/jira/browse/IGNITE-10078
"Online verify" should operate with new complex update counters which 
take such "update holes" into account. Otherwise, online verify may 
provide false-positive inconsistency reports.


2) Acquisition and comparison of update counters is fast, but partition 
hash calculation is long. We should check that update counter remains 
unchanged after every K keys handled.


3)

Another hope is that we'll be able to pause/continue scan, for 
example, we'll check 1/3 partitions today, 1/3 tomorrow, and in three 
days we'll check the whole cluster.

Totally makes sense.
We may find ourselves in a situation where some "hot" partitions are 
still unprocessed, and every next attempt to calculate the partition hash 
fails due to another concurrent update. We should be able to track the 
progress of validation (% of calculation time wasted due to concurrent 
operations may be a good metric, 100% is the worst case) and provide an 
option to stop/pause the activity.
I think pause should return an "intermediate results report" with 
information about which partitions have been successfully checked. With 
such a report, we can resume the activity later: partitions from the report 
will be just skipped.


4)

Since "Idle verify" uses regular page memory, I assume it replaces hot data 
with persisted data.

So, we have to warm up the cluster after each check.
Are there any chances to check without cooling the cluster?
I don't see an easy way to achieve it with our page memory architecture. 
We definitely can't just read pages from disk directly: we need to 
synchronize page access with concurrent update operations and checkpoints.
From my point of view, the correct way to solve this issue is improving 
our page replacement [1] mechanics by making it truly scan-resistant.


P. S. There's another possible way of achieving online verify: instead 
of on-demand hash calculation, we can always keep an up-to-date hash value 
for every partition. We'll need to update the hash on every 
insert/update/remove operation, but there will be no reordering issues, 
since the function that we use for aggregating hash results (+) is 
commutative. With a pre-calculated partition hash value, we can 
automatically detect inconsistent partitions on every PME. What do you 
think?


[1] - 
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Durable+Memory+-+under+the+hood#IgniteDurableMemory-underthehood-Pagereplacement(rotationwithdisk)
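The P.S. above can be illustrated with a toy model (plain Java, not Ignite code): since per-entry hashes are aggregated with addition, the running partition hash is independent of the order in which updates are applied, so it can be maintained incrementally instead of being recalculated on demand.

```java
// Minimal sketch of the commutative-hash idea (not Ignite code).
public class PartitionHash {
    private long hash; // running commutative partition hash

    void onInsert(int keyHash, int valHash) { hash += keyHash + valHash; }
    void onRemove(int keyHash, int valHash) { hash -= keyHash + valHash; }
    void onUpdate(int keyHash, int oldValHash, int newValHash) {
        hash += newValHash - oldValHash; // key contribution is unchanged
    }
    long hash() { return hash; }

    public static void main(String[] args) {
        // Primary and backup apply the same updates in different orders...
        PartitionHash primary = new PartitionHash();
        primary.onInsert(1, 10);
        primary.onInsert(2, 20);

        PartitionHash backup = new PartitionHash();
        backup.onInsert(2, 20); // reordered delivery
        backup.onInsert(1, 10);

        // ...and still agree, because addition is commutative.
        System.out.println(primary.hash() == backup.hash()); // true
    }
}
```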


Best Regards,
Ivan Rakov

On 29.04.2019 12:20, Anton Vinogradov wrote:

Igniters and especially Ivan Rakov,

"Idle verify" [1] is a really cool tool to make sure that the cluster is 
consistent.


1) But it requires operations to be paused during the cluster check.
On some clusters, this check takes hours (3-4 hours in cases I saw).
I've checked the code of "idle verify" and it seems possible to 
make it "online" with some assumptions.


Idea:
Currently, "Idle verify" checks that partition hashes, generated this way:

while (it.hasNextX()) {
    CacheDataRow row = it.nextX();

    partHash += row.key().hashCode();
    partHash += Arrays.hashCode(row.value().valueBytes(grpCtx.cacheObjectContext()));
}

, are the same.

What if we generate the same updateCounter-partitionHash pairs but 
compare hashes only in case the counters are the same?
So, for example, we will ask the cluster to generate pairs for 64 
partitions, then find that 55 have the same counters (were not updated 
during the check) and check them.
The rest (64 - 55 = 9) partitions will be re-requested and rechecked 
with an additional 55.
This way we'll be able to check that the cluster is consistent even in case 
operations are in progress (just retrying the modified ones).
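The retry idea above can be sketched as a toy model (plain Java, not Ignite code): collect (partition, updateCounter) pairs twice; hashes are only comparable for partitions whose counter did not change between the two snapshots, and the rest are re-requested on the next round.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy sketch of selecting partitions whose hash comparison is valid.
public class OnlineVerify {
    /** Partitions whose update counter is identical in both snapshots. */
    static Set<Integer> stablePartitions(Map<Integer, Long> before, Map<Integer, Long> after) {
        Set<Integer> stable = new HashSet<>();

        for (Map.Entry<Integer, Long> e : before.entrySet())
            if (e.getValue().equals(after.get(e.getKey())))
                stable.add(e.getKey()); // counter unchanged -> hash comparable

        return stable;
    }

    public static void main(String[] args) {
        Map<Integer, Long> first = Map.of(0, 10L, 1, 20L, 2, 30L);
        // Partition 1 was updated between the two requests.
        Map<Integer, Long> second = Map.of(0, 10L, 1, 21L, 2, 30L);

        Set<Integer> stable = stablePartitions(first, second);

        System.out.println(stable.contains(0)); // true
        System.out.println(stable.contains(1)); // false: must be re-checked
        System.out.println(stable.contains(2)); // true
    }
}
```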


Risks and assumptions:
Using this strategy we'll check the cluster's consistency ... 
eventually, and the check will take more time even on an idle cluster.
In case operationsPerTimeToGeneratePartitionHashes > partitionsCount, 
we'll definitely make no progress.

But if the load is not high, we'll be able to check the whole cluster.

Another hope is 

[jira] [Created] (IGNITE-11822) Wrong DistributedMetaStorage feature id

2019-04-29 Thread Ivan Bessonov (JIRA)
Ivan Bessonov created IGNITE-11822:
--

 Summary: Wrong DistributedMetaStorage feature id
 Key: IGNITE-11822
 URL: https://issues.apache.org/jira/browse/IGNITE-11822
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Bessonov
Assignee: Ivan Bessonov
 Fix For: 2.8






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Configuration for explicit plugins providing.

2019-04-29 Thread Anton Vinogradov
+1 to simplification.

On Mon, Apr 29, 2019 at 3:32 PM Mikhail Petrov 
wrote:

> Hi Igniters!
>
> Currently, plugin loading is based on the presence of the plugin class name
> in the META-INF/services/org.apache.ignite.plugin.PluginProvider file. I
> propose to add a special configuration in IgniteConfiguration for passing the
> needed plugin instances explicitly. This provides the ability to:
> 1. specify plugins via a Spring configuration;
> 2. avoid storing PluginConfiguration separately from PluginProvider
> (currently all PluginConfigurations are stored as an array in
> IgniteConfiguration, and a plugin that needs its PluginConfiguration merely
> iterates over all the configurations) by passing the plugin configuration
> directly into the plugin provider.
> It's possible to support both approaches for backward compatibility
> (e.g. if plugins are not set via IgniteConfiguration, the current approach
> will be used).
>
> PR -- https://github.com/apache/ignite/pull/6517
> 
> JIRA -- https://issues.apache.org/jira/browse/IGNITE-11744
>
> I would like to receive feedback on this feature concept.
>


Re: AI 3.0: writeSynchronizationMode re-thinking

2019-04-29 Thread Anton Vinogradov
Sergey,

I'd like to continue the discussion since it is closely linked to the problem
I'm currently working on.

1) writeSynchronizationMode should not be a part of the cache configuration,
agreed.
It should be up to the user to decide how strong the "update
guarantee" should be.
So, I propose to have a special cache proxy, .withBlaBla() (at 3.x).

2) Primary failure on !FULL_SYNC is not the only problem leading to an
inconsistent state.
Bugs and incorrect recovery also cause the same problem.

Currently, we have a solution [1] to check that the cluster is consistent, but
it has bad resolution (it will only tell you which partitions are broken).
So, to find the broken entries you need some special API, which will check
all copies and let you know what went wrong.

3) Since we mostly agree that a write should affect some backups in a sync
way, how about having similar logic for reading?

So, I propose to have a special proxy .withQuorumRead(backupsCnt) which will
check an explicit number of backups on each read and return the latest
values.
This proxy is already implemented [2] for all copies, but I'm going to extend
it with an explicit backups number.
Thoughts?

3.1) Backups can be checked in two ways:
- request data from all backups, but wait for an explicit number (solves the
slow-backup issue, but produces more traffic)
- request data from an explicit number of backups (less traffic, but can be
as slow as checking all copies)
Which strategy is better? Should it be configurable?

[1]
https://apacheignite-tools.readme.io/docs/control-script#section-verification-of-partition-checksums
[2] https://issues.apache.org/jira/browse/IGNITE-10663
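The quorum-read idea in point 3 can be modeled in a few lines (plain Java, all names hypothetical): gather (value, updateCounter) answers from several copies and return the value carried by the highest counter, i.e. the freshest copy wins.

```java
import java.util.Comparator;
import java.util.List;

// Toy model of the proposed quorum read (not the Ignite implementation).
public class QuorumRead {
    record Copy(String value, long counter) {}

    static String readLatest(List<Copy> answers, int quorum) {
        if (answers.size() < quorum)
            throw new IllegalStateException("Quorum not reached: " + answers.size() + " < " + quorum);

        // Update counters order the writes, so the max counter is the latest value.
        return answers.stream()
            .max(Comparator.comparingLong(Copy::counter))
            .orElseThrow()
            .value();
    }

    public static void main(String[] args) {
        List<Copy> answers = List.of(new Copy("old", 10), new Copy("new", 11));
        System.out.println(readLatest(answers, 2)); // new
    }
}
```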

On Thu, Apr 25, 2019 at 7:04 PM Sergey Kozlov  wrote:

> There's another point to improve:
> if *syncPartitions=N* becomes configurable at run time, it will allow
> managing the consistency-performance balance at run time, e.g. switching to
> full async for preloading and then going back to full sync for regular
> operations
>
>
> On Thu, Apr 25, 2019 at 6:48 PM Sergey Kozlov 
> wrote:
>
> > Vyacheslav,
> >
> > You're right to refer to the MongoDB doc. In general, the idea is
> > very similar. Many vendors use such an approach [1].
> >
> > [1]
> >
> https://dev.mysql.com/doc/refman/8.0/en/replication-options-master.html#sysvar_rpl_semi_sync_master_wait_for_slave_count
> >
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Apr 25, 2019 at 6:40 PM Vyacheslav Daradur 
> > wrote:
> >
> >> Hi, Sergey,
> >>
> >> Makes sense to me in case of performance issues, but it may lead to
> >> losing data.
> >>
> >> >> *by the new option *syncPartitions=N* (not best name just for
> >> referring)
> >>
> >> Seems similar to "Write Concern"[1] in MongoDB. It is used in the same
> >> way as you described.
> >>
> >> On the other hand, if you have such issues, they should be investigated
> >> first: why do they cause performance drops (network issues, etc.)?
> >>
> >> [1] https://docs.mongodb.com/manual/reference/write-concern/
> >>
> >> On Thu, Apr 25, 2019 at 6:24 PM Sergey Kozlov 
> >> wrote:
> >> >
> >> > Ilya
> >> >
> >> > See comments inline.
> >> > On Thu, Apr 25, 2019 at 5:11 PM Ilya Kasnacheev <
> >> ilya.kasnach...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hello!
> >> > >
> >> > > When you have 2 backups and N = 1, how will conflicts be resolved?
> >> > >
> >> >
> >> > > Imagine that you had N = 1, and primary node failed immediately
> after
> >> > > operation. Now you have one backup that was updated synchronously
> and
> >> one
> >> > > which did not. Will they stay unsynced, or is there any mechanism of
> >> > > re-syncing?
> >> > >
> >> >
> >> > Same way as Ignite processes the failures for PRIMARY_SYNC.
> >> >
> >> >
> >> > >
> >> > > Why would one want to "update for 1 primary and 1 backup
> >> synchronously,
> >> > > update the rest of backup partitions asynchronously"? What's the use
> >> case?
> >> > >
> >> >
> >> > The case is to have more backups without paying the performance penalty
> >> > for that :)
> >> > For distributed systems, one backup looks risky. But more backups
> >> > directly impact performance.
> >> > Another point is to separate strictly consistent apps, like banking
> >> > apps, from other apps, like fraud detection, analytics, reports and so
> >> > on.
> >> > In that case you can configure partition distribution via a custom
> >> > affinity and have the following:
> >> >  - a first set of nodes for critical (consistency-wise) operations
> >> >  - a second set of nodes with async backup partitions only, for other
> >> > operations (reports, analytics)
> >> >
> >> >
> >> >
> >> > >
> >> > > Regards,
> >> > > --
> >> > > Ilya Kasnacheev
> >> > >
> >> > >
> >> > > Thu, Apr 25, 2019 at 16:55, Sergey Kozlov :
> >> > >
> >> > > > Igniters
> >> > > >
> >> > > > I'm working with the wide range of cache configurations and found
> >> (from
> >> > > my
> >> > > > standpoint) the interesting point for the discussion:
> >> > > >
> >> > > > Now we have following *writeSynchronizationMode *options:
> >> > > >
> 

Configuration for explicit plugins providing.

2019-04-29 Thread Mikhail Petrov
Hi Igniters!

Currently, plugin loading is based on the presence of the plugin class name in
the META-INF/services/org.apache.ignite.plugin.PluginProvider file. I propose
to add a special configuration in IgniteConfiguration for passing the needed
plugin instances explicitly. This provides the ability to:
1. specify plugins via a Spring configuration;
2. avoid storing PluginConfiguration separately from PluginProvider
(currently all PluginConfigurations are stored as an array in
IgniteConfiguration, and a plugin that needs its PluginConfiguration merely
iterates over all the configurations) by passing the plugin configuration
directly into the plugin provider.
It's possible to support both approaches for backward compatibility
(e.g. if plugins are not set via IgniteConfiguration, the current approach
will be used).

PR -- https://github.com/apache/ignite/pull/6517

JIRA -- https://issues.apache.org/jira/browse/IGNITE-11744

I would like to receive feedback on this feature concept.
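A hypothetical Spring XML snippet for the proposed approach (the pluginProviders property name and the com.example classes are assumptions for illustration, not the final API):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Hypothetical property: plugin providers are passed explicitly, -->
    <!-- instead of being discovered via META-INF/services.             -->
    <property name="pluginProviders">
        <list>
            <!-- com.example.* classes are placeholders for illustration. -->
            <bean class="com.example.MyPluginProvider">
                <constructor-arg>
                    <bean class="com.example.MyPluginConfiguration"/>
                </constructor-arg>
            </bean>
        </list>
    </property>
</bean>
```

This keeps the plugin and its configuration together in one place instead of splitting them between IgniteConfiguration and a service file.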


[jira] [Created] (IGNITE-11821) Move rebalanceBatchSize and rebalanceBatchesPrefetchCnt to IgniteConfiguration level

2019-04-29 Thread Maxim Muzafarov (JIRA)
Maxim Muzafarov created IGNITE-11821:


 Summary: Move rebalanceBatchSize and rebalanceBatchesPrefetchCnt 
to IgniteConfiguration level
 Key: IGNITE-11821
 URL: https://issues.apache.org/jira/browse/IGNITE-11821
 Project: Ignite
  Issue Type: Improvement
Reporter: Maxim Muzafarov
Assignee: Maxim Muzafarov
 Fix For: 2.8


The set of cluster rebalancing properties below must be maintained and provided 
by {{IgniteConfiguration}}, so that an administrator will be able to tune the 
cluster rebalance behaviour depending on the hardware used (e.g. different 
hardware can have a different maximum transmission unit (MTU), and it's strongly 
recommended to use a specific rebalanceBatchSize for each cluster environment).

Currently, there is no way to change these properties for already created 
persistent caches.

{code:title=CacheConfiguration.java}
/** Rebalance timeout. */
private long rebalanceTimeout = DFLT_REBALANCE_TIMEOUT;

/** Rebalance batch size. */
private int rebalanceBatchSize = DFLT_REBALANCE_BATCH_SIZE;

/** Rebalance batches prefetch count. */
private long rebalanceBatchesPrefetchCnt = 
DFLT_REBALANCE_BATCHES_PREFETCH_COUNT;
{code}
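
A sketch of the intended override semantics (not Ignite's actual resolution
code; treating 0 as "not configured" is an assumption for illustration, and
512 KiB mirrors the current {{DFLT_REBALANCE_BATCH_SIZE}}):

```java
public class RebalanceConfig {
    /** Default rebalance batch size (512 KiB, mirroring DFLT_REBALANCE_BATCH_SIZE). */
    public static final int DFLT_REBALANCE_BATCH_SIZE = 512 * 1024;

    /**
     * Resolves the effective batch size: a cache-level value, when set,
     * overrides the node-level default from IgniteConfiguration.
     * Here 0 stands for "not configured" (an assumption for this sketch).
     */
    public static int effectiveBatchSize(int cacheLvl, int nodeLvl) {
        return cacheLvl != 0 ? cacheLvl : nodeLvl;
    }

    public static void main(String[] args) {
        // Cache did not set the property: the node-level default applies.
        System.out.println(effectiveBatchSize(0, DFLT_REBALANCE_BATCH_SIZE));

        // Cache-level override wins over the node-level default.
        System.out.println(effectiveBatchSize(256 * 1024, DFLT_REBALANCE_BATCH_SIZE));
    }
}
```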



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[MTCGA]: new failures in builds [3704547] needs to be handled

2019-04-29 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issue on TeamCity to be handled. You are more than 
welcomed to help.

 If your changes can lead to this failure(s): We're grateful that you were a 
volunteer to make the contribution to this project, but things change and you 
may no longer be able to finalize your contribution.
 Could you respond to this email and indicate if you wish to continue and fix 
test failures, or step down so that some committer may revert your commit. 

 *New stable failure of a flaky test in master 
GridEventConsumeSelfTest.testMultithreadedWithNodeRestart 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=4911099288413140059&branch=%3Cdefault%3E&tab=testDetails
 Changes may lead to failure were done by 
 - slava koptilin  
https://ci.ignite.apache.org/viewModification.html?modId=882094
 - ilya kasnacheev  
https://ci.ignite.apache.org/viewModification.html?modId=882104
 - ipavlukhin  
https://ci.ignite.apache.org/viewModification.html?modId=882907
 - denis garus  
https://ci.ignite.apache.org/viewModification.html?modId=882869
 - edshanggg  
https://ci.ignite.apache.org/viewModification.html?modId=882213
 - andrey v. mashenkov  
https://ci.ignite.apache.org/viewModification.html?modId=882903
 - slava koptilin  
https://ci.ignite.apache.org/viewModification.html?modId=882096

 - Here's a reminder of what contributors were agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 14:31:35 29-04-2019 


Re: Brainstorm: Make TC Run All faster

2019-04-29 Thread Nikolay Izhikov
Ivan,

I thought that by "faster" we all meant the time between one pushing the Run All 
button and one getting the results.
What other definitions are possible here?

В Пн, 29/04/2019 в 12:04 +0300, Павлухин Иван пишет:
> Nikolay,
> 
> > Why we should imagine it?
> 
> Only in order to understand what do we mean by "faster". My question was:
> 
> 1st approach will take less agent time in sum. 2nd approach will
> complete faster in wall clock time. And your main concern is "total
> agent time". Am I right?
> 
> пн, 29 апр. 2019 г. в 12:01, Nikolay Izhikov :
> > 
> > > Let's imagine that we have an infinite number of agents
> > 
> > Why we should imagine it?
> > We don't have infinite number of agents.
> > 
> > And we have several concurrent Run All.
> > 
> > 
> > В Пн, 29/04/2019 в 11:50 +0300, Павлухин Иван пишет:
> > > Vyacheslav,
> > > 
> > > I finally figured out that "faster" means "total agent time".
> > > 
> > > Let's imagine that we have an infinite number of agents. And 2 approaches:
> > > 1. Uber "Build Apache Ignite" containing all checks.
> > > 2. Separate jobs for compilation, checkstyle and etc.
> > > 
> > > 1st approach will take less agent time in sum. 2nd approach will
> > > complete faster in wall clock time. And your main concern is "total
> > > agent time". Am I right?
> > > 
> > > пн, 29 апр. 2019 г. в 11:42, Vyacheslav Daradur :
> > > > 
> > > > Hi, Ivan,
> > > > 
> > > > We are in the thread "Make TC Run All faster", so the main benefit is
> > > > to make TC faster :)
> > > > 
> > > > Advantages:
> > > > - 1 TC agent required instead of 4;
> > > > - RunAll will be faster, in case of builds queue;
> > > > 
> > > > Also, the "licenses" profile is included in the step of a release
> > > > build. I believe check-style also should be included.
> > > > 
> > > > The generation of Javadocs is an optional step at preparing the
> > > > release, but its check on TC takes significant time in case of the
> > > > separate build.
> > > > 
> > > > > > Returning to "Build Apache Ignite" it seems to me that ideally, it 
> > > > > > can
> > > > 
> > > > be hierarchical.
> > > > 
> > > > I agree, all the checks may be set as a separate step in the build's
> > > > configuration. This helps with the main problem I'm trying to solve -
> > > > resolving of dependencies which takes the most time of the builds.
> > > > 
> > > > On Mon, Apr 29, 2019 at 11:24 AM Павлухин Иван  
> > > > wrote:
> > > > > 
> > > > > Vyacheslav, Maxim,
> > > > > 
> > > > > Can we once again outline what benefits aggregated "Build Apache
> > > > > Ignite" performing various checks has comparing to a modularized
> > > > > approach in which separate builds perform separate tasks?
> > > > > 
> > > > > For example, modularized approach looks nice because it is similar to
> > > > > good practices in software development where we separate
> > > > > responsibilities between different classes instead of aggregating them
> > > > > into a single class. And as usual multiple classes works together
> > > > > coordinating by a class from upper level. So, in fact it is a
> > > > > hierarchical structure.
> > > > > 
> > > > > Returning to "Build Apache Ignite" it seems to me that ideally it can
> > > > > be hierarchical. There is a top level compilation (assembly?) job but
> > > > > it is always clear what tasks does it perform (check style, check
> > > > > license and other subjobs).
> > > > > 
> > > > > пт, 26 апр. 2019 г. в 17:06, Maxim Muzafarov :
> > > > > > 
> > > > > > Folks,
> > > > > > 
> > > > > > +1 for merging all these suites into the single one. All these 
> > > > > > suites
> > > > > > (Build Apache Ignite, Javadoc, Licenses Header, Checkstyle) required
> > > > > > to be `green` all the time. So, we can consider making them a part 
> > > > > > of
> > > > > > build Apache Ignite procedure.
> > > > > > 
> > > > > > Also, I'd suggest going deeper. We can try to merge `Licenses 
> > > > > > Header`
> > > > > > into the `Code style checker` [1]. This will simplify the code
> > > > > > checking process.
> > > > > > 
> > > > > > [1] http://checkstyle.sourceforge.net/config_header.html
> > > > > > 
> > > > > > On Fri, 26 Apr 2019 at 13:17, Vyacheslav Daradur 
> > > > > >  wrote:
> > > > > > > 
> > > > > > > Ivan, you are right, I meant to combine them into one.
> > > > > > > 
> > > > > > > Here is a build [1], with enabled profiles (check-licenses,
> > > > > > > checkstyle) and check of javadoc to show the idea.
> > > > > > > 
> > > > > > > Seems it takes ~15 minutes.
> > > > > > > 
> > > > > > > [1] 
> > > > > > > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ExperimentalBuildApacheIgniteJavadocLicensesHeaderCheckstyle&branch_IgniteTests24Java8=;;
> > > > > > > 
> > > > > > > On Fri, Apr 26, 2019 at 12:06 PM Павлухин Иван 
> > > > > > >  wrote:
> > > > > > > > 
> > > > > > > > Hi Vyacheslav,
> > > > > > > > 
> > > > > > > > What do you mean by uniting?
> > > > > > > > 
> > > > > > > > For me it looks like [Javadocs] and [Check Code Style] 

"Idle verify" to "Online verify"

2019-04-29 Thread Anton Vinogradov
Igniters and especially Ivan Rakov,

"Idle verify" [1] is a really cool tool for making sure that the cluster is
consistent.

1) But it requires operations to be paused during the cluster check.
On some clusters this check takes hours (3-4 hours in cases I saw).
I've checked the code of "idle verify" and it seems possible to make it
"online" with some assumptions.

Idea:
Currently "Idle verify" checks that partition hashes, generated this way:

    while (it.hasNextX()) {
        CacheDataRow row = it.nextX();

        partHash += row.key().hashCode();
        partHash += Arrays.hashCode(row.value().valueBytes(grpCtx.cacheObjectContext()));
    }

are the same.

What if we generate the same updateCounter-partitionHash pairs but
compare hashes only in case the counters are the same?
So, for example, we ask the cluster to generate pairs for 64 partitions, then
find that 55 have the same counters (were not updated during the check) and
check them.
The rest (64 - 55 = 9) partitions will be re-requested and rechecked along with
an additional 55.
This way we'll be able to check that the cluster is consistent even in case
operations are in progress (just retrying the modified ones).
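
The counter-comparison step of this idea can be sketched in plain Java
(PartState and the snapshot maps are illustrative stand-ins, not the actual
idle-verify data structures):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class OnlineVerifySketch {
    /** Illustrative (updateCounter, partitionHash) pair for one partition. */
    public static class PartState {
        final long updateCntr;
        final int hash;

        public PartState(long updateCntr, int hash) {
            this.updateCntr = updateCntr;
            this.hash = hash;
        }
    }

    /**
     * Partitions whose update counter did not change between two snapshots
     * were not modified in between, so their hashes can be compared safely;
     * the rest must be re-requested on the next round.
     */
    public static Set<Integer> stableParts(Map<Integer, PartState> s1, Map<Integer, PartState> s2) {
        Set<Integer> stable = new HashSet<>();

        for (Map.Entry<Integer, PartState> e : s1.entrySet()) {
            PartState cur = s2.get(e.getKey());

            if (cur != null && cur.updateCntr == e.getValue().updateCntr)
                stable.add(e.getKey());
        }

        return stable;
    }

    public static void main(String[] args) {
        Map<Integer, PartState> s1 = new HashMap<>();
        Map<Integer, PartState> s2 = new HashMap<>();

        s1.put(0, new PartState(10, 111));
        s1.put(1, new PartState(20, 222));

        s2.put(0, new PartState(10, 111)); // untouched: hash can be verified now
        s2.put(1, new PartState(21, 223)); // updated: re-request on the next round

        System.out.println(stableParts(s1, s2)); // prints [0]
    }
}
```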

Risks and assumptions:
Using this strategy we'll check the cluster's consistency ... eventually,
and the check will take more time even on an idle cluster.
In case operationsPerTimeToGeneratePartitionHashes > partitionsCount we'll
definitely gain no progress.
But in case the load is not high, we'll be able to check the whole cluster.

Another hope is that we'll be able to pause/continue the scan; for example,
we'll check 1/3 of the partitions today, 1/3 tomorrow, and in three days we'll
have checked the whole cluster.

Have I missed something?

2) Since "Idle verify" uses the regular page memory, I assume it replaces hot
data with persisted data.
So, we have to warm up the cluster after each check.
Are there any chances to check without cooling down the cluster?

[1]
https://apacheignite-tools.readme.io/docs/control-script#section-verification-of-partition-checksums


Re: Brainstorm: Make TC Run All faster

2019-04-29 Thread Павлухин Иван
Nikolay,

> Why we should imagine it?

Only in order to understand what we mean by "faster". My question was:

1st approach will take less agent time in sum. 2nd approach will
complete faster in wall clock time. And your main concern is "total
agent time". Am I right?

пн, 29 апр. 2019 г. в 12:01, Nikolay Izhikov :
>
> > Let's imagine that we have an infinite number of agents
>
> Why we should imagine it?
> We don't have infinite number of agents.
>
> And we have several concurrent Run All.
>
>
> В Пн, 29/04/2019 в 11:50 +0300, Павлухин Иван пишет:
> > Vyacheslav,
> >
> > I finally figured out that "faster" means "total agent time".
> >
> > Let's imagine that we have an infinite number of agents. And 2 approaches:
> > 1. Uber "Build Apache Ignite" containing all checks.
> > 2. Separate jobs for compilation, checkstyle and etc.
> >
> > 1st approach will take less agent time in sum. 2nd approach will
> > complete faster in wall clock time. And your main concern is "total
> > agent time". Am I right?
> >
> > пн, 29 апр. 2019 г. в 11:42, Vyacheslav Daradur :
> > >
> > > Hi, Ivan,
> > >
> > > We are in the thread "Make TC Run All faster", so the main benefit is
> > > to make TC faster :)
> > >
> > > Advantages:
> > > - 1 TC agent required instead of 4;
> > > - RunAll will be faster, in case of builds queue;
> > >
> > > Also, the "licenses" profile is included in the step of a release
> > > build. I believe check-style also should be included.
> > >
> > > The generation of Javadocs is an optional step at preparing the
> > > release, but its check on TC takes significant time in case of the
> > > separate build.
> > >
> > > > > Returning to "Build Apache Ignite" it seems to me that ideally, it can
> > >
> > > be hierarchical.
> > >
> > > I agree, all the checks may be set as a separate step in the build's
> > > configuration. This helps with the main problem I'm trying to solve -
> > > resolving of dependencies which takes the most time of the builds.
> > >
> > > On Mon, Apr 29, 2019 at 11:24 AM Павлухин Иван  
> > > wrote:
> > > >
> > > > Vyacheslav, Maxim,
> > > >
> > > > Can we once again outline what benefits aggregated "Build Apache
> > > > Ignite" performing various checks has comparing to a modularized
> > > > approach in which separate builds perform separate tasks?
> > > >
> > > > For example, modularized approach looks nice because it is similar to
> > > > good practices in software development where we separate
> > > > responsibilities between different classes instead of aggregating them
> > > > into a single class. And as usual multiple classes works together
> > > > coordinating by a class from upper level. So, in fact it is a
> > > > hierarchical structure.
> > > >
> > > > Returning to "Build Apache Ignite" it seems to me that ideally it can
> > > > be hierarchical. There is a top level compilation (assembly?) job but
> > > > it is always clear what tasks does it perform (check style, check
> > > > license and other subjobs).
> > > >
> > > > пт, 26 апр. 2019 г. в 17:06, Maxim Muzafarov :
> > > > >
> > > > > Folks,
> > > > >
> > > > > +1 for merging all these suites into the single one. All these suites
> > > > > (Build Apache Ignite, Javadoc, Licenses Header, Checkstyle) required
> > > > > to be `green` all the time. So, we can consider making them a part of
> > > > > build Apache Ignite procedure.
> > > > >
> > > > > Also, I'd suggest going deeper. We can try to merge `Licenses Header`
> > > > > into the `Code style checker` [1]. This will simplify the code
> > > > > checking process.
> > > > >
> > > > > [1] http://checkstyle.sourceforge.net/config_header.html
> > > > >
> > > > > On Fri, 26 Apr 2019 at 13:17, Vyacheslav Daradur 
> > > > >  wrote:
> > > > > >
> > > > > > Ivan, you are right, I meant to combine them into one.
> > > > > >
> > > > > > Here is a build [1], with enabled profiles (check-licenses,
> > > > > > checkstyle) and check of javadoc to show the idea.
> > > > > >
> > > > > > Seems it takes ~15 minutes.
> > > > > >
> > > > > > [1] 
> > > > > > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ExperimentalBuildApacheIgniteJavadocLicensesHeaderCheckstyle&branch_IgniteTests24Java8=;
> > > > > >
> > > > > > On Fri, Apr 26, 2019 at 12:06 PM Павлухин Иван 
> > > > > >  wrote:
> > > > > > >
> > > > > > > Hi Vyacheslav,
> > > > > > >
> > > > > > > What do you mean by uniting?
> > > > > > >
> > > > > > > For me it looks like [Javadocs] and [Check Code Style] are not so 
> > > > > > > time
> > > > > > > consuming comparing to tests, are not they? Do you suggest to 
> > > > > > > combine
> > > > > > > mentioned 4 jobs into one? How long will it run in a such case?
> > > > > > >
> > > > > > > чт, 25 апр. 2019 г. в 10:50, Vyacheslav Daradur 
> > > > > > > :
> > > > > > > >
> > > > > > > > Hi Igniters,
> > > > > > > >
> > > > > > > > At the moment we have several separated test suites:
> > > > > > > > * ~Build Apache Ignite~ _ ~10..20mins
> > > > > > > > * [Javadocs] _

Re: Brainstorm: Make TC Run All faster

2019-04-29 Thread Nikolay Izhikov
> Let's imagine that we have an infinite number of agents

Why should we imagine it?
We don't have an infinite number of agents.

And we have several concurrent Run All.


В Пн, 29/04/2019 в 11:50 +0300, Павлухин Иван пишет:
> Vyacheslav,
> 
> I finally figured out that "faster" means "total agent time".
> 
> Let's imagine that we have an infinite number of agents. And 2 approaches:
> 1. Uber "Build Apache Ignite" containing all checks.
> 2. Separate jobs for compilation, checkstyle and etc.
> 
> 1st approach will take less agent time in sum. 2nd approach will
> complete faster in wall clock time. And your main concern is "total
> agent time". Am I right?
> 
> пн, 29 апр. 2019 г. в 11:42, Vyacheslav Daradur :
> > 
> > Hi, Ivan,
> > 
> > We are in the thread "Make TC Run All faster", so the main benefit is
> > to make TC faster :)
> > 
> > Advantages:
> > - 1 TC agent required instead of 4;
> > - RunAll will be faster, in case of builds queue;
> > 
> > Also, the "licenses" profile is included in the step of a release
> > build. I believe check-style also should be included.
> > 
> > The generation of Javadocs is an optional step at preparing the
> > release, but its check on TC takes significant time in case of the
> > separate build.
> > 
> > > > Returning to "Build Apache Ignite" it seems to me that ideally, it can
> > 
> > be hierarchical.
> > 
> > I agree, all the checks may be set as a separate step in the build's
> > configuration. This helps with the main problem I'm trying to solve -
> > resolving of dependencies which takes the most time of the builds.
> > 
> > On Mon, Apr 29, 2019 at 11:24 AM Павлухин Иван  wrote:
> > > 
> > > Vyacheslav, Maxim,
> > > 
> > > Can we once again outline what benefits aggregated "Build Apache
> > > Ignite" performing various checks has comparing to a modularized
> > > approach in which separate builds perform separate tasks?
> > > 
> > > For example, modularized approach looks nice because it is similar to
> > > good practices in software development where we separate
> > > responsibilities between different classes instead of aggregating them
> > > into a single class. And as usual multiple classes works together
> > > coordinating by a class from upper level. So, in fact it is a
> > > hierarchical structure.
> > > 
> > > Returning to "Build Apache Ignite" it seems to me that ideally it can
> > > be hierarchical. There is a top level compilation (assembly?) job but
> > > it is always clear what tasks does it perform (check style, check
> > > license and other subjobs).
> > > 
> > > пт, 26 апр. 2019 г. в 17:06, Maxim Muzafarov :
> > > > 
> > > > Folks,
> > > > 
> > > > +1 for merging all these suites into the single one. All these suites
> > > > (Build Apache Ignite, Javadoc, Licenses Header, Checkstyle) required
> > > > to be `green` all the time. So, we can consider making them a part of
> > > > build Apache Ignite procedure.
> > > > 
> > > > Also, I'd suggest going deeper. We can try to merge `Licenses Header`
> > > > into the `Code style checker` [1]. This will simplify the code
> > > > checking process.
> > > > 
> > > > [1] http://checkstyle.sourceforge.net/config_header.html
> > > > 
> > > > On Fri, 26 Apr 2019 at 13:17, Vyacheslav Daradur  
> > > > wrote:
> > > > > 
> > > > > Ivan, you are right, I meant to combine them into one.
> > > > > 
> > > > > Here is a build [1], with enabled profiles (check-licenses,
> > > > > checkstyle) and check of javadoc to show the idea.
> > > > > 
> > > > > Seems it takes ~15 minutes.
> > > > > 
> > > > > [1] 
> > > > > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ExperimentalBuildApacheIgniteJavadocLicensesHeaderCheckstyle&branch_IgniteTests24Java8=;
> > > > > 
> > > > > On Fri, Apr 26, 2019 at 12:06 PM Павлухин Иван  
> > > > > wrote:
> > > > > > 
> > > > > > Hi Vyacheslav,
> > > > > > 
> > > > > > What do you mean by uniting?
> > > > > > 
> > > > > > For me it looks like [Javadocs] and [Check Code Style] are not so 
> > > > > > time
> > > > > > consuming comparing to tests, are not they? Do you suggest to 
> > > > > > combine
> > > > > > mentioned 4 jobs into one? How long will it run in a such case?
> > > > > > 
> > > > > > чт, 25 апр. 2019 г. в 10:50, Vyacheslav Daradur 
> > > > > > :
> > > > > > > 
> > > > > > > Hi Igniters,
> > > > > > > 
> > > > > > > At the moment we have several separated test suites:
> > > > > > > * ~Build Apache Ignite~ _ ~10..20mins
> > > > > > > * [Javadocs] _ ~10mins
> > > > > > > * [Licenses Headers] _ ~1min
> > > > > > > * [Check Code Style] _ ~7min
> > > > > > > The most time of each build (except Licenses Headers) is taken by
> > > > > > > dependency resolving.
> > > > > > > 
> > > > > > > Their main goal is a check that the project is built properly.
> > > > > > > 
> > > > > > > Also, profiles of [Javadocs], [Licenses Headers] uses at the step 
> > > > > > > of
> > > > > > > preparing release (see DEVNOTES.txt) that means they are 
> > > > > > > important.
> > > > > 

Re: Brainstorm: Make TC Run All faster

2019-04-29 Thread Павлухин Иван
Vyacheslav,

I finally figured out that "faster" means "total agent time".

Let's imagine that we have an infinite number of agents. And 2 approaches:
1. Uber "Build Apache Ignite" containing all checks.
2. Separate jobs for compilation, checkstyle, etc.

The 1st approach will take less agent time in sum. The 2nd approach will
complete faster in wall clock time. And your main concern is "total
agent time". Am I right?

пн, 29 апр. 2019 г. в 11:42, Vyacheslav Daradur :
>
> Hi, Ivan,
>
> We are in the thread "Make TC Run All faster", so the main benefit is
> to make TC faster :)
>
> Advantages:
> - 1 TC agent required instead of 4;
> - RunAll will be faster, in case of builds queue;
>
> Also, the "licenses" profile is included in the step of a release
> build. I believe check-style also should be included.
>
> The generation of Javadocs is an optional step at preparing the
> release, but its check on TC takes significant time in case of the
> separate build.
>
> >> Returning to "Build Apache Ignite" it seems to me that ideally, it can
> be hierarchical.
>
> I agree, all the checks may be set as a separate step in the build's
> configuration. This helps with the main problem I'm trying to solve -
> resolving of dependencies which takes the most time of the builds.
>
> On Mon, Apr 29, 2019 at 11:24 AM Павлухин Иван  wrote:
> >
> > Vyacheslav, Maxim,
> >
> > Can we once again outline what benefits aggregated "Build Apache
> > Ignite" performing various checks has comparing to a modularized
> > approach in which separate builds perform separate tasks?
> >
> > For example, modularized approach looks nice because it is similar to
> > good practices in software development where we separate
> > responsibilities between different classes instead of aggregating them
> > into a single class. And as usual multiple classes works together
> > coordinating by a class from upper level. So, in fact it is a
> > hierarchical structure.
> >
> > Returning to "Build Apache Ignite" it seems to me that ideally it can
> > be hierarchical. There is a top level compilation (assembly?) job but
> > it is always clear what tasks does it perform (check style, check
> > license and other subjobs).
> >
> > пт, 26 апр. 2019 г. в 17:06, Maxim Muzafarov :
> > >
> > > Folks,
> > >
> > > +1 for merging all these suites into the single one. All these suites
> > > (Build Apache Ignite, Javadoc, Licenses Header, Checkstyle) required
> > > to be `green` all the time. So, we can consider making them a part of
> > > build Apache Ignite procedure.
> > >
> > > Also, I'd suggest going deeper. We can try to merge `Licenses Header`
> > > into the `Code style checker` [1]. This will simplify the code
> > > checking process.
> > >
> > > [1] http://checkstyle.sourceforge.net/config_header.html
> > >
> > > On Fri, 26 Apr 2019 at 13:17, Vyacheslav Daradur  
> > > wrote:
> > > >
> > > > Ivan, you are right, I meant to combine them into one.
> > > >
> > > > Here is a build [1], with enabled profiles (check-licenses,
> > > > checkstyle) and check of javadoc to show the idea.
> > > >
> > > > Seems it takes ~15 minutes.
> > > >
> > > > [1] 
> > > > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ExperimentalBuildApacheIgniteJavadocLicensesHeaderCheckstyle&branch_IgniteTests24Java8=
> > > >
> > > > On Fri, Apr 26, 2019 at 12:06 PM Павлухин Иван  
> > > > wrote:
> > > > >
> > > > > Hi Vyacheslav,
> > > > >
> > > > > What do you mean by uniting?
> > > > >
> > > > > For me it looks like [Javadocs] and [Check Code Style] are not so time
> > > > > consuming comparing to tests, are not they? Do you suggest to combine
> > > > > mentioned 4 jobs into one? How long will it run in a such case?
> > > > >
> > > > > чт, 25 апр. 2019 г. в 10:50, Vyacheslav Daradur :
> > > > > >
> > > > > > Hi Igniters,
> > > > > >
> > > > > > At the moment we have several separated test suites:
> > > > > > * ~Build Apache Ignite~ _ ~10..20mins
> > > > > > * [Javadocs] _ ~10mins
> > > > > > * [Licenses Headers] _ ~1min
> > > > > > * [Check Code Style] _ ~7min
> > > > > > The most time of each build (except Licenses Headers) is taken by
> > > > > > dependency resolving.
> > > > > >
> > > > > > Their main goal is a check that the project is built properly.
> > > > > >
> > > > > > Also, profiles of [Javadocs], [Licenses Headers] uses at the step of
> > > > > > preparing release (see DEVNOTES.txt) that means they are important.
> > > > > >
> > > > > > I'd suggest uniting the builds, this should reduce the time of tests
> > > > > > on ~15 minutes and releases agents.
> > > > > >
> > > > > > What do you think?
> > > > > >
> > > > > > On Tue, Nov 27, 2018 at 3:56 PM Павлухин Иван  
> > > > > > wrote:
> > > > > > >
> > > > > > > Roman,
> > > > > > >
> > > > > > > Do you have some expectations how faster "correlated" tests
> > > > > > > elimination will make Run All? Also do you have a vision how can 
> > > > > > > we
> > > > > > > determine such "correlated" tests, can we do it relati

Re: [IEP-35] Monitoring & Profiling. Proof of concept

2019-04-29 Thread Nikolay Izhikov
Hello, Vyacheslav.

Thanks for the feedback!

> HttpExposer with Jetty's dependencies should be detached from the core
> module.

Agreed. The module hierarchy is the essence of the next steps.
For now it's just a proof of my ideas for Ignite monitoring that we can discuss.

> I like your approach with 'wrapper' for monitored objects, like
> 'ComputeTaskInfo' in PR, and don't like using 'ServiceConfiguration'
> directly as a monitored object for services

Agreed in general.
It seems choosing the right data to expose is a matter of separate discussion 
for each Ignite entity.
I plan to file tickets for each entity so anyone interested can share their 
vision on it.

> In my opinion, each sensor should have a timestamp.

I'm not sure that *every* sensor should have a directly associated timestamp.
It seems we should support sensors without a timestamp, at least for current 
monitoring numbers.

> Also, it'd be great to have an ability to store a list of a fixed size
> of last N sensors

What use cases do you know for such sensors?
We have plans to support fixed-size lists to show "Last N SQL queries" or 
similar data.
Essentially, a sensor is just a single value with a name and a known meaning.
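
For reference, such a fixed-size list can be a simple bounded deque; a
hypothetical sketch (class and method names are illustrative, not part of the
IEP-35 PR):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SensorHistory {
    // Hypothetical bounded history for a sensor: only the last N readings
    // are retained, so a slow external poller still sees recent values.
    private final int cap;
    private final Deque<Long> vals = new ArrayDeque<>();

    public SensorHistory(int cap) {
        this.cap = cap;
    }

    public void record(long val) {
        if (vals.size() == cap)
            vals.removeFirst(); // evict the oldest reading

        vals.addLast(val);
    }

    public List<Long> snapshot() {
        return new ArrayList<>(vals);
    }

    public static void main(String[] args) {
        SensorHistory hist = new SensorHistory(3);

        for (long v = 1; v <= 5; v++)
            hist.record(v);

        System.out.println(hist.snapshot()); // prints [3, 4, 5]
    }
}
```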

> It'd be great if you provide a more extended test to show the work of
> the system.

Sorry for that :)
When you run 'MonitoringSelfTest' you should open 
http://localhost:8080/ignite/monitoring to view the exposed info.
I provided this info in a gist - 
https://gist.github.com/nizhikov/aa1e6222e6a3456472b881b8deb0e24d

I will extend this test to print results to the console in the next iterations - 
stay tuned :)

В Вс, 28/04/2019 в 23:35 +0300, Vyacheslav Daradur пишет:
> Hi, Nikolay,
> 
> I looked through PR and IEP, and I have some comments:
> 
> It would be better to implement it as a separate module, I can't say
> if it is possible for the main part of monitoring or not, but I
> believe that HttpExposer with Jetty's dependencies should be detached
> from the core module.
> 
> I like your approach with 'wrapper' for monitored objects, like
> 'ComputeTaskInfo' in PR, and don't like using 'ServiceConfiguration'
> directly as a monitored object for services. I believe we shouldn't
> mix approaches. It'd be better always use some kind of container with
> monitored object's information to work with such data.
> 
> In my opinion, each sensor should have a timestamp. Usually monitoring
> systems aggregate data and build graphics according to sensors
> timestamp.
> 
> Also, it'd be great to have an ability to store a list of a fixed size
> of last N sensors, not to miss them without pushing to an external
> monitoring system.
> 
> It'd be great if you provide a more extended test to show the work of
> the system. Everybody who looks to PR needs to run the test and get
> the info manually to see the completeness of sensors, this might be
> simplified by proper test.
> 
> Thank you!
> 
> 
> 
> On Fri, Apr 26, 2019 at 5:56 PM Nikolay Izhikov  wrote:
> > 
> > Hello, Igniters.
> > 
> > I've prepared Proof of Concept for IEP-35 [1]
> > PR can be found here - https://github.com/apache/ignite/pull/6510
> > 
> > I've done following changes:
> > 
> > 1. `GridMonitoringManager`  [2] - simple implementation of manager 
> > to store all monitoring info
> > 2. `HttpPullExposerSpi` [3] - pull exposer implementation that can 
> > respond with JSON from http://localhost:8080/ignite/monitoring. JSON 
> > content can be veiwed in gist [4]
> > 3. Compute task start and finish monitoring in "compute" list [5]
> > 4. Service registration are monitored in "service" list - [6]
> > 5. Current `IgniteSpiMBeanAdapter` rewritten using 
> > `GridMonitoringManager` [7]
> > 
> > Design principles, monitoring subsystem details and new Ignite entities can 
> > be found in IEP [1].
> > 
> > My next steps will be:
> > 
> > 1. Implementation of JMX exposer
> > 2. Registration of all "lists" and "sensor groups" as a SQL System 
> > view.
> > 3. Add monitoring for all unmonitoring Ignite API. (described in 
> > IEP).
> > 4. Rewrite existing jmx metrics using GridMonitoringManager.
> > 
> > Please, share you thoughts.
> > 
> > Part of JSON file:
> > ```
> > "COMPUTE": {
> >   "tasks": {
> > "name": "tasks",
> > "rows": [
> >   {
> > "id": "0798817a-eeec-4386-9af7-94edb39ffced",
> > "sessionId": "a1814f95a61-912451ff-ca7b-4764-a7fd-728f6a90",
> > "data": {
> >   "taskClasName": 
> > "org.apache.ignite.monitoring.MonitoringSelfTest$$Lambda$145/1500885480",
> >   "startTime": 1556287337944,
> >   "timeout": 9223372036854776000,
> >   "execName": null
> > },
> > "name": "anotherBroadcast"
> >   }
> > ```
> > 
> > [1] 
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=112820392
> > [2] 
> > https://github.com/apache/ignite/pull/6510/files#diff-ec7d5cf5e35b99303deb9ac

Re: Brainstorm: Make TC Run All faster

2019-04-29 Thread Vyacheslav Daradur
Hi, Ivan,

We are in the thread "Make TC Run All faster", so the main benefit is
to make TC faster :)

Advantages:
- 1 TC agent required instead of 4;
- RunAll will be faster, in case of builds queue;

Also, the "licenses" profile is included as a step of a release
build. I believe check-style should also be included.

The generation of Javadocs is an optional step in preparing the
release, but its check on TC takes significant time when run as a
separate build.

>> Returning to "Build Apache Ignite" it seems to me that ideally, it can
be hierarchical.

I agree, all the checks may be set as separate steps in the build's
configuration. This helps with the main problem I'm trying to solve -
resolving of dependencies, which takes most of the build time.

On Mon, Apr 29, 2019 at 11:24 AM Павлухин Иван  wrote:
>
> Vyacheslav, Maxim,
>
> Can we once again outline what benefits aggregated "Build Apache
> Ignite" performing various checks has comparing to a modularized
> approach in which separate builds perform separate tasks?
>
> For example, modularized approach looks nice because it is similar to
> good practices in software development where we separate
> responsibilities between different classes instead of aggregating them
> into a single class. And as usual multiple classes works together
> coordinating by a class from upper level. So, in fact it is a
> hierarchical structure.
>
> Returning to "Build Apache Ignite" it seems to me that ideally it can
> be hierarchical. There is a top level compilation (assembly?) job but
> it is always clear what tasks does it perform (check style, check
> license and other subjobs).
>
> пт, 26 апр. 2019 г. в 17:06, Maxim Muzafarov :
> >
> > Folks,
> >
> > +1 for merging all these suites into the single one. All these suites
> > (Build Apache Ignite, Javadoc, Licenses Header, Checkstyle) required
> > to be `green` all the time. So, we can consider making them a part of
> > build Apache Ignite procedure.
> >
> > Also, I'd suggest going deeper. We can try to merge `Licenses Header`
> > into the `Code style checker` [1]. This will simplify the code
> > checking process.
> >
> > [1] http://checkstyle.sourceforge.net/config_header.html
> >
> > On Fri, 26 Apr 2019 at 13:17, Vyacheslav Daradur  
> > wrote:
> > >
> > > Ivan, you are right, I meant to combine them into one.
> > >
> > > Here is a build [1], with enabled profiles (check-licenses,
> > > checkstyle) and check of javadoc to show the idea.
> > >
> > > Seems it takes ~15 minutes.
> > >
> > > [1] 
> > > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ExperimentalBuildApacheIgniteJavadocLicensesHeaderCheckstyle&branch_IgniteTests24Java8=
> > >
> > > On Fri, Apr 26, 2019 at 12:06 PM Павлухин Иван  
> > > wrote:
> > > >
> > > > Hi Vyacheslav,
> > > >
> > > > What do you mean by uniting?
> > > >
> > > > For me it looks like [Javadocs] and [Check Code Style] are not so
> > > > time-consuming compared to the tests, are they? Do you suggest
> > > > combining the 4 mentioned jobs into one? How long will it run in
> > > > such a case?
> > > >
> > > > чт, 25 апр. 2019 г. в 10:50, Vyacheslav Daradur :
> > > > >
> > > > > Hi Igniters,
> > > > >
> > > > > At the moment we have several separated test suites:
> > > > > * ~Build Apache Ignite~ _ ~10..20mins
> > > > > * [Javadocs] _ ~10mins
> > > > > * [Licenses Headers] _ ~1min
> > > > > * [Check Code Style] _ ~7min
> > > > > Most of the time of each build (except Licenses Headers) is taken
> > > > > by dependency resolution.
> > > > >
> > > > > Their main goal is to check that the project builds properly.
> > > > >
> > > > > Also, the profiles of [Javadocs] and [Licenses Headers] are used
> > > > > at the release preparation step (see DEVNOTES.txt), which means
> > > > > they are important.
> > > > >
> > > > > I'd suggest uniting the builds; this should reduce test time by
> > > > > ~15 minutes and free up agents.
> > > > >
> > > > > What do you think?
> > > > >
> > > > > On Tue, Nov 27, 2018 at 3:56 PM Павлухин Иван  
> > > > > wrote:
> > > > > >
> > > > > > Roman,
> > > > > >
> > > > > > Do you have expectations of how much faster eliminating
> > > > > > "correlated" tests will make Run All? Also, do you have a vision
> > > > > > of how we can identify such "correlated" tests? Can we do it
> > > > > > relatively fast?
> > > > > >
> > > > > > But all in all, I am not sure that reducing a group of correlated
> > > > > > tests to only one test can show good stability.
> > > > > > пн, 26 нояб. 2018 г. в 17:48, aplatonov :
> > > > > > >
> > > > > > > It should be noticed that additional parameter TEST_SCALE_FACTOR 
> > > > > > > was added.
> > > > > > > This parameter with ScaleFactorUtil methods can be used for test 
> > > > > > > size
> > > > > > > scaling for different runs (like ordinary and nightly RunALLs). 
> > > > > > > If someone
> > > > > > > want to distinguish these builds he/she can apply scaling methods 
> > > > > > > from
> > > > > > > ScaleFactorUtil in own tests. For nightly test 
> >

Re: Brainstorm: Make TC Run All faster

2019-04-29 Thread Павлухин Иван
Vyacheslav, Maxim,

Can we once again outline what benefits the aggregated "Build Apache
Ignite" job performing various checks has compared to a modularized
approach in which separate builds perform separate tasks?

For example, the modularized approach looks nice because it is similar
to good practices in software development, where we separate
responsibilities between different classes instead of aggregating them
into a single class. And, as usual, multiple classes work together,
coordinated by a class from an upper level. So, in fact, it is a
hierarchical structure.

Returning to "Build Apache Ignite", it seems to me that ideally it can
be hierarchical. There is a top-level compilation (assembly?) job, but
it is always clear what tasks it performs (check style, check
license and other subjobs).

пт, 26 апр. 2019 г. в 17:06, Maxim Muzafarov :
>
> Folks,
>
> +1 for merging all these suites into a single one. All these suites
> (Build Apache Ignite, Javadoc, Licenses Header, Checkstyle) are
> required to be `green` all the time. So we can consider making them
> part of the Build Apache Ignite procedure.
>
> Also, I'd suggest going deeper. We can try to merge `Licenses Header`
> into the `Code style checker` [1]. This will simplify the code
> checking process.
>
> [1] http://checkstyle.sourceforge.net/config_header.html
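Merging the license check into Checkstyle, as suggested above, could rely on checkstyle's built-in Header check [1]. A minimal configuration sketch follows; the header-template path is an illustrative assumption, not the actual Ignite project layout:

```xml
<!-- Sketch of a checkstyle.xml fragment using the built-in Header check.
     The headerFile value below is hypothetical: it should point to a file
     containing the expected ASF license header text. -->
<module name="Checker">
    <module name="Header">
        <!-- Template file with the expected license header. -->
        <property name="headerFile" value="checkstyle/java.header"/>
        <!-- Only check Java sources. -->
        <property name="fileExtensions" value="java"/>
    </module>
</module>
```

With such a fragment, files whose leading lines do not match the template are reported as ordinary checkstyle violations, so the license check runs in the same build step as the style check.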
>
> On Fri, 26 Apr 2019 at 13:17, Vyacheslav Daradur  wrote:
> >
> > Ivan, you are right, I meant to combine them into one.
> >
> > Here is a build [1], with enabled profiles (check-licenses,
> > checkstyle) and check of javadoc to show the idea.
> >
> > Seems it takes ~15 minutes.
> >
> > [1] 
> > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ExperimentalBuildApacheIgniteJavadocLicensesHeaderCheckstyle&branch_IgniteTests24Java8=
> >
> > On Fri, Apr 26, 2019 at 12:06 PM Павлухин Иван  wrote:
> > >
> > > Hi Vyacheslav,
> > >
> > > What do you mean by uniting?
> > >
> > > For me it looks like [Javadocs] and [Check Code Style] are not so
> > > time-consuming compared to the tests, are they? Do you suggest
> > > combining the 4 mentioned jobs into one? How long will it run in
> > > such a case?
> > >
> > > чт, 25 апр. 2019 г. в 10:50, Vyacheslav Daradur :
> > > >
> > > > Hi Igniters,
> > > >
> > > > At the moment we have several separated test suites:
> > > > * ~Build Apache Ignite~ _ ~10..20mins
> > > > * [Javadocs] _ ~10mins
> > > > * [Licenses Headers] _ ~1min
> > > > * [Check Code Style] _ ~7min
> > > > Most of the time of each build (except Licenses Headers) is taken
> > > > by dependency resolution.
> > > >
> > > > Their main goal is to check that the project builds properly.
> > > >
> > > > Also, the profiles of [Javadocs] and [Licenses Headers] are used at
> > > > the release preparation step (see DEVNOTES.txt), which means they
> > > > are important.
> > > >
> > > > I'd suggest uniting the builds; this should reduce test time by
> > > > ~15 minutes and free up agents.
> > > >
> > > > What do you think?
> > > >
> > > > On Tue, Nov 27, 2018 at 3:56 PM Павлухин Иван  
> > > > wrote:
> > > > >
> > > > > Roman,
> > > > >
> > > > > Do you have expectations of how much faster eliminating
> > > > > "correlated" tests will make Run All? Also, do you have a vision
> > > > > of how we can identify such "correlated" tests? Can we do it
> > > > > relatively fast?
> > > > >
> > > > > But all in all, I am not sure that reducing a group of correlated
> > > > > tests to only one test can show good stability.
> > > > > пн, 26 нояб. 2018 г. в 17:48, aplatonov :
> > > > > >
> > > > > > It should be noted that an additional parameter,
> > > > > > TEST_SCALE_FACTOR, was added. This parameter, together with the
> > > > > > ScaleFactorUtil methods, can be used to scale test sizes for
> > > > > > different runs (like ordinary and nightly Run Alls). If someone
> > > > > > wants to distinguish these builds, he/she can apply the scaling
> > > > > > methods from ScaleFactorUtil in their own tests. For nightly
> > > > > > tests TEST_SCALE_FACTOR=1.0; for non-nightly builds
> > > > > > TEST_SCALE_FACTOR<1.0. For example, in
> > > > > > GridAbstractCacheInterceptorRebalanceTest, ScaleFactorUtil was
> > > > > > used to scale the iteration count. I guess that
> > > > > > TEST_SCALE_FACTOR support will be added to runs at the same time
> > > > > > as the Run All (nightly) runs.
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
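The scaling idea described above can be sketched in a few lines of plain Java. The class and method names here are illustrative assumptions, not the actual ScaleFactorUtil API; only the TEST_SCALE_FACTOR property name comes from the message:

```java
/** Minimal sketch of scale-factor-based test sizing. */
public class ScaleFactorSketch {
    /** Reads the scale factor from a system property; 1.0 means full (nightly) size. */
    static double scaleFactor() {
        return Double.parseDouble(System.getProperty("TEST_SCALE_FACTOR", "1.0"));
    }

    /** Scales a base iteration count by the factor, never dropping below 1. */
    static int scaledIterations(int base) {
        return Math.max(1, (int) Math.round(base * scaleFactor()));
    }

    public static void main(String[] args) {
        System.setProperty("TEST_SCALE_FACTOR", "0.1"); // a regular (non-nightly) run
        System.out.println(scaledIterations(1000));     // prints 100: scaled down from the nightly size
    }
}
```

A test written this way runs 1000 iterations nightly (factor 1.0) and 100 on regular Run Alls, without forking the test itself.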
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best regards,
> > > > > Ivan Pavlukhin
> > > >
> > > >
> > > >
> > > > --
> > > > Best Regards, Vyacheslav D.
> > >
> > >
> > >
> > > --
> > > Best regards,
> > > Ivan Pavlukhin
> >
> >
> >
> > --
> > Best Regards, Vyacheslav D.



-- 
Best regards,
Ivan Pavlukhin


Re: IGNITE-7285 Add default query timeout

2019-04-29 Thread Павлухин Иван
Hi Saikat,

Is compatibility with previous versions the reason for an indefinite
timeout by default?

сб, 27 апр. 2019 г. в 16:58, Saikat Maitra :
>
> Hi Alexey, Ivan, Andrew
>
> I think we can provide an option to configure defaultQueryOption at
> IgniteConfiguration and set the default value = 0 to imply that, if not
> set, queries will execute indefinitely; the user can then set this value
> based on application preferences and use it in addition to the SQL query
> timeout.
>
> I have updated the PR accordingly.
>
> Please review and share if any changes required.
>
> Regards,
> Saikat
>
> On Wed, Apr 24, 2019 at 4:33 AM Alexey Kuznetsov 
> wrote:
>
> > Hi Saikat and Ivan,
> >
> > I think that properties that related to SQL should not be configured on
> > caches.
> > We already put a lot of effort to decouple SQL from caches.
> >
> > I think we should develop some kind of "Queries" options on Ignite
> > configuration.
> >
> > What do you think?
> >
> >
> > On Wed, Apr 24, 2019 at 3:22 PM Павлухин Иван  wrote:
> >
> > > Hi Saikat,
> > >
> > > I think that we should have a discussion and choose a place where a
> > > "default query timeout" property will be configured.
> > >
> > > Generally, I think that a real (user) problem is the possibility for
> > > queries to execute indefinitely. And I have no doubt that we can
> > > improve there. There could be several implementation strategies. One
> > > is adding a property to CacheConfiguration. But it opens various
> > > questions, e.g. how should it work if we execute an SQL JOIN spanning
> > > multiple tables (caches)? Also, I am concerned about queries executed
> > > not via the cache.query() method. We have multiple alternative
> > > options, e.g. thin clients (IgniteClient.query) or JDBC. I believe
> > > that introducing a default timeout for all of them is not a bad idea.
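The semantics discussed in this thread (a per-query timeout overriding a configuration-wide default, with 0 meaning "no limit") can be sketched in plain Java. The class and method names are illustrative assumptions, not the actual Ignite API:

```java
/** Sketch of resolving an effective query timeout from two levels of configuration. */
public class QueryTimeoutSketch {
    /** Marker meaning "no per-query timeout was set". */
    static final long NOT_SET = -1L;

    /**
     * A per-query timeout, when set, overrides the configuration-wide default;
     * a resulting value of 0 means "run without a limit" (the old behavior).
     *
     * @param queryTimeout per-query timeout in ms, or NOT_SET.
     * @param dfltTimeout  configuration-wide default in ms (0 = unlimited).
     */
    static long effectiveTimeout(long queryTimeout, long dfltTimeout) {
        return queryTimeout != NOT_SET ? queryTimeout : dfltTimeout;
    }

    public static void main(String[] args) {
        System.out.println(effectiveTimeout(NOT_SET, 0));      // prints 0: unlimited, keeps old behavior
        System.out.println(effectiveTimeout(NOT_SET, 30_000)); // prints 30000: the global default applies
        System.out.println(effectiveTimeout(5_000, 30_000));   // prints 5000: the query-level value wins
    }
}
```

Keeping the default at 0 preserves backward compatibility, while any query path (cache.query(), thin clients, JDBC) that resolves its timeout this way would pick up a configured global default automatically.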
> > >
> > > What do you think?
> > >
> > > вт, 23 апр. 2019 г. в 03:01, Saikat Maitra :
> > > >
> > > > Hi Ivan,
> > > >
> > > > Thank you for your email. My understanding from the jira issue was
> > > > that it would be a cache-level configuration for the default query
> > > > timeout.
> > > >
> > > > I need more info on the usage of this config property. If it is
> > > > shared in this jira issue I can make the changes, or if there is a
> > > > separate jira issue I can assign it to myself.
> > > >
> > > >
> > > > Regards,
> > > > Saikat
> > > >
> > > > On Mon, Apr 22, 2019 at 5:31 AM Павлухин Иван 
> > > wrote:
> > > >
> > > > > Hi Saikat,
> > > > >
> > > > > I see that a configuration property is added in PR but I do not see
> > > > > how the property is used. Was it done intentionally?
> > > > >
> > > > > Also, we need to decide whether such a timeout should be
> > > > > configured per cache or for all caches in one place. For example,
> > > > > we already have TransactionConfiguration#setDefaultTxTimeout,
> > > > > which is a global one.
> > > > >
> > > > > Share your thoughts.
> > > > >
> > > > > вс, 21 апр. 2019 г. в 00:38, Saikat Maitra  > >:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I have raised a PR for the below issue.
> > > > > >
> > > > > > IGNITE-7285 Add default query timeout
> > > > > >
> > > > > > PR - https://github.com/apache/ignite/pull/6490
> > > > > >
> > > > > > Please take a look and share feedback.
> > > > > >
> > > > > > Regards,
> > > > > > Saikat
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best regards,
> > > > > Ivan Pavlukhin
> > > > >
> > >
> > >
> > >
> > > --
> > > Best regards,
> > > Ivan Pavlukhin
> > >
> >
> >
> > --
> > Alexey Kuznetsov
> >



-- 
Best regards,
Ivan Pavlukhin


[MTCGA]: new failures in builds [3703685] needs to be handled

2019-04-29 Thread dpavlov . tasks
Hi Igniters,

 I've detected a new issue on TeamCity to be handled. You are more than
welcome to help.

 If your changes can lead to this failure(s): we're grateful that you
volunteered to contribute to this project, but things change and you may no
longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue
and fix the test failures, or step down so that some committer may revert
your commit.

 *New stable failure of a flaky test in master-nightly 
GridVersionSelfTest.testVersions 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=4054559991137027765&branch=%3Cdefault%3E&tab=testDetails
 Changes that may have led to the failure were made by 
 - slava koptilin  
https://ci.ignite.apache.org/viewModification.html?modId=882094
 - ilya kasnacheev  
https://ci.ignite.apache.org/viewModification.html?modId=882104
 - ipavlukhin  
https://ci.ignite.apache.org/viewModification.html?modId=882907
 - denis garus  
https://ci.ignite.apache.org/viewModification.html?modId=882869
 - edshanggg  
https://ci.ignite.apache.org/viewModification.html?modId=882213
 - andrey v. mashenkov  
https://ci.ignite.apache.org/viewModification.html?modId=882903
 - slava koptilin  
https://ci.ignite.apache.org/viewModification.html?modId=882096

 - Here's a reminder of what contributors were agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 10:46:35 29-04-2019