[jira] [Created] (IGNITE-13140) Incorrect example in Pull Request checklist: comma after ticket name in commit message

2020-06-09 Thread Andrey N. Gura (Jira)
Andrey N. Gura created IGNITE-13140:
---

 Summary: Incorrect example in Pull Request checklist: comma after 
ticket name in commit message
 Key: IGNITE-13140
 URL: https://issues.apache.org/jira/browse/IGNITE-13140
 Project: Ignite
  Issue Type: Bug
Reporter: Andrey N. Gura
Assignee: Andrey N. Gura


Historically, the commit message pattern has always been "IGNITE- Description" (without 
a colon after the ticket name). This can be observed in the git commit history. Also, the 
description of the contribution process has never contained such a requirement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13139) exception when closing Ignite at program end

2020-06-09 Thread Tomasz Grygo (Jira)
Tomasz Grygo created IGNITE-13139:
-

 Summary: exception when closing Ignite at program end
 Key: IGNITE-13139
 URL: https://issues.apache.org/jira/browse/IGNITE-13139
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.8.1
 Environment: Java 1.8.0_231
Windows 10


Reporter: Tomasz Grygo
 Attachments: ignite.config.xml

An exception occurs when closing Ignite at program end using
ignite.close();

2020-06-09 15:07:44,102 [tcp-disco-srvr-[:47500]-#3] [ERROR] - Failed to accept TCP connection.
java.net.SocketException: socket closed
at java.net.DualStackPlainSocketImpl.accept0(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(DualStackPlainSocketImpl.java:131)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:199)
at java.net.ServerSocket.implAccept(ServerSocket.java:545)
at java.net.ServerSocket.accept(ServerSocket.java:513)
at org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:6353)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServerThread.body(ServerImpl.java:6276)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:61)
[15:08:30] Ignite node stopped OK [uptime=00:03:50.796]
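
For completeness, a minimal sketch of the shutdown pattern that triggers the 
logged error (assuming the attached ignite.config.xml is used as the node 
configuration; the class name is arbitrary):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class CloseAtExit {
    public static void main(String[] args) {
        // Start a node from the attached Spring XML configuration.
        Ignite ignite = Ignition.start("ignite.config.xml");

        // ... regular cache work ...

        // Stopping the node at program end: the discovery acceptor thread logs
        // "Failed to accept TCP connection" while its server socket is being closed.
        ignite.close();
    }
}
{code}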




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[MTCGA]: new failures in builds [5364374] needs to be handled

2020-06-09 Thread dpavlov . tasks
Hi Igniters,

 I've detected a new issue on TeamCity that needs to be handled. You are more than 
welcome to help.

 If your changes could have led to this failure(s): we're grateful that you 
volunteered to contribute to this project, but things change and you 
may no longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue and fix 
the test failures, or step down so that a committer may revert your commit.

 *New test failure in master 
IgniteCacheQueryH2IndexingLeakTest.testLeaksInIgniteH2IndexingOnTerminatedThread
 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-7378154946319099847=%3Cdefault%3E=testDetails
 Changes that may have led to the failure were made by 
 - ilya kasnacheev  
https://ci.ignite.apache.org/viewModification.html?modId=902687
 - pavel tupitsyn  
https://ci.ignite.apache.org/viewModification.html?modId=902683
 - nikita amelchev  
https://ci.ignite.apache.org/viewModification.html?modId=902690

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 23:01:31 09-06-2020 


Not able to connect multiple slaves to Master

2020-06-09 Thread imran kazi
Hi Team,
I am deploying an Ignite clustered application on distributed servers.

This works fine for me locally; I am able to add many nodes.

But I am deploying this on AWS, where only limited ports are open between these
servers. I have already opened
47500-47520, 47100-47120, 48100-48120, 49500-49520, 48500-48520.

Now for this scenario: when I start the master on one server and 1 slave
on another server, it gets connected. But a second slave trying to start
from another server does not start and does not get connected to the master. The
second slave gets connected only when I stop the 1st slave.

So at any one time I am able to add only one slave.
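
For reference, here is roughly how I configure the discovery and communication
port ranges in code (a simplified sketch; the host names are placeholders and
the port ranges mirror the ones I opened above):

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class NodeStartup {
    public static void main(String[] args) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        // Every server is listed with the full discovery port range that is
        // open between the hosts ("master-host"/"slave-host-N" are placeholders).
        ipFinder.setAddresses(Arrays.asList(
            "master-host:47500..47520",
            "slave-host-1:47500..47520",
            "slave-host-2:47500..47520"));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setLocalPort(47500);
        discoSpi.setLocalPortRange(20);
        discoSpi.setIpFinder(ipFinder);

        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setLocalPort(47100);
        commSpi.setLocalPortRange(20);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);
        cfg.setCommunicationSpi(commSpi);

        Ignite ignite = Ignition.start(cfg); // each node is started this way
    }
}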


Can you please help me?

Thanks & Regards,
Imran


IGNITE-13133: Add integration with QuerydslPredicateExecutor for spring-data integrations

2020-06-09 Thread Sergey Moldachev
Hello Igniters,

I've created the PR for this ticket with the related changes in
ignite-spring-data-2.0 and ignite-spring-data-2.2; I also added some
examples to ignite-examples.
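
To give a rough idea of what the integration enables, a repository could look
roughly like this (an illustrative sketch only: the exact package names,
annotations and the Querydsl-generated QPerson metamodel depend on the PR and
on the application's domain model):

import org.apache.ignite.springdata22.repository.IgniteRepository;
import org.apache.ignite.springdata22.repository.config.RepositoryConfig;
import org.springframework.data.querydsl.QuerydslPredicateExecutor;

@RepositoryConfig(cacheName = "PersonCache")
public interface PersonRepository
    extends IgniteRepository<Person, Long>, QuerydslPredicateExecutor<Person> {
    // Querydsl predicates can then be used directly, e.g.:
    // Iterable<Person> adults = personRepository.findAll(QPerson.person.age.goe(18));
}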

I didn't find a maintainer of this component. Who can review my PR?

With best regards,
Sergey.


Re: Various shutdown guaranties

2020-06-09 Thread Alexei Scherbakov
+1, this is exactly what I want.

I'm fine with either IMMEDIATE or DEFAULT.

Tue, Jun 9, 2020 at 19:41, Ivan Rakov :

> Vlad,
>
> +1, that's what I mean.
> We don't need either null or a dedicated USE_STATIC_CONFIGURATION, as long as
> the user is able to retrieve the current shutdown policy and apply the one
> he needs.
> My only requirement is that ignite.cluster().getShutdownPolicy() should
> return the statically configured value {@link
> IgniteConfiguration#shutdownPolicy} in case no override has been specified.
> So, the static configuration will be applied only on cluster start, like it
> currently works for SQL schemas.
>
> On Tue, Jun 9, 2020 at 7:09 PM V.Pyatkov  wrote:
>
> > Hi,
> >
> > ignite.cluster().setShutdownPolicy(null); // Clear dynamic value and
> switch
> > to statically configured.
> >
> > I do not understand why we need it. If the user wants to change the
> > configuration to any other value, he sets it explicitly.
> > We can add a warning on start when the static option does not match the
> > dynamic one (the dynamic one is always preferred once it has been set).
> >
> > shutdownPolicy=IMMEDIATE|GRACEFUL
> >
> > Looks better than DEFAULT and WAIT_FOR_BACKUP.
> >
> > In general, I think job cancellation needs to be added to this policies'
> > enumeration.
> > But we can do it in the future.
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> >
>


-- 

Best regards,
Alexei Scherbakov


Re: Various shutdown guaranties

2020-06-09 Thread Ivan Rakov
Vlad,

+1, that's what I mean.
We don't need either null or a dedicated USE_STATIC_CONFIGURATION, as long as
the user is able to retrieve the current shutdown policy and apply the one
he needs.
My only requirement is that ignite.cluster().getShutdownPolicy() should
return the statically configured value {@link
IgniteConfiguration#shutdownPolicy} in case no override has been specified.
So, the static configuration will be applied only on cluster start, like it
currently works for SQL schemas.

On Tue, Jun 9, 2020 at 7:09 PM V.Pyatkov  wrote:

> Hi,
>
> ignite.cluster().setShutdownPolicy(null); // Clear dynamic value and switch
> to statically configured.
>
> I do not understand why we need it. If the user wants to change the configuration
> to any other value, he sets it explicitly.
> We can add a warning on start when the static option does not match the dynamic
> one (the dynamic one is always preferred once it has been set).
>
> shutdownPolicy=IMMEDIATE|GRACEFUL
>
> Looks better than DEFAULT and WAIT_FOR_BACKUP.
>
> In general, I think job cancellation needs to be added to this policies'
> enumeration.
> But we can do it in the future.
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: Apache Ignite 2.9.0 RELEASE [Time, Scope, Manager]

2020-06-09 Thread Ivan Rakov
Hi,

Indeed, the tracing feature is almost ready. Discovery, communication and
transactions tracing will be introduced, as well as an option to configure
tracing in runtime. Right now we are working on final performance
optimizations, but it's very likely that we'll complete this activity
before the code freeze date.
Let's include tracing to the 2.9 release scope.

More info:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-48%3A+Tracing
https://issues.apache.org/jira/browse/IGNITE-13060

--
Best Regards,
Ivan Rakov

On Sat, Jun 6, 2020 at 4:30 PM Denis Magda  wrote:

> Hi folks,
>
> The timelines proposed by Alex Plekhanov sound reasonable to me. I'd like
> only to hear inputs of @Ivan Rakov , who is about to
> finish with the tracing support, and @Ivan Bessonov
> , who is fixing a serious limitation for K8
> deployments [1]. Most likely, both features will be ready by the code
> freeze date (July 10), but the guys should know it better.
>
> [1]
> http://apache-ignite-developers.2346864.n4.nabble.com/DISCUSSION-New-Ignite-settings-for-IGNITE-12438-and-IGNITE-13013-td47586.html
>
> -
> Denis
>
>
> On Wed, Jun 3, 2020 at 4:45 AM Alex Plehanov 
> wrote:
>
>> Hello Igniters,
>>
>> AI 2.8.1 is finally released, and as we discussed here [1] it's time to
>> start the discussion about the 2.9 release.
>>
>> I want to propose myself to be the release manager of the 2.9 release.
>>
>> As for the release timing, I agree with Maxim that we should deliver
>> features as frequently as possible. If some feature doesn't fit into the
>> release dates, we should rather include it in the next release and schedule
>> the next release earlier than postpone the current release.
>>
>> I propose the following dates for 2.9 release:
>>
>> Scope Freeze: June 26, 2020
>> Code Freeze: July 10, 2020
>> Voting Date: July 31, 2020
>> Release Date: August 7, 2020
>>
>> WDYT?
>>
>> [1] :
>>
>> http://apache-ignite-developers.2346864.n4.nabble.com/Ignite-Releases-Plan-td47360.html#a47575
>>
>


Re: Partitioned mode issue

2020-06-09 Thread Ilya Kasnacheev
Hello!

Please consider moving to user list.

If you have backups configured, you should not lose information when a server
is down. Otherwise, it's hard to say what's going on. Do you have a
reproducer?
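
For reference, a minimal sketch of a partitioned cache configured with one
backup copy, which is what protects the data when one of the two server nodes
goes down (the cache name and the no-arg node start are arbitrary):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupCacheExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> cacheCfg =
                new CacheConfiguration<>("myCache");
            cacheCfg.setCacheMode(CacheMode.PARTITIONED);
            // One backup per partition: with two server nodes each partition
            // has a copy on both of them, so stopping one node loses no data.
            cacheCfg.setBackups(1);

            ignite.getOrCreateCache(cacheCfg).put(1, "value");
        }
    }
}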

Regards.
-- 
Ilya Kasnacheev


Tue, Jun 9, 2020 at 13:55, Toni Mateu Sanchez <
antonio.ma...@globalia-corp.com>:

> I have this scenario:
>
> 1 cluster with 2 server nodes and 4 caches configured in partitioned
> mode.
>
> With these settings, if I try to recover the data with a client node I lose
> some information, but if I have the same scenario with just 1 server node, I
> recover all the data.
>
> Why? Is something misconfigured?
>
> Thanks in advance.
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: Various shutdown guaranties

2020-06-09 Thread V.Pyatkov
Hi,

ignite.cluster().setShutdownPolicy(null); // Clear dynamic value and switch
to statically configured.

I do not understand why we need it. If the user wants to change the configuration
to any other value, he sets it explicitly.
We can add a warning on start when the static option does not match the dynamic
one (the dynamic one is always preferred once it has been set).

shutdownPolicy=IMMEDIATE|GRACEFUL

Looks better than DEFAULT and WAIT_FOR_BACKUP.

In general, I think job cancellation needs to be added to this policies'
enumeration.
But we can do it in the future.



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: [DISCUSSION] Ignite integration testing framework.

2020-06-09 Thread Max A. Shonichev

Greetings, Nikolay,

First of all, thank you for your great effort in preparing a PoC of 
integration testing for the Ignite community.


It’s a shame Ignite did not have at least some such tests yet; however, 
GridGain, as a major contributor to Apache Ignite, has had a profound 
collection of in-house tools for integration and performance 
testing for years already, and while we have been slowly considering sharing our 
expertise with the community, your initiative makes us drive that 
process a bit faster, thanks a lot!


I reviewed your PoC and want to share a little about what we do on our 
part, why and how; I hope it will help the community take the proper course.


First I’ll give a brief overview of what decisions we made and what we 
have in our private code base, next I’ll describe what we have already 
donated to the public and what we plan to publish next, and then I’ll compare 
both approaches, highlighting deficiencies in order to spur public 
discussion on the matter.


It might seem strange to use Python to run Bash to run Java applications, 
because that introduces the IT industry’s ‘best of breed’ – the Python 
dependency hell – into the Java application code base. The only stranger 
decision one could make is to use Maven to run Docker to run Bash to run 
Python to run Bash to run Java, but desperate times call for desperate 
measures, I guess.


There are Java-based solutions for integration testing, e.g. 
Testcontainers [1], Arquillian [2], etc., and they might suit the 
Ignite community CI pipelines by themselves. But we also wanted to run 
performance tests and benchmarks, like the dreaded PME benchmark, and 
this is solved by a totally different set of tools in the Java world, e.g. 
JMeter [3], OpenJMH [4], Gatling [5], etc.


Speaking specifically about benchmarking, the Apache Ignite community 
already has Yardstick [6], and there’s nothing wrong with writing a PME 
benchmark using Yardstick, but we also wanted to be able to run 
scenarios like this:

- put load X on an Ignite database;
- perform a set of operations Y to check how Ignite copes with 
operations under load.


And yes, we also wanted the applications under test to be deployed ‘like in 
production’, e.g. distributed over a set of hosts. This raises questions 
about provisioning and node affinity which I’ll cover in detail later.


So we decided to put in a little effort to build a simple tool to cover 
different integration and performance scenarios, and our QA lab’s first 
attempt was PoC-Tester [7], currently open source except for the 
reporting web UI. It’s a quite simple-to-use, 95% Java-based tool 
targeted at the pre-release QA stage.


It covers production-like deployment and running scenarios over a 
single database instance. PoC-Tester scenarios consist of a sequence of 
tasks running sequentially or in parallel. After all tasks complete, or 
at any time during a test, the user can run a log collection task; the logs are 
checked for exceptions, and a summary of the found issues and of the task 
ops/latency statistics is generated at the end of the scenario. One of the 
main PoC-Tester features is its fire-and-forget approach to task 
management. That is, you can deploy a grid and leave it running for weeks, 
periodically firing some tasks at it.


During the earliest stages of PoC-Tester development it became quite clear 
that Java application development is a tedious process and the architecture 
decisions you take during development are slow and hard to change.

For example, scenarios like this:
- deploy two instances of GridGain with master-slave data replication 
configured;
- put a load on the master;
- perform checks on the slave,

or like this:
- preload 1 TB of data, using your favorite tool of choice, into an 
Apache Ignite of version X;
- run a set of functional tests running Apache Ignite version Y over the 
preloaded data,

do not fit well into the PoC-Tester workflow.

So, this is why we decided to use Python as the generic scripting language 
of choice.


Pros:
- quicker prototyping and development cycles
- easier to find a DevOps/QA engineer with Python skills than one with 
Java skills
- used extensively all over the world for DevOps/CI pipelines, and thus 
it has a rich set of libraries for all possible integration use cases.


Cons:
- nightmare with dependencies; better to stick to specific 
language/library versions.


Comparing alternatives for a Python-based testing framework, we 
considered the following requirements, somewhat similar to what you’ve 
mentioned for Confluent [8] previously:

- should be able to run locally or distributed (bare metal or in the cloud)
- should have built-in deployment facilities for applications under test
- should separate test configuration and test code
-- be able to easily reconfigure tests by simple configuration changes
-- be able to easily scale the test environment by simple configuration changes
-- be able to perform regression testing by simply switching the artifacts 
under test via configuration
-- be able to run tests with a different JDK version by simple 

Partitioned mode issue

2020-06-09 Thread Toni Mateu Sanchez
I have this scenario:

1 cluster with 2 server nodes and 4 caches configured in partitioned mode.

With these settings, if I try to recover the data with a client node I lose
some information, but if I have the same scenario with just 1 server node, I
recover all the data.

Why? Is something misconfigured?

Thanks in advance.



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: Various shutdown guaranties

2020-06-09 Thread Ivan Rakov
Alex,

Also shutdown policy must be always consistent on the grid or unintentional
> data loss is possible if two nodes are stopping simultaneously with
> different policies.

 Totally agree.

Let's use shutdownPolicy=DEFAULT|GRACEFUL, as was proposed by me earlier.

 I'm ok with GRACEFUL instead of WAIT_FOR_BACKUPS.

5. Let's keep a static property for simplifying setting of initial
> behavior.
> In most cases the policy will never be changed during grid's lifetime.
> No need for an explicit call to API on grid start.
> A joining node should check a local configuration value to match the grid.
> If a dynamic value is already present in a metastore, it should override
> static value with a warning.

To sum it up:
- ShutdownPolicy can be set with static configuration
(IgniteConfiguration#setShutdownPolicy), on join we validate that
statically configured policies on different server nodes are the same
- It's possible to override statically configured value by adding
distributed metastorage value, which can be done by
calling ignite.cluster().setShutdownPolicy(plc) or control.sh method
- Dynamic property is persisted

Generally, I don't mind if we have both dynamic and static configuration
properties. Necessity to call ignite.cluster().setShutdownPolicy(plc); on
every new cluster creation is a usability issue itself.
What bothers me here are the possible conflicts between static and dynamic
configuration. User may be surprised if he has shutdown policy X in
IgniteConfiguration, but the cluster behaves according to policy Y (because
several months ago another admin had called
IgniteCluster#setShutdownPolicy).
We can handle it by adding a separate enum field to the shutdown policy:

> public enum ShutdownPolicy {
>   /* Default value of dynamic shutdown policy property. If it's set, the
> shutdown policy is resolved according to value of static {@link
> IgniteConfiguration#shutdownPolicy} configuration parameter. */
>   USE_STATIC_CONFIGURATION,
>
>   /* Node leaves the cluster even if it's the last owner of some
> partitions. Only partitions of caches with backups > 0 are taken into
> account. */
>   IMMEDIATE,
>
>   /* Shutdown is blocked until node is safe to leave without the data
> loss. */
>   GRACEFUL
> }
>
This way:
1) The user may easily understand whether the static parameter is overridden by
the dynamic one. If ignite.cluster().getShutdownPolicy() returns anything except
USE_STATIC_CONFIGURATION, the behavior is overridden.
2) The user may clear a previous override by calling
ignite.cluster().setShutdownPolicy(USE_STATIC_CONFIGURATION). After that,
the behavior will be resolved based on IgniteConfiguration#shutdownPolicy again.
If we agree on this mechanism, I propose to use the IMMEDIATE name instead of
DEFAULT for the non-safe policy in order not to confuse the user.
Meanwhile, static configuration will accept the same enum, but
USE_STATIC_CONFIGURATION will be restricted:

> public class IgniteConfiguration {
>   public static final ShutdownPolicy DFLT_STATIC_SHUTDOWN_POLICY =
> IMMEDIATE;
>   private ShutdownPolicy shutdownPolicy = DFLT_STATIC_SHUTDOWN_POLICY;
>   ...
>   public void setShutdownPolicy(ShutdownPolicy shutdownPlc) {
> if (shutdownPlc ==  USE_STATIC_CONFIGURATION)
>   throw new IllegalArgumentException("USE_STATIC_CONFIGURATION can
> only be passed as dynamic property value via
> ignite.cluster().setShutdownPolicy");
> ...
>   }
> ...
> }
>

What do you think?
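
To visualize the proposal, here is a rough sketch of how the override/reset flow
would look from application code with the names discussed above (all of this is
proposed API from this thread, not something that exists today):

// Proposed API only: ShutdownPolicy and the cluster-level getter/setter are the
// names discussed in this thread, not released Ignite API.
Ignite ignite = Ignition.ignite();

// Before a rolling restart: override whatever is statically configured.
ignite.cluster().setShutdownPolicy(ShutdownPolicy.GRACEFUL);

// ... restart nodes one by one; each stop blocks until backups are safe ...

// Check whether the static value is still in effect or has been overridden.
if (ignite.cluster().getShutdownPolicy() != ShutdownPolicy.USE_STATIC_CONFIGURATION) {
    // The statically configured IgniteConfiguration#shutdownPolicy is overridden.
}

// Clear the override and fall back to the static configuration again.
ignite.cluster().setShutdownPolicy(ShutdownPolicy.USE_STATIC_CONFIGURATION);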


On Tue, Jun 9, 2020 at 11:46 AM Alexei Scherbakov <
alexey.scherbak...@gmail.com> wrote:

> Ivan Rakov,
>
> Your proposal overall looks good to me. My comments:
>
> 1. I would avoid adding such a method, because it will be impossible to
> change it in the future if some more shutdown policies will be introduced
> later.
> Also shutdown policy must be always consistent on the grid or unintentional
> data loss is possible if two nodes are stopping simultaneously with
> different policies.
>
> This behavior can be achieved by changing policy globally when stopping a
> node:
> ignite.cluster().setShutdownPolicy(DEFAULT);
> ignore.stop();
>
> 2. defaultShutdownPolicy with DEFAULT value is a mess. WAIT_FOR_BACKUPS is
> not very clear either.
> Let's use shutdownPolicy=DEFAULT|GRACEFUL, as was proposed by me earlier.
>
> 3. OK
>
> 4. OK
>
> 5. Let's keep a static property for simplifying setting of initial
> behavior.
> In most cases the policy will never be changed during grid's lifetime.
> No need for an explicit call to API on grid start.
> A joining node should check a local configuration value to match the grid.
> If a dynamic value is already present in a metastore, it should override
> static value with a warning.
>
>
>
>
Mon, Jun 8, 2020 at 19:06, Ivan Rakov :
>
> > Vlad, thanks for starting this discussion.
> >
> > I'll try to clarify the motivation for this change as I see it.
> > In general, Ignite clusters are vulnerable to the data loss. Of course,
> we
> > have configurable PartitionLossPolicy, which allows to handle data loss
> > 

[jira] [Created] (IGNITE-13138) Add REST tests for the new cluster state change API

2020-06-09 Thread Sergey Antonov (Jira)
Sergey Antonov created IGNITE-13138:
---

 Summary: Add REST tests for the new cluster state change API
 Key: IGNITE-13138
 URL: https://issues.apache.org/jira/browse/IGNITE-13138
 Project: Ignite
  Issue Type: Improvement
Reporter: Sergey Antonov
Assignee: Sergey Antonov
 Fix For: 2.9


I didn't find tests for the new REST commands introduced in IGNITE-12225. We 
must add them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Various shutdown guaranties

2020-06-09 Thread Ivan Rakov
Alex,

I'm not sure there is a problem at all, because user can always query the
> current policy, and a javadoc can describe such behavior clearly.

What will the query method return if the static policy is not overridden?
If we decide to avoid adding a dedicated USE_STATIC_CONFIGURATION value,
the semantics can be as follows:

> // Returns shutdown policy that is currently used by the cluster
> // If ignite.cluster().setShutdownPolicy() was never called, returns value
> from static configuration {@link IgniteConfiguration#shutdownPolicy}, which
> is consistent across all server nodes
> // If shutdown policy was overridden by user via
> ignite.cluster().setShutdownPolicy(), returns corresponding value

ignite.cluster().getShutdownPolicy();
>

It seems there will be no need to reset the distributed meta storage value.
The user can always check which policy is used right now (regardless of whether
it has been overridden) and just set the policy that he needs if he wants
to change it.
The behavior is simple; the only magic is mapping a missing value in the
distributed meta storage to the value from IgniteConfiguration#shutdownPolicy.
Can we agree on this?

On Tue, Jun 9, 2020 at 3:48 PM Alexei Scherbakov <
alexey.scherbak...@gmail.com> wrote:

> Ivan,
>
> Using an additional enum on public API for resetting dynamic value looks a
> little bit dirty for me.
> I'm not sure there is a problem at all, because user can always query the
> current policy, and a javadoc can describe such behavior clearly.
> If you really insist maybe use null to reset policy value:
>
> ignite.cluster().setShutdownPolicy(null); // Clear dynamic value and switch
> to statically configured.
>
> On top of this, we already have a bunch of over properties, which are set
> statically and can be changed dynamically later,  for example [1]
> I think all such properties should behave the same way as shutdown policy
> and we need a ticket for this.
> In such a case we probably should go with something like
>
> ignite.cluster().resetDynamicProperValuey(propName); // Resets a property
> to statically configured default value.
>
> Right now I would prefer for shutdown policy behave as other dynamic
> properties to make things consistent and fix them all later to be
> resettable to static configuration value.
>
> [1]
> org.apache.ignite.IgniteCluster#setTxTimeoutOnPartitionMapExchange(timeout)
>
>
>
Tue, Jun 9, 2020 at 15:12, Ivan Rakov :
>
> > Something went wrong with gmail formatting. Resending my reply.
> >
> > Alex,
> >
> > Also shutdown policy must be always consistent on the grid or
> unintentional
> > > data loss is possible if two nodes are stopping simultaneously with
> > > different policies.
> >
> >  Totally agree.
> >
> > Let's use shutdownPolicy=DEFAULT|GRACEFUL, as was proposed by me earlier.
> >
> >  I'm ok with GRACEFUL instead of WAIT_FOR_BACKUPS.
> >
> > 5. Let's keep a static property for simplifying setting of initial
> > > behavior.
> > > In most cases the policy will never be changed during grid's lifetime.
> > > No need for an explicit call to API on grid start.
> > > A joining node should check a local configuration value to match the
> > grid.
> > > If a dynamic value is already present in a metastore, it should
> override
> > > static value with a warning.
> >
> > To sum it up:
> > - ShutdownPolicy can be set with static configuration
> > (IgniteConfiguration#setShutdownPolicy), on join we validate that
> > statically configured policies on different server nodes are the same
> > - It's possible to override statically configured value by adding
> > distributed metastorage value, which can be done by
> > calling ignite.cluster().setShutdownPolicy(plc) or control.sh method
> > - Dynamic property is persisted
> >
> > Generally, I don't mind if we have both dynamic and static configuration
> > properties. Necessity to call ignite.cluster().setShutdownPolicy(plc); on
> > every new cluster creation is a usability issue itself.
> > What bothers me here are the possible conflicts between static and
> dynamic
> > configuration. User may be surprised if he has shutdown policy X in
> > IgniteConfiguration, but the cluster behaves according to policy Y
> (because
> > several months ago another admin had called
> > IgniteCluster#setShutdownPolicy).
> > We can handle it by adding a separate enum field to the shutdown policy:
> >
> > > public enum ShutdownPolicy {
> > >   /* Default value of dynamic shutdown policy property. If it's set,
> the
> > > shutdown policy is resolved according to value of static {@link
> > > IgniteConfiguration#shutdownPolicy} configuration parameter. */
> > >   USE_STATIC_CONFIGURATION,
> > >
> > >   /* Node leaves the cluster even if it's the last owner of some
> > > partitions. Only partitions of caches with backups > 0 are taken into
> > > account. */
> > >   IMMEDIATE,
> > >
> > >   /* Shutdown is blocked until node is safe to leave without the data
> > > loss. */
> > >   GRACEFUL
> > > }
> > >
> > This way:
> > 1) User may 

[jira] [Created] (IGNITE-13137) WAL reservation may fail even though the required segment is available

2020-06-09 Thread Vyacheslav Koptilin (Jira)
Vyacheslav Koptilin created IGNITE-13137:


 Summary: WAL reservation may fail even though the required 
segment is available
 Key: IGNITE-13137
 URL: https://issues.apache.org/jira/browse/IGNITE-13137
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 2.8
Reporter: Vyacheslav Koptilin


It seems there is a race in {{FileWriteAheadLogManager}} that may lead to the 
inability to reserve a WAL segment. 
Let's consider the following scenario:
 - a WAL record is logged that requires a rollover of the current segment;
 - the archiver is moving the WAL file to the archive folder;
 - we try to reserve this segment.


{code:java}
@Override public boolean reserve(WALPointer start) {
    ...
    segmentAware.reserve(((FileWALPointer)start).index());

    if (!hasIndex(((FileWALPointer)start).index())) {  // <-- hasIndex returns false
        segmentAware.release(((FileWALPointer)start).index());

        return false;
    }
    ...
}

private boolean hasIndex(long absIdx) {
    ...
    boolean inArchive = new File(walArchiveDir, segmentName).exists() ||
        new File(walArchiveDir, zipSegmentName).exists();

    if (inArchive)  // <-- At this point, the required WAL segment is not moved yet, so inArchive == false
        return true;

    if (absIdx <= lastArchivedIndex())  // <-- lastArchivedIndex() scans the archive directory and finds a new WAL segment, and absIdx == lastArchivedIndex!
        return false;

    FileWriteHandle cur = currHnd;

    return cur != null && cur.getSegmentId() >= absIdx;
}
{code}

Besides this race, it seems to me that the behavior of WAL reservation should be 
improved for the case when the required segment is already reserved/locked. In 
that particular case, we don't need to check the WAL archive directory at all.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13136) Calcite integration. Improve join predicate testing.

2020-06-09 Thread Roman Kondakov (Jira)
Roman Kondakov created IGNITE-13136:
---

 Summary: Calcite integration. Improve join predicate testing.
 Key: IGNITE-13136
 URL: https://issues.apache.org/jira/browse/IGNITE-13136
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Roman Kondakov


Currently we have to merge joining rows in order to test a join predicate:
{code:java}
Row row = handler.concat(left, rightMaterialized.get(rightIdx++));

if (!cond.test(row))
continue;
{code}
This results in unconditionally building a joined row even if it will not be 
emitted downstream. To avoid extra GC pressure we need to test the 
join predicate before joining the rows:
{code:java}
if (!cond.test(left, right))
continue;

Row row = handler.concat(left, right);
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Various shutdown guaranties

2020-06-09 Thread Alexei Scherbakov
Ivan,

Using an additional enum value on the public API for resetting the dynamic value
looks a little bit dirty to me.
I'm not sure there is a problem at all, because the user can always query the
current policy, and a javadoc can describe such behavior clearly.
If you really insist, maybe use null to reset the policy value:

ignite.cluster().setShutdownPolicy(null); // Clear dynamic value and switch
to statically configured.

On top of this, we already have a bunch of other properties which are set
statically and can be changed dynamically later, for example [1].
I think all such properties should behave the same way as the shutdown policy,
and we need a ticket for this.
In such a case we should probably go with something like

ignite.cluster().resetDynamicPropertyValue(propName); // Resets a property
to its statically configured default value.

Right now I would prefer the shutdown policy to behave like the other dynamic
properties to make things consistent, and to fix them all later to be
resettable to the static configuration value.

[1] org.apache.ignite.IgniteCluster#setTxTimeoutOnPartitionMapExchange(timeout)



Tue, Jun 9, 2020 at 15:12, Ivan Rakov :

> Something went wrong with gmail formatting. Resending my reply.
>
> Alex,
>
> Also shutdown policy must be always consistent on the grid or unintentional
> > data loss is possible if two nodes are stopping simultaneously with
> > different policies.
>
>  Totally agree.
>
> Let's use shutdownPolicy=DEFAULT|GRACEFUL, as was proposed by me earlier.
>
>  I'm ok with GRACEFUL instead of WAIT_FOR_BACKUPS.
>
> 5. Let's keep a static property for simplifying setting of initial
> > behavior.
> > In most cases the policy will never be changed during grid's lifetime.
> > No need for an explicit call to API on grid start.
> > A joining node should check a local configuration value to match the
> grid.
> > If a dynamic value is already present in a metastore, it should override
> > static value with a warning.
>
> To sum it up:
> - ShutdownPolicy can be set with static configuration
> (IgniteConfiguration#setShutdownPolicy), on join we validate that
> statically configured policies on different server nodes are the same
> - It's possible to override statically configured value by adding
> distributed metastorage value, which can be done by
> calling ignite.cluster().setShutdownPolicy(plc) or control.sh method
> - Dynamic property is persisted
>
> Generally, I don't mind if we have both dynamic and static configuration
> properties. Necessity to call ignite.cluster().setShutdownPolicy(plc); on
> every new cluster creation is a usability issue itself.
> What bothers me here are the possible conflicts between static and dynamic
> configuration. User may be surprised if he has shutdown policy X in
> IgniteConfiguration, but the cluster behaves according to policy Y (because
> several months ago another admin had called
> IgniteCluster#setShutdownPolicy).
> We can handle it by adding a separate enum field to the shutdown policy:
>
> > public enum ShutdownPolicy {
> >   /* Default value of dynamic shutdown policy property. If it's set, the
> > shutdown policy is resolved according to value of static {@link
> > IgniteConfiguration#shutdownPolicy} configuration parameter. */
> >   USE_STATIC_CONFIGURATION,
> >
> >   /* Node leaves the cluster even if it's the last owner of some
> > partitions. Only partitions of caches with backups > 0 are taken into
> > account. */
> >   IMMEDIATE,
> >
> >   /* Shutdown is blocked until node is safe to leave without the data
> > loss. */
> >   GRACEFUL
> > }
> >
> This way:
> 1) User may easily understand whether the static parameter is overridden by
> dynamic. If ignite.cluster().getShutdownPolicy() return anything except
> USE_STATIC_CONFIGURATION, behavior is overridden.
> 2) User may clear previous overriding by calling
> ignite.cluster().setShutdownPolicy(USE_STATIC_CONFIGURATION). After that,
> behavior will be resolved based in IgniteConfiguration#shutdownPolicy
> again.
> If we agree on this mechanism, I propose to use IMMEDIATE name instead of
> DEFAULT for non-safe policy in order to don't confuse user.
> Meanwhile, static configuration will accept the same enum, but
> USE_STATIC_CONFIGURATION will be restricted:
>
> > public class IgniteConfiguration {
> >   public static final ShutdownPolicy DFLT_STATIC_SHUTDOWN_POLICY =
> > IMMEDIATE;
> >   private ShutdownPolicy shutdownPolicy = DFLT_STATIC_SHUTDOWN_POLICY;
> >   ...
> >   public void setShutdownPolicy(ShutdownPolicy shutdownPlc) {
> > if (shutdownPlc ==  USE_STATIC_CONFIGURATION)
> >   throw new IllegalArgumentException("USE_STATIC_CONFIGURATION can
> > only be passed as dynamic property value via
> > ignite.cluster().setShutdownPolicy");
> > ...
> >   }
> > ...
> > }
> >
>
> What do you think?
>
> On Tue, Jun 9, 2020 at 3:09 PM Ivan Rakov  wrote:
>
> > Alex,
> >
> > Also shutdown policy must be always consistent on the grid or
> unintentional
> >> data loss is possible if two nodes are stopping 

Re: Various shutdown guaranties

2020-06-09 Thread Ivan Rakov
Something went wrong with gmail formatting. Resending my reply.

Alex,

Also shutdown policy must be always consistent on the grid or unintentional
> data loss is possible if two nodes are stopping simultaneously with
> different policies.

 Totally agree.

Let's use shutdownPolicy=DEFAULT|GRACEFUL, as was proposed by me earlier.

 I'm ok with GRACEFUL instead of WAIT_FOR_BACKUPS.

5. Let's keep a static property for simplifying setting of initial
> behavior.
> In most cases the policy will never be changed during grid's lifetime.
> No need for an explicit call to API on grid start.
> A joining node should check a local configuration value to match the grid.
> If a dynamic value is already present in a metastore, it should override
> static value with a warning.

To sum it up:
- ShutdownPolicy can be set with static configuration
(IgniteConfiguration#setShutdownPolicy), on join we validate that
statically configured policies on different server nodes are the same
- It's possible to override statically configured value by adding
distributed metastorage value, which can be done by
calling ignite.cluster().setShutdownPolicy(plc) or control.sh method
- Dynamic property is persisted

Generally, I don't mind if we have both dynamic and static configuration
properties. Necessity to call ignite.cluster().setShutdownPolicy(plc); on
every new cluster creation is a usability issue itself.
What bothers me here are the possible conflicts between static and dynamic
configuration. User may be surprised if he has shutdown policy X in
IgniteConfiguration, but the cluster behaves according to policy Y (because
several months ago another admin had called
IgniteCluster#setShutdownPolicy).
We can handle it by adding a separate enum field to the shutdown policy:

> public enum ShutdownPolicy {
>   /* Default value of dynamic shutdown policy property. If it's set, the
> shutdown policy is resolved according to value of static {@link
> IgniteConfiguration#shutdownPolicy} configuration parameter. */
>   USE_STATIC_CONFIGURATION,
>
>   /* Node leaves the cluster even if it's the last owner of some
> partitions. Only partitions of caches with backups > 0 are taken into
> account. */
>   IMMEDIATE,
>
>   /* Shutdown is blocked until node is safe to leave without the data
> loss. */
>   GRACEFUL
> }
>
This way:
1) The user may easily understand whether the static parameter is overridden by
the dynamic one. If ignite.cluster().getShutdownPolicy() returns anything except
USE_STATIC_CONFIGURATION, the behavior is overridden.
2) The user may clear a previous override by calling
ignite.cluster().setShutdownPolicy(USE_STATIC_CONFIGURATION). After that,
the behavior will be resolved based on IgniteConfiguration#shutdownPolicy again.
If we agree on this mechanism, I propose to use the IMMEDIATE name instead of
DEFAULT for the non-safe policy in order not to confuse the user.
Meanwhile, static configuration will accept the same enum, but
USE_STATIC_CONFIGURATION will be restricted:

> public class IgniteConfiguration {
>   public static final ShutdownPolicy DFLT_STATIC_SHUTDOWN_POLICY =
> IMMEDIATE;
>   private ShutdownPolicy shutdownPolicy = DFLT_STATIC_SHUTDOWN_POLICY;
>   ...
>   public void setShutdownPolicy(ShutdownPolicy shutdownPlc) {
> if (shutdownPlc ==  USE_STATIC_CONFIGURATION)
>   throw new IllegalArgumentException("USE_STATIC_CONFIGURATION can
> only be passed as dynamic property value via
> ignite.cluster().setShutdownPolicy");
> ...
>   }
> ...
> }
>

What do you think?

On Tue, Jun 9, 2020 at 3:09 PM Ivan Rakov  wrote:

> Alex,
>
> Also shutdown policy must be always consistent on the grid or unintentional
>> data loss is possible if two nodes are stopping simultaneously with
>> different policies.
>
>  Totally agree.
>
> Let's use shutdownPolicy=DEFAULT|GRACEFUL, as was proposed by me earlier.
>
>  I'm ok with GRACEFUL instead of WAIT_FOR_BACKUPS.
>
> 5. Let's keep a static property for simplifying setting of initial
>> behavior.
>> In most cases the policy will never be changed during grid's lifetime.
>> No need for an explicit call to API on grid start.
>> A joining node should check a local configuration value to match the grid.
>> If a dynamic value is already present in a metastore, it should override
>> static value with a warning.
>
> To sum it up:
> - ShutdownPolicy can be set with static configuration
> (IgniteConfiguration#setShutdownPolicy), on join we validate that
> statically configured policies on different server nodes are the same
> - It's possible to override statically configured value by adding
> distributed metastorage value, which can be done by
> calling ignite.cluster().setShutdownPolicy(plc) or control.sh method
> - Dynamic property is persisted
>
> Generally, I don't mind if we have both dynamic and static configuration
> properties. Necessity to call ignite.cluster().setShutdownPolicy(plc); on
> every new cluster creation is a usability issue itself.
> What bothers me here are the possible conflicts between static and 

Re: Various shutdown guaranties

2020-06-09 Thread Ivan Rakov
Alex,

Also shutdown policy must be always consistent on the grid or unintentional
> data loss is possible if two nodes are stopping simultaneously with
> different policies.

 Totally agree.

Let's use shutdownPolicy=DEFAULT|GRACEFUL, as was proposed by me earlier.

 I'm ok with GRACEFUL instead of WAIT_FOR_BACKUPS.

5. Let's keep a static property for simplifying setting of initial
> behavior.
> In most cases the policy will never be changed during grid's lifetime.
> No need for an explicit call to API on grid start.
> A joining node should check a local configuration value to match the grid.
> If a dynamic value is already present in a metastore, it should override
> static value with a warning.

To sum it up:
- ShutdownPolicy can be set with static configuration
(IgniteConfiguration#setShutdownPolicy), on join we validate that
statically configured policies on different server nodes are the same
- It's possible to override statically configured value by adding
distributed metastorage value, which can be done by
calling ignite.cluster().setShutdownPolicy(plc) or control.sh method
- Dynamic property is persisted

Generally, I don't mind if we have both dynamic and static configuration
properties. Necessity to call ignite.cluster().setShutdownPolicy(plc); on
every new cluster creation is a usability issue itself.
What bothers me here are the possible conflicts between static and dynamic
configuration. User may be surprised if he has shutdown policy X in
IgniteConfiguration, but the cluster behaves according to policy Y (because
several months ago another admin had called
IgniteCluster#setShutdownPolicy).
We can handle it by adding a separate enum field to the shutdown policy:

> public enum ShutdownPolicy {
>   /* Default value of dynamic shutdown policy property. If it's set, the
> shutdown policy is resolved according to value of static {@link
> IgniteConfiguration#shutdownPolicy} configuration parameter. */
>   USE_STATIC_CONFIGURATION,
>
>   /* Node leaves the cluster even if it's the last owner of some
> partitions. Only partitions of caches with backups > 0 are taken into
> account. */
>   IMMEDIATE,
>
>   /* Shutdown is blocked until node is safe to leave without the data
> loss. */
>   GRACEFUL
> }
>
This way:
1) The user may easily understand whether the static parameter is overridden by
the dynamic one. If ignite.cluster().getShutdownPolicy() returns anything except
USE_STATIC_CONFIGURATION, the behavior is overridden.
2) The user may clear a previous override by calling
ignite.cluster().setShutdownPolicy(USE_STATIC_CONFIGURATION). After that,
the behavior will be resolved based on IgniteConfiguration#shutdownPolicy again.
If we agree on this mechanism, I propose to use the IMMEDIATE name instead of
DEFAULT for the non-safe policy in order not to confuse the user.
Meanwhile, static configuration will accept the same enum, but
USE_STATIC_CONFIGURATION will be restricted:

> public class IgniteConfiguration {
>   public static final ShutdownPolicy DFLT_STATIC_SHUTDOWN_POLICY =
> IMMEDIATE;
>   private ShutdownPolicy shutdownPolicy = DFLT_STATIC_SHUTDOWN_POLICY;
>   ...
>   public void setShutdownPolicy(ShutdownPolicy shutdownPlc) {
> if (shutdownPlc ==  USE_STATIC_CONFIGURATION)
>   throw new IllegalArgumentException("USE_STATIC_CONFIGURATION can
> only be passed as dynamic property value via
> ignite.cluster().setShutdownPolicy");
> ...
>   }
> ...
> }
>

What do you think?

On Tue, Jun 9, 2020 at 11:46 AM Alexei Scherbakov <
alexey.scherbak...@gmail.com> wrote:

> Ivan Rakov,
>
> Your proposal overall looks good to me. My comments:
>
> 1. I would avoid adding such a method, because it will be impossible to
> change it in the future if some more shutdown policies will be introduced
> later.
> Also shutdown policy must be always consistent on the grid or unintentional
> data loss is possible if two nodes are stopping simultaneously with
> different policies.
>
> This behavior can be achieved by changing policy globally when stopping a
> node:
> ignite.cluster().setShutdownPolicy(DEFAULT);
> ignore.stop();
>
> 2. defaultShutdownPolicy with DEFAULT value is a mess. WAIT_FOR_BACKUPS is
> not very clear either.
> Let's use shutdownPolicy=DEFAULT|GRACEFUL, as was proposed by me earlier.
>
> 3. OK
>
> 4. OK
>
> 5. Let's keep a static property for simplifying setting of initial
> behavior.
> In most cases the policy will never be changed during grid's lifetime.
> No need for an explicit call to API on grid start.
> A joining node should check a local configuration value to match the grid.
> If a dynamic value is already present in a metastore, it should override
> static value with a warning.
>
>
>
>
> пн, 8 июн. 2020 г. в 19:06, Ivan Rakov :
>
> > Vlad, thanks for starting this discussion.
> >
> > I'll try to clarify the motivation for this change as I see it.
> > In general, Ignite clusters are vulnerable to the data loss. Of course,
> we
> > have configurable PartitionLossPolicy, which allows to handle data loss
> > safely 


Re: Can I get write permissions for Apache Wiki?

2020-06-09 Thread Ilya Kasnacheev
Hello!

What is your login for apache wiki?

Regards,
-- 
Ilya Kasnacheev


Tue, Jun 9, 2020 at 10:11, Данилов Семён :

> Hello Igniters,
>
> I’m planning to change
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood
> (because I have a pending pull request changing the binary metadata location),
> but I don't have the rights for this.
> Can someone please give me the access?
>
> Sincerely yours,
> Semyon Danilov.
>
>


Re: Various shutdown guaranties

2020-06-09 Thread Alexei Scherbakov
Ivan Rakov,

Your proposal overall looks good to me. My comments:

1. I would avoid adding such a method, because it will be impossible to
change it in the future if more shutdown policies are introduced later.
Also, the shutdown policy must always be consistent across the grid, or
unintentional data loss is possible if two nodes are stopping simultaneously
with different policies.

This behavior can be achieved by changing the policy globally when stopping a
node:
ignite.cluster().setShutdownPolicy(DEFAULT);
ignite.stop();

2. defaultShutdownPolicy with a DEFAULT value is a mess. WAIT_FOR_BACKUPS is
not very clear either.
Let's use shutdownPolicy=DEFAULT|GRACEFUL, as I proposed earlier.

3. OK

4. OK

5. Let's keep a static property to simplify setting the initial behavior.
In most cases the policy will never be changed during the grid's lifetime.
There is no need for an explicit API call on grid start.
A joining node should check that its local configuration value matches the grid.
If a dynamic value is already present in the metastore, it should override the
static value with a warning.




Mon, Jun 8, 2020 at 19:06, Ivan Rakov :

> Vlad, thanks for starting this discussion.
>
> I'll try to clarify the motivation for this change as I see it.
> In general, Ignite clusters are vulnerable to data loss. Of course, we
> have a configurable PartitionLossPolicy, which allows handling data loss
> safely and mitigating its consequences. But being able to avoid critical
> situations is always better than being able to recover from them.
>
> The most common issue from my perspective is absence of a way to perform
> rolling cluster restart safely. Scenario:
> 1. Backup count is 1
> 2. Admin wants to perform rolling restart in order to deploy new version of
> business code that uses Ignite in embedded mode
> 3. Admin shuts down first node, replaces needed binaries and returns the
> node back to the topology
> 4. Node joins the cluster successfully
> 5. Admin shuts down second node
> 6. Data loss happens: the second node was the only owner of a certain
> partition, which was being rebalanced from the second node to the first
>
> We can prevent such situations by introducing "safe shutdown by default"
> mode, which blocks stopping node while it remains the only owner for at
> least one partition. It should be applied to "common" ways of stopping
> nodes - Ignite.close() and kill .
> I think, option to be enabled or disabled in runtime should be a
> requirement for this behavior. Safe shutdown mode has weird side-effects.
> For example, admin won't be able to stop the whole cluster: stop of last
> node will be blocked, because the last node is the only present owner of
> all its partitions. Sure, kill -9 will resolve it, but it's still a
> usability issue.
>
> With the described dynamic property, the scenario changes as follows:
> 1. Admin enables "safe shutdown" mode
> 2. Admin shuts down first node, replaces needed binaries and returns the
> node back to the topology
> 3. Admin shuts down second node (with either ignite.close() or kill ),
> shutdown is blocked until the first node returns to the topology and
> completes the rebalancing process
> 4. Admin proceeds the rolling restart procedure
> 5. Admin disables "safe shutdown" mode
>
> This logic will also simplify the rolling restart scenario in K8s. A pod with
> an Ignite node won't be terminated while its termination would cause data loss.
>
> Aside from waiting for backups, the Ignition interface provides lots of options
> for performing various kinds of node stop:
> - Whether or not to cancel pending compute jobs
> - Whether or not to perform instant halt() instead of any graceful stop
> logic
> - Whether or not to wait for some timeout before halt()
> - Whether or not the stopped grid should be restarted
> All these "stop" methods provide very custom logic. I don't see a need to
> make them part of dynamic cluster-wide configuration. They still can be
> invoked directly via Java API. Later we can extract some of them to dynamic
> cluster-wide parameters of default stop if it will become necessary. That's
> why I think we should create an enum for default shutdown policy, but only
> with two options so far (we can add more later): DEFAULT and
> WAIT_FOR_BACKUPS.
> Regarding the "NORMAL" option that you propose (where the node is not
> stopped until the rebalance is finished): I don't think that we should add
> it. It doesn't ensure any strict guarantees: the data still can be lost
> with it.
>
> To sum it up, I propose:
> 1. Add a new method to Ignition interface to make it possible to stop with
> "wait for backups" logic directly via Java API, like Ignition.stop(boolean
> cancel, boolean waitForBackups)
> 2. Introduce "defaultShutdownPolicy" as a dynamic cluster configuration,
> two values are available so far: DEFAULT and WAIT_FOR_BACKUPS
> 3. This property is stored in the distributed metastorage (thus persisted),
> can be changed via Java API and ./control.sh
> 4. Behavior configured with this 

Can I get write permissions for Apache Wiki?

2020-06-09 Thread Данилов Семён
Hello Igniters,

I'm planning to change 
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood
 (because I have a pending pull request changing the binary metadata location), but 
I don't have the rights for this.
Can someone please give me access?

Sincerely yours,
Semyon Danilov.



[jira] [Created] (IGNITE-13135) CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest failed

2020-06-09 Thread Aleksey Plekhanov (Jira)
Aleksey Plekhanov created IGNITE-13135:
--

 Summary: 
CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest
 failed
 Key: IGNITE-13135
 URL: https://issues.apache.org/jira/browse/IGNITE-13135
 Project: Ignite
  Issue Type: Bug
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov


Test failed with error:
{noformat}
java.lang.AssertionError: [] 
Expected :2
Actual   :0
at 
org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:119)
at 
org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCustomMessages(CacheRegisterMetadataLocallyTest.java:230)
at 
org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:153){noformat}
The failure appeared after the fix for IGNITE-13096.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)