[MTCGA]: new failures in builds [3483478] needs to be handled

2019-04-01 Thread dpavlov . tasks
Hi Igniters,

 I've detected a new issue on TeamCity that needs to be handled. You are more than 
welcome to help.

 If your changes could have led to this failure: we're grateful that you 
volunteered to contribute to this project, but things change and you 
may no longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue and fix 
the test failures, or step down so that a committer may revert your commit. 

 *New test failure in master JdbcThinConnectionSelfTest.testInvalidEndpoint 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-20265605831223746=%3Cdefault%3E=testDetails
 Changes that may have led to the failure were made by 
 - tledkov 
https://ci.ignite.apache.org/viewModification.html?modId=879244
 - rshtykh 
https://ci.ignite.apache.org/viewModification.html?modId=879258

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions, please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 07:28:26 02-04-2019 


[jira] [Created] (IGNITE-11669) Research/test reflection based approach for creating direct buffers

2019-04-01 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-11669:
---

 Summary: Research/test reflection based approach for creating 
direct buffers
 Key: IGNITE-11669
 URL: https://issues.apache.org/jira/browse/IGNITE-11669
 Project: Ignite
  Issue Type: Sub-task
Reporter: Dmitriy Pavlov
Assignee: Dmitriy Pavlov


In the 2.7.5 discussion 
https://lists.apache.org/thread.html/e575a96bd1eb2fe323006314c15f9fcce7400d56b8ba7a5587ebe44c@%3Cdev.ignite.apache.org%3E

Ivan Pavluchin proposed a simple fix for the byte buffer creation problem:
https://lists.apache.org/thread.html/84a35e720af7a0af849685d6abfd7d80a72eab9d7513106262568afa@%3Cdev.ignite.apache.org%3E

It is necessary to test this approach (probably enforcing its use for all 
Java versions).
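
A minimal sketch of what such a reflection-based factory might look like (class, 
method and field names here are illustrative assumptions, not the actual patch):
{code:java}
import java.lang.reflect.Constructor;
import java.nio.ByteBuffer;

public class DirectBufferFactory {
    /** Package-private java.nio.DirectByteBuffer(long addr, int cap) constructor. */
    private static final Constructor<?> CTOR;

    static {
        try {
            Constructor<?> ctor = Class.forName("java.nio.DirectByteBuffer")
                .getDeclaredConstructor(long.class, int.class);

            ctor.setAccessible(true); // On newer JDKs this may require --add-opens java.base/java.nio.

            CTOR = ctor;
        }
        catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    /** Wraps the off-heap memory region [addr, addr + cap) into a direct ByteBuffer. */
    public static ByteBuffer wrap(long addr, int cap) {
        try {
            return (ByteBuffer)CTOR.newInstance(addr, cap);
        }
        catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Failed to create direct buffer", e);
        }
    }
}
{code}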



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


RE: in-memory compression

2019-04-01 Thread 999.computing
Hi Vladislav Pyatkov,

> What does this thesis mean? Your dataset is fit for columnar compression, ea. 
> repeating values and/or a timeseries-like dataset.
Columnar compression relies on repeating values. If every row has a different 
value for a particular column, columnar compression does not give an advantage 
for that column.
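
To illustrate with a toy example (not TBLignite's actual algorithm): run-length 
encoding a column only pays off when values repeat.

import java.util.ArrayList;
import java.util.List;

public class RleDemo {
    /** Encodes a column as {value, runLength} pairs. */
    static List<int[]> rle(int[] column) {
        List<int[]> runs = new ArrayList<>();

        for (int i = 0; i < column.length; ) {
            int j = i;

            while (j < column.length && column[j] == column[i])
                j++;

            runs.add(new int[] {column[i], j - i});
            i = j;
        }

        return runs;
    }
    // {5, 5, 5, 5, 7, 7} -> 2 runs (compresses well); {1, 2, 3, 4, 5, 6} -> 6 runs (no gain).
}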

> As I understand it, you overwrite the file I/O factory (TBLigniteFileIoFactory),
> but at this level only bytes are available, not data entries.
We deserialize the Ignite pages ourselves to get the data entries.

> What caused the performance drop to 31% in your test?
That test case uses a 4-CPU node with 4 clients continuously running queries. If 
a node is CPU-starved, compression cannot give a performance advantage, and it 
is better to read directly from disk instead. 

Pascal


-Original Message-
From: Vladislav Pyatkov  
Sent: Monday, 1 April 2019 10:49
To: dev@ignite.apache.org
Subject: Re: in-memory compression

Hi,

I looked at your report on your site and have some questions.

What does this thesis mean?
*Your dataset is fit for columnar compression, ea. repeating values and/or a 
timeseries-like*
*dataset.*
As I understand it, you overwrite the file I/O factory (TBLigniteFileIoFactory),
but at this level only bytes are available, not data entries.

What caused the performance drop to 31% in your test?

On Mon, Apr 1, 2019 at 8:54 AM <999.comput...@gmail.com> wrote:

> Hi developers,
>
> We have released TBLignite compression, an Ignite plugin that provides 
> in-memory compression: http://tblcore.com/download/.
> Compression rates are similar to those of SQL Server 2016 columnar 
> compression (10-20x), and our testing shows significant performance 
> improvements for datasets that are larger than the available amount of 
> memory.
>
> Currently we are at version 0.1, but we are working on a thoroughly 
> tested
> 1.0 release so everyone can scratch their compression itch. A couple 
> of questions regarding this:
> 1) Are there any Ignite regression tests that you can recommend for 
> 3rd party software? Basically we are looking for a way to test all the 
> possible page formats that we need to support.
> 2) Once we release version 1.0, we would like TBLignite to be added to 
> the Ignite "3rd party binary" page. Who decides what gets on this page?
> 3) And are there any acceptance criteria or tests that need to be passed?
>
> We would like to hear from you.
> Pascal Schuchhard
> TBLcore
>
>

--
Vladislav Pyatkov



Re: Thin client: transactions support

2019-04-01 Thread Alex Plehanov
Dmitriy, thank you!

Guys, I've created the IEP [1] on wiki, please have a look.

[1]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-34+Thin+client%3A+transactions+support
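
For context, a rough sketch of how the client API discussed in the quoted thread 
below might map onto the proposed TX_START/TX_END operations (ClientTransaction 
and transactions() are assumed names here; the actual design is in the IEP):

IgniteClient client = Ignition.startClient(
    new ClientConfiguration().setAddresses("127.0.0.1:10800"));
ClientCache<Integer, String> cache = client.cache("accounts");

try (ClientTransaction tx = client.transactions().txStart()) { // -> TX_START
    cache.put(1, "value");      // runs within the connection's active transaction
    tx.commit();                // -> TX_END(commit)
}                               // close() -> TX_END(rollback) if the tx was not completed
                                //            (one of the open questions below)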


Thu, 28 Mar 2019 at 14:33, Dmitriy Pavlov :

> Hi,
>
> I've added permissions to account plehanov.alex
>
> Recently Infra integrated Apache LDAP with Confluence, so it is possible to
> log in using Apache credentials. Probably we can ask Infra if extra
> permissions to edit pages should be added for committers.
>
> Sincerely,
> Dmitriy Pavlov
>
> Wed, 27 Mar 2019 at 13:37, Alex Plehanov :
>
> > Vladimir,
> >
> > About the current tx: OK, then we don't need a tx() method in the interface at
> > all (the user can cache the same transaction info himself).
> >
> > About decoupling transactions from threads on the server side: for now, we
> > can start with a thread-per-connection approach (we can only support one
> > active transaction per connection, see below, so we need one additional
> > dedicated thread for each connection with an active transaction), and later
> > change the server-side internals to process client transactions in any server
> > thread (not dedicated to this connection). This change will not affect the
> > thin client protocol, it only affects the server side.
> > In any case, we can't support concurrent transactions per connection on
> > the client side without fundamental changes to the current protocol (a cache
> > operation isn't bound to a transaction or thread, and the server doesn't
> > know which thread on the client side performs the cache operation). In my
> > opinion, if a user wants to use concurrent transactions, he must use
> > different connections from a connection pool.
> >
> > About the semantics of suspend/resume on the client side: it's absolutely
> > different from the server-side semantics (we don't need to do suspend/resume
> > to pass a transaction between threads on the client side), but it can't be
> > implemented efficiently without suspend/resume implemented on the server side.
> >
> > Can anyone give me permissions to create IEP on Apache wiki?
> >
> > Wed, 27 Mar 2019 at 11:59, Vladimir Ozerov :
> >
> > > Hi Alex,
> > >
> > > My comments were only about the protocol. Getting current info about a
> > > transaction should be handled by the client itself; it is not the protocol's
> > > concern. The same goes for other APIs and the behavior in case another
> > > transaction is attempted from the same thread.
> > >
> > > Putting the protocol aside, transaction support is a complicated matter. I
> > > would propose to route it through an IEP and a wide community discussion. We
> > > need to review the API and semantics very carefully, taking SUSPEND/RESUME
> > > into account. Also, I do not see how we can support client transactions
> > > efficiently without first decoupling transactions from threads on the server
> > > side, because without that you will need a dedicated server thread for every
> > > client's transaction, which is slow and may even crash the server.
> > >
> > > Vladimir.
> > >
> > > On Wed, Mar 27, 2019 at 11:44 AM Alex Plehanov <
> plehanov.a...@gmail.com>
> > > wrote:
> > >
> > > > Vladimir, what if we want to get the current transaction info (the tx()
> > > > method)?
> > > >
> > > > Is the close() method mapped to TX_END(rollback)?
> > > >
> > > > For example, this code:
> > > >
> > > > try (tx = txStart()) {
> > > >     tx.commit();
> > > > }
> > > >
> > > > Will produce:
> > > > TX_START
> > > > TX_END(commit)
> > > > TX_END(rollback)
> > > >
> > > > Do I understand you right?
> > > >
> > > > About the xid: there is yet another proposal. Use some unique per-connection
> > > > id (an integer, a simple counter) to identify the transaction in the
> > > > commit/rollback message. The client gets this id from the server with the
> > > > transaction info and sends it back to the server when trying to
> > > > commit/rollback the transaction. This id is not shown to users, but we can
> > > > also pass the real transaction id (xid) from the server to the client with
> > > > the transaction info for diagnostic purposes.
> > > >
> > > > And one more question: what should we do if the client starts a new
> > > > transaction without ending the old one? Should we end the old
> > transaction
> > > > implicitly (rollback) or throw an exception to the client? In my
> > opinion,
> > > > the first option is better. For example, if we got a previously used
> > > > connection from the connection pool, we should not worry about any
> > > > uncompleted transaction started by the previous user of this
> > connection.
> > > >
> > > > Wed, 27 Mar 2019 at 11:02, Vladimir Ozerov :
> > > >
> > > > > As for SUSPEND/RESUME/SAVEPOINT: we do not support them yet, and
> > > > > adding them in the future should not conflict with the simple START/END
> > > > > infrastructure.
> > > > >
> > > > > On Wed, Mar 27, 2019 at 11:00 AM Vladimir Ozerov <
> > voze...@gridgain.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Hi Alex,
> > > > > >
> > > > > > I am not sure we need 5 

[jira] [Created] (IGNITE-11668) OSGI: Self-imported package causes failure upon Karaf restart

2019-04-01 Thread Oleksii Mohylin (JIRA)
Oleksii Mohylin created IGNITE-11668:


 Summary: OSGI: Self-imported package causes failure upon Karaf 
restart
 Key: IGNITE-11668
 URL: https://issues.apache.org/jira/browse/IGNITE-11668
 Project: Ignite
  Issue Type: Bug
  Components: osgi
Affects Versions: 2.7
Reporter: Oleksii Mohylin


I've got a problem using Ignite 2.7 in Apache Karaf 4.2.0. The problem is caused 
by the strange bundle metadata of ignite-osgi: it exports the package 
org.apache.ignite.osgi.classloaders and imports it at the same time. Here's an 
extract from MANIFEST.MF:
{noformat}
Import-Package: org.apache.ignite;version="[2.7,3)",org.apache.ignite.
configuration;version="[2.7,3)",org.apache.ignite.internal.util;versi
on="[2.7,3)",org.apache.ignite.internal.util.tostring;version="[2.7,3
)",org.apache.ignite.internal.util.typedef.internal;version="[2.7,3)"
,org.apache.ignite.osgi.classloaders,org.osgi.framework;version="[1.7
,2)"
Require-Capability: osgi.ee;filter:="(&(osgi.ee=JavaSE)(version=1.8))"
Fragment-Host: org.apache.ignite.ignite-core
Export-Package: org.apache.ignite.osgi.classloaders;uses:="org.apache.
ignite.internal.util.tostring,org.osgi.framework";version="2.7.0",org
.apache.ignite.osgi;uses:="org.apache.ignite,org.apache.ignite.config
uration,org.apache.ignite.osgi.classloaders,org.osgi.framework";versi
on="2.7.0"
{noformat}
There is no problem with the initial installation of my application into Karaf, 
but after a Karaf restart the ignite-osgi dependency is not resolved, and this 
exception appears in the log:
{noformat}
org.osgi.framework.BundleException: Unable to resolve graphql-core [399](R 
399.0): missing requirement [graphql-core [399](R 399.0)] osgi.wiring.package; 
(&(osgi.wiring.package=org.apache.ignite.osgi.classloaders)(version>=2.7.0)(!(version>=3.0.0)))
 [caused by: Unable to resolve org.apache.ignite.ignite-osgi [432](R 432.0): 
missing requirement [org.apache.ignite.ignite-osgi [432](R 432.0)] 
osgi.wiring.host; 
(&(osgi.wiring.host=org.apache.ignite.ignite-core)(bundle-version>=0.0.0))] 
Unresolved requirements: [[graphql-core [399](R 399.0)] osgi.wiring.package; 
(&(osgi.wiring.package=org.apache.ignite.osgi.classloaders)(version>=2.7.0)(!(version>=3.0.0)))]

{noformat}
*Proposed solution:*
 remove the self-import by adding an instruction to the bundle plugin 
configuration in modules/osgi/pom.xml:
{code:xml}
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <configuration>
        <instructions>
            <Fragment-Host>org.apache.ignite.ignite-core</Fragment-Host>
            <Import-Package>
                !org.apache.ignite.osgi.classloaders,*
            </Import-Package>
        </instructions>
    </configuration>
</plugin>
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: JVM tuning parameters

2019-04-01 Thread Andrey Gura
Hi,

I don't see any mentions of OOM. The provided log message reports blocking
of the db-checkpoint-thread. I think the worker is trying to acquire the
checkpoint read lock.
The stack trace corresponds to the thread that detected the blocking. The
failure handler prints a thread dump to the log; this thread dump can help
in analyzing the problem.

Also, a more detailed case description is required (is it just the creation
of 400 tables, or is some data also being added to the tables?).

And finally... 1 core is too hard a restriction, from my point of view.
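
For reference, a minimal sketch of the configuration knobs involved (the timeout
value here is an arbitrary assumption for illustration, not a recommendation):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.StopNodeOrHaltFailureHandler;

public class FailureHandlingConfig {
    public static IgniteConfiguration configure() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Give system workers (e.g. db-checkpoint-thread) more time before
        // they are reported as blocked.
        cfg.setSystemWorkerBlockedTimeout(60_000L);

        // The handler seen in the log below; note that SYSTEM_WORKER_BLOCKED is
        // listed there in ignoredFailureTypes, so by default it is only
        // reported, not acted upon.
        cfg.setFailureHandler(new StopNodeOrHaltFailureHandler());

        return cfg;
    }
}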

On Sat, Mar 30, 2019 at 9:56 AM Denis Magda  wrote:
>
> Hi,
>
> What does the JVM error look like?
>
> Apart from that, Andrey, Igniters, the failure handler fired off, but I have 
> no clue from the shared logs what happened or how it is connected to the Java 
> heap issues. Should I expect to see anything from the logs not added to the 
> thread?
>
> --
> Denis Magda
>
>
> On Thu, Mar 28, 2019 at 6:58 AM ashfaq  wrote:
>>
>> Hi Team,
>>
>> We are installing Ignite in a Kubernetes environment with native persistence
>> enabled. When we try to create around 400 tables using the sqlline endpoint,
>> the pods restart after creating 200 tables with a JVM heap error, so
>> we increased the Java heap size from 1 GB to 2 GB, and this time it failed
>> at 300 tables.
>>
>> We would like to know how we can arrive at the right JVM heap size. We also
>> want to know how to configure things so that the pods are not restarted and
>> the cluster is stable.
>>
>> Below are the current values that we have used.
>>
>> cpu - 1core
>> xms - 1GB
>> xmx - 2GB
>> RAM - 3GB
>>
>> Below is the error log:
>>
>> "Critical system error detected. Will be handled accordingly to configured
>> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
>> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]],
>> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
>> o.a.i.IgniteException: GridWorker [name=db-checkpoint-thread,
>> igniteInstanceName=null, finished=false, heartbeatTs=1553771825864]]] class
>> org.apache.ignite.IgniteException: GridWorker [name=db-checkpoint-thread,
>> igniteInstanceName=null, finished=false, heartbeatTs=1553771825864]
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
>> at
>> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2663)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7181)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
>> at
>> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[jira] [Created] (IGNITE-11667) OPTIMISTIC REPEATABLE_READ transactions do not guarantee transactional consistency in blinking node scenario

2019-04-01 Thread Dmitry Sherstobitov (JIRA)
Dmitry Sherstobitov created IGNITE-11667:


 Summary: OPTIMISTIC REPEATABLE_READ transactions do not 
guarantee transactional consistency in blinking node scenario
 Key: IGNITE-11667
 URL: https://issues.apache.org/jira/browse/IGNITE-11667
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitry Sherstobitov


The scenario is as follows:

Start the cluster, load data.
Start transactional loading (a simple transfer task with PESSIMISTIC and 
OPTIMISTIC, REPEATABLE_READ transactions).
Repeat 10 times:
  Stop one node, sleep 10 seconds, start it again.
  Wait for rebalance to finish (LocalNodeMovingPartitionsCount == 0 for each 
cache/cache group).

Validate that there are no conflicts in the sums of fields (the verify action of 
the transfer task).

In the case of OPTIMISTIC/REPEATABLE_READ transactions there is no guarantee that 
transactional consistency is maintained (the last validation step fails).
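
For reference, a minimal sketch of the kind of transfer operation performed in 
the loading step (illustrative only; ignite, cache, keys and amount are assumed 
to be in scope, and the actual test code may differ):
{code:java}
try (Transaction tx = ignite.transactions().txStart(
    TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
    long from = cache.get(fromKey);
    long to = cache.get(toKey);

    cache.put(fromKey, from - amount);
    cache.put(toKey, to + amount);

    // The later verify step checks that the total sum over all keys is unchanged.
    tx.commit();
}
{code}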




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11666) C++ : remove internal macro usages in the examples

2019-04-01 Thread Pavel Kuznetsov (JIRA)
Pavel Kuznetsov created IGNITE-11666:


 Summary: C++ : remove internal macro usages in the examples
 Key: IGNITE-11666
 URL: https://issues.apache.org/jira/browse/IGNITE-11666
 Project: Ignite
  Issue Type: Bug
  Components: examples, platforms
Reporter: Pavel Kuznetsov


Currently the C++ examples use internal macros, for example to specify how to 
serialize/deserialize a user's C++ structs.

{code:c++ person.h}
 IGNITE_BINARY_TYPE_START(ignite::examples::Person)

typedef ignite::examples::Person Person;

IGNITE_BINARY_GET_TYPE_ID_AS_HASH(Person)
IGNITE_BINARY_GET_TYPE_NAME_AS_IS(Person)
IGNITE_BINARY_GET_FIELD_ID_AS_HASH
IGNITE_BINARY_IS_NULL_FALSE(Person)
IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(Person)
  //...
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: in-memory compression

2019-04-01 Thread Ilya Kasnacheev
Hello!

You could try running regular tests, such as PDS test suites, on Apache
Ignite TeamCity with this plugin enabled.

See if there are any new failures.

Regards,
-- 
Ilya Kasnacheev


Mon, 1 Apr 2019 at 08:54, <999.comput...@gmail.com>:

> Hi developers,
>
> We have released TBLignite compression, an Ignite plugin that provides
> in-memory compression: http://tblcore.com/download/.
> Compression rates are similar to those of SQL Server 2016 columnar
> compression (10-20x), and our testing shows significant performance
> improvements for datasets that are larger than the available amount of
> memory.
>
> Currently we are at version 0.1, but we are working on a thoroughly tested
> 1.0 release so everyone can scratch their compression itch. A couple of
> questions regarding this:
> 1) Are there any Ignite regression tests that you can recommend for 3rd
> party software? Basically we are looking for a way to test all the possible
> page formats that we need to support.
> 2) Once we release version 1.0, we would like TBLignite to be added to the
> Ignite "3rd party binary" page. Who decides what gets on this page?
> 3) And are there any acceptance criteria or tests that need to be passed?
>
> We would like to hear from you.
> Pascal Schuchhard
> TBLcore
>
>


[jira] [Created] (IGNITE-11664) [ML] Use Double.NaN as default values for missing values in Vector

2019-04-01 Thread Alexey Platonov (JIRA)
Alexey Platonov created IGNITE-11664:


 Summary: [ML] Use Double.NaN as default values for missing values 
in Vector
 Key: IGNITE-11664
 URL: https://issues.apache.org/jira/browse/IGNITE-11664
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Alexey Platonov


Currently, we use the 0.0 value as the default in vectors if a value is 
missing. But this contradicts the preprocessors' policy, where Double.NaN is 
used for missing values. Moreover, Double.NaN is a more convenient value for 
missing feature values.
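
A tiny sketch of why NaN is the safer sentinel: 0.0 is a legitimate feature 
value, while NaN is unambiguous and trivial to test for (values below are 
illustrative):
{code:java}
double[] features = {1.5, Double.NaN, 0.0}; // 2nd feature is missing, 3rd is a real zero.

for (double f : features) {
    if (Double.isNaN(f))
        System.out.println("Missing value: impute or skip.");
}
{code}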



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11663) Dispose of copypaste code in org.apache.ignite.internal.processors.cache.persistence.wal.record.RecordTypes

2019-04-01 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-11663:
--

 Summary: Dispose of copypaste code in 
org.apache.ignite.internal.processors.cache.persistence.wal.record.RecordTypes
 Key: IGNITE-11663
 URL: https://issues.apache.org/jira/browse/IGNITE-11663
 Project: Ignite
  Issue Type: Improvement
 Environment: 
org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordPurpose
Reporter: Alexei Scherbakov
 Fix For: 2.8


We already have 
org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordPurpose for 
defining record relations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11662) Wrong classloader is used to unmarshal joining node data

2019-04-01 Thread Oleksii Mohylin (JIRA)
Oleksii Mohylin created IGNITE-11662:


 Summary: Wrong classloader is used to unmarshal joining node data
 Key: IGNITE-11662
 URL: https://issues.apache.org/jira/browse/IGNITE-11662
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7
 Environment: Ignite 2.7
Karaf 4.2.0


Reporter: Oleksii Mohylin


When a cluster coordinator node is running in a Karaf container, it cannot accept 
join requests from other nodes. The problem lies in the inability to unmarshal 
the joining node data in 
org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.validateNode()

This line
{code:java}
joiningNodeState = marsh.unmarshal((byte[]) discoData.joiningNodeData(), 
Thread.currentThread().getContextClassLoader());{code}
fails with
{noformat}
Error on unmarshalling discovery data from node 
10.0.2.15,127.0.0.1,172.17.0.1:47501: Failed to find class with given class 
loader for unmarshalling (make sure same versions of all classes are available 
on all nodes or enable peer-class-loading) 
[clsLdr=jdk.internal.loader.ClassLoaders$AppClassLoader@5c0369c4, 
cls=org.apache.ignite.internal.processors.cluster.DiscoveryDataClusterState]; 
node is not allowed to join{noformat}
Apparently the problem is the wrong classloader returned by
{code:java}
Thread.currentThread().getContextClassLoader(){code}
which is not the one created in IgniteAbstractOsgiContextActivator.start().

*Proposed fix:* 

Use the proper way of obtaining the classloader:
{code:java}
 U.resolveClassLoader(ctx.config()){code}
as is done in other places, e.g. in GridClusterStateProcessor.collectGridNodeData().
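
With that change, the failing line would presumably become (sketch, assuming the 
surrounding code stays the same):
{code:java}
joiningNodeState = marsh.unmarshal((byte[])discoData.joiningNodeData(),
    U.resolveClassLoader(ctx.config()));{code}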

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: in-memory compression

2019-04-01 Thread Vladislav Pyatkov
Hi,

I looked at your report on your site and have some questions.

What does this thesis mean?
*Your dataset is fit for columnar compression, ea. repeating values and/or
a timeseries-like*
*dataset.*
As I understand it, you overwrite the file I/O factory (TBLigniteFileIoFactory),
but at this level only bytes are available, not data entries.

What caused the performance drop to 31% in your test?

On Mon, Apr 1, 2019 at 8:54 AM <999.comput...@gmail.com> wrote:

> Hi developers,
>
> We have released TBLignite compression, an Ignite plugin that provides
> in-memory compression: http://tblcore.com/download/.
> Compression rates are similar to those of SQL Server 2016 columnar
> compression (10-20x), and our testing shows significant performance
> improvements for datasets that are larger than the available amount of
> memory.
>
> Currently we are at version 0.1, but we are working on a thoroughly tested
> 1.0 release so everyone can scratch their compression itch. A couple of
> questions regarding this:
> 1) Are there any Ignite regression tests that you can recommend for 3rd
> party software? Basically we are looking for a way to test all the possible
> page formats that we need to support.
> 2) Once we release version 1.0, we would like TBLignite to be added to the
> Ignite "3rd party binary" page. Who decides what gets on this page?
> 3) And are there any acceptance criteria or tests that need to be passed?
>
> We would like to hear from you.
> Pascal Schuchhard
> TBLcore
>
>

-- 
Vladislav Pyatkov


[jira] [Created] (IGNITE-11661) Jdbc Thin: Implement tests for best effort affinity

2019-04-01 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11661:


 Summary: Jdbc Thin: Implement tests for best effort affinity
 Key: IGNITE-11661
 URL: https://issues.apache.org/jira/browse/IGNITE-11661
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Reporter: Alexander Lapin


Test plan draft.
 # Check that requests go to the expected number of nodes for different 
combinations of conditions
 ** Transactional
 *** Without params
 **** Select
 ***** Different partition tree options (All/NONE/Group/CONST) produced by 
different query types.
 **** Dml
 ***** - // -
 *** With params
 **** - // -
 ** Non-Transactional
 *** - // -
 # Check that the request/response functionality works fine if the server 
response lacks a partition result.
 # Check that a partition result is supplied only in the case of the rendezvous 
affinity function without custom filters.
 # Check that the best effort functionality works fine for different partition 
counts.
 # Check that a change in topology leads to jdbc thin affinity cache 
invalidation.
 ## Topology changed during partition result retrieval.
 ## Topology changed during cache distribution retrieval.
 ## Topology changed during a best-effort-affinity-unrelated query.
 # Check that jdbc thin best effort affinity works fine if the cache is full and 
new data keeps coming. For this case we probably should decrease the cache 
boundaries.
 # Check that the proper connection is used if the set of nodes we are connected 
to and the set of nodes derived from partitions
 ## Fully intersect;
 ## Partially intersect;
 ## Don't intersect, i.e.
||User Specified||Derived from partitions||
|host:port1 -> UUID1
 host:port2 -> UUID2|partition1 -> UUID3|

No intersection, so a random connection should be used.
 # Check client reconnection after failure.
 # Check that jdbc thin best effort affinity is skipped if it is switched off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)