[jira] [Created] (IGNITE-10488) ClusterCachesInfo.validateCacheGroupConfiguration should check cache compression configuration

2018-11-29 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-10488:
--

 Summary: ClusterCachesInfo.validateCacheGroupConfiguration should 
check cache compression configuration
 Key: IGNITE-10488
 URL: https://issues.apache.org/jira/browse/IGNITE-10488
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Vasiliy Sisko
 Fix For: 2.8






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Apache Ignite 2.7. Last Mile

2018-11-29 Thread Nikolay Izhikov
Ivan, please provide a link to the ticket with the NPE stack trace attached.

I've looked at IGNITE-10376 and can't see any attachments.



Re: Apache Ignite 2.7. Last Mile

2018-11-29 Thread Ivan Fedotov
Igor,
NPE is available in a full log, now I also attached it in the ticket.

IGNITE-7953

was committed on October 15. I could not take a look at the
testAtomicOnheapTwoBackupAsyncFullSync before this date, because the oldest
test in the TC history dates to November 12.

So, I tested it locally and could not reproduce the mentioned error.

Thu, 29 Nov 2018 at 20:07, Seliverstov Igor :

> Ivan,
>
> Could you provide a bit more details?
>
> I don't see any NPE among all available logs.
>
> I don't think the issue is caused by changes in scope of IGNITE-7953.
> The test fails both before
> <
> https://ci.ignite.apache.org/viewLog.html?buildId=2318582=buildResultsDiv=IgniteTests24Java8_ContinuousQuery4#testNameId3300126853696550025
> >
>  and after
> <
> https://ci.ignite.apache.org/viewLog.html?buildId=2345403=buildResultsDiv=IgniteTests24Java8_ContinuousQuery4#testNameId3300126853696550025
> >
> the
> commit was merged to master with almost the same stack trace.
>
> Regards,
> Igor
>
Thu, 29 Nov 2018 at 18:43, Yakov Zhdanov :
>
> > Vladimir, can you please take a look at
> > https://issues.apache.org/jira/browse/IGNITE-10376?
> >
> > --Yakov
> >
>


-- 
Ivan Fedotov.

ivanan...@gmail.com


Re: [VOTE] Apache Ignite 2.7.0 RC1

2018-11-29 Thread Seliverstov Igor
+1

Fri, 30 Nov 2018, 9:59 Nikolay Izhikov nizhi...@apache.org:

> Igniters,
>
> We've uploaded a 2.7.0 release candidate to
>
> https://dist.apache.org/repos/dist/dev/ignite/2.7.0-rc1/
>
> Git tag name is 2.7.0-rc1

[VOTE] Apache Ignite 2.7.0 RC1

2018-11-29 Thread Nikolay Izhikov
Igniters,

We've uploaded a 2.7.0 release candidate to

https://dist.apache.org/repos/dist/dev/ignite/2.7.0-rc1/

Git tag name is 2.7.0-rc1

This release includes the following changes:

Apache Ignite In-Memory Database and Caching Platform 2.7
-

Ignite:
* Added experimental support for multi-version concurrency control with 
snapshot isolation
  - available for both cache API and SQL
  - use CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT to enable it
  - not production ready, data consistency is not guaranteed in case of node 
failures
* Implemented Transparent Data Encryption based on JKS certificates
* Implemented Node.JS Thin Client
* Implemented Python Thin Client
* Implemented PHP Thin Client
* Ignite start scripts now support Java 9 and higher
* Added ability to set WAL history size in bytes
* Added SslContextFactory.protocols and SslContextFactory.cipherSuites 
properties to control which SSL encryption algorithms can be used
* Added JCache 1.1 compliance
* Added IgniteCompute.withNoResultCache method with semantics similar to 
ComputeTaskNoResultCache annotation
* Spring Data 2.0 is now supported in the separate module 
'ignite-spring-data_2.0'
* Added monitoring of critical system workers
* Added ability to provide custom implementations of ExceptionListener for 
JmsStreamer
* Ignite KafkaStreamer was upgraded to use new KafkaConsumer configuration
* S3 IP Finder now supports subfolder usage instead of bucket root
* Improved dynamic cache start speed
* Improved checkpoint performance by decreasing mark duration.
* Added ability to manage compression level for compressed WAL archives.
* Added metrics for Entry Processor invocations.
* Added JMX metrics: ClusterMetricsMXBean.getTotalBaselineNodes and 
ClusterMetricsMXBean.getActiveBaselineNodes
* Node uptime metric now includes days count
* Exposed info about thin client connections through JMX
* Introduced new system property IGNITE_REUSE_MEMORY_ON_DEACTIVATE to enable 
reuse of allocated memory on node deactivation (disabled by default)
* Optimistic transactions will now be properly rolled back if waiting too long 
for a new topology on remap
* ScanQuery with setLocal flag now checks if the partition is actually present 
on local node
* Improved cluster behaviour when a left node does not cause partition affinity 
assignment changes
* Interrupting user thread during partition initialization will no longer cause 
node to stop
* Fixed problem when partition lost event was not triggered if multiple nodes 
left cluster
* Fixed massive node drop from the cluster on temporary network issues
* Fixed service redeployment on cluster reactivation
* Fixed client node stability under ZooKeeper discovery
* Massive performance and stability improvements

Ignite .Net:
* Add .NET Core 2.1 support
* Added thin client connection failover

Ignite C++:
* Implemented Thin Client with base cache operations
* Implemented smart affinity routing for Thin Client to send requests directly 
to nodes containing data when possible
* Added Clang compiler support

SQL:
* Added experimental support for fully ACID transactional SQL with the snapshot 
isolation:
  - use CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT to enable it
  - a transaction can be started through native API (IgniteTransactions), thin 
JDBC driver or ODBC driver
  - not production ready, data consistency is not guaranteed in case of node 
failures
* Added a set of system views located in "IGNITE" schema to view cluster 
information (NODES, NODE_ATTRIBUTES, NODE_METRICS, BASELINE_NODES)
* Added ability to create predefined SQL schemas
* Added GROUP_CONCAT function support
* Added string length constraint
* Custom Java objects are now inlined into primary and secondary indexes, which 
may significantly improve performance when AFFINITY_KEY is used
* Added timeout to fail query execution in case it cannot be mapped to topology
* Restricted number of cores allocated for CREATE INDEX by default to 4 to 
avoid contention on index tree
* Fixed transaction hanging during runtime error on commit
* Fixed possible memory leak when result set size is multiple of the page size
* Fixed situation when data may be returned from cache partitions in LOST state 
even when PartitionLossPolicy doesn't permit it
* Fixed "Caches have distinct sets of data nodes" during SQL JOIN query 
execution between REPLICATED and PARTITIONED caches
* Fixed wrong result for SQL queries when item size exceeds the page size
* Fixed error during SQL query from client node with the local flag set to 
"true"
* Fixed handling UUID as a column type

JDBC:
* Implemented DataSource interface for the thin driver

ODBC:
* Added streaming mode support
* Fixed crash in Linux when there are more than 1023 open file descriptors
* Fixed bug that prevented cursors on a server from being closed
* Fixed segmentation fault when reusing a closed connection

Web Console:
* Added new metrics: WAL and Data size on disk
* 

[jira] [Created] (IGNITE-10487) SQL: Make sure that partition pruning is integrated with DML properly

2018-11-29 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-10487:


 Summary: SQL: Make sure that partition pruning is integrated with 
DML properly
 Key: IGNITE-10487
 URL: https://issues.apache.org/jira/browse/IGNITE-10487
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Vladimir Ozerov
 Fix For: 2.8


DML re-uses the query execution logic for UPDATE and DELETE statements. We need 
to make sure that the provided partition information is used properly in those 
queries for both standard and MVCC modes.





[GitHub] ignite pull request #5031: IGNITE-9938 Updated ScalaTest version

2018-11-29 Thread akuznetsov-gridgain
Github user akuznetsov-gridgain closed the pull request at:

https://github.com/apache/ignite/pull/5031


---


Re: lost partition recovery with native persistence should not require resetLostPartitions call

2018-11-29 Thread novicr
Ivan,

Thanks for the pointer to the discussion.
It doesn't actually address the point around the need for
'resetLostPartitions()'. It does point to a ticket that would fix the logic
in it when BLT is used. My concern is that Ignite relies on the user to
call this method at all.

Original message:
I was going over failure recovery scenarios, trying to understand the logic 
behind the lost partitions functionality. In the case of native persistence, 
Ignite fully manages data persistence and availability. If enough nodes in 
the cluster become unavailable, resulting in partitions marked lost, Ignite 
keeps track of those partitions. When nodes rejoin the cluster, partitions 
are automatically discovered and loaded from disk. This can be shown by the 
fact that data actually becomes available and can be retrieved using normal 
get/query APIs. However, the lostPartitions() lists still contain some 
partitions that were previously lost (this seems like a bug), and Ignite 
expects the user to manually mark partitions available by calling the 
Ignite.resetLostPartitions() API.

I found some discussion about issues with topology version handling in 
resetLostPartitions() in this ticket: IGNITE-7832, but it does not 
address the question of why user involvement is required at all.

It seems there should, at least, be a configuration option to allow the cache to 
self-recover once all partitions become available.
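The proposed self-recovery behavior can be sketched as a toy model. All names below (such as `CacheModel` and `auto_reset`) are invented for this illustration and are not Ignite's actual implementation; the sketch only shows the lifecycle being discussed: partitions are marked LOST when their last owner leaves, and a hypothetical auto-reset option clears the LOST set once every partition has an owner again, instead of waiting for an explicit resetLostPartitions() call.

```python
class CacheModel:
    def __init__(self, partitions, owners, auto_reset=False):
        # owners: dict partition -> set of nodes currently holding it
        self.partitions = partitions
        self.owners = owners
        self.auto_reset = auto_reset
        self.lost = set()

    def node_left(self, node):
        for p in self.partitions:
            self.owners[p].discard(node)
            if not self.owners[p]:
                self.lost.add(p)  # no owners remain: mark partition LOST

    def node_joined(self, node, restored):
        for p in restored:
            self.owners[p].add(node)
        # hypothetical option: self-recover once all lost partitions
        # have owners again (data loaded back from disk)
        if self.auto_reset and all(self.owners[p] for p in self.lost):
            self.lost.clear()

    def reset_lost_partitions(self):
        # manual reset, as Ignite requires today
        self.lost = {p for p in self.lost if not self.owners[p]}


cache = CacheModel([0, 1], {0: {"A"}, 1: {"B"}}, auto_reset=True)
cache.node_left("B")          # partition 1 loses its only owner
assert cache.lost == {1}
cache.node_joined("B", [1])   # data is available again
assert cache.lost == set()    # self-recovered, no manual reset needed
```

With `auto_reset=False` the model behaves like today's Ignite: the partition stays in the lost set until `reset_lost_partitions()` is called, even though the data is readable again.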

This email was originally sent to the user group: 
lost-partition-recovery-with-native-persistence 

   





--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: [GitHub] ignite pull request #5528: Continuous query node restart fails when remote filter factory is defined

2018-11-29 Thread novicr
Looks like this issue has already been filed:
https://issues.apache.org/jira/browse/IGNITE-9181

The actual failing code: 
`assert rmtFilterFactory != null;` 

Looks like the filter factory is not propagated to the remote node. 

Note: When I use setRemoteFilter() (which is now deprecated),
everything works as expected.




--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[jira] [Created] (IGNITE-10486) Web console: wrong custom validation messages priority

2018-11-29 Thread Ilya Borisov (JIRA)
Ilya Borisov created IGNITE-10486:
-

 Summary: Web console: wrong custom validation messages priority
 Key: IGNITE-10486
 URL: https://issues.apache.org/jira/browse/IGNITE-10486
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Ilya Borisov
Assignee: Alexander Kalinin


What happens:
After IGNITE-9946 custom validation messages no longer override default 
messages.

What should happen:
Custom validation messages should override default messages. Issues fixed by 
IGNITE-9946 should remain fixed.





[jira] [Created] (IGNITE-10485) Ability to learn more about cluster state before NODE_JOINED event is fired cluster-wide

2018-11-29 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-10485:


 Summary: Ability to learn more about cluster state before 
NODE_JOINED event is fired cluster-wide
 Key: IGNITE-10485
 URL: https://issues.apache.org/jira/browse/IGNITE-10485
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Reporter: Pavel Kovalenko
 Fix For: 2.8


Currently there is no good way to learn more about the cluster 
before PME starts on node join.

It might be useful to do some pre-work (activate components if the cluster is 
active, calculate baseline affinity, clean up PDS if the baseline changed, etc.) 
before the actual NODE_JOINED event is triggered cluster-wide and PME is started.
Such pre-work would significantly speed up PME in the case of node join.
Currently the only place where it can be done is while processing the NodeAdded 
message on the local joining node.
But that is not a good idea, because it would freeze processing of new discovery 
messages cluster-wide.

I see two ways to implement it:

1) Introduce a new intermediate state for a node when it is discovered, but the 
discovery event on node join is not triggered yet. This is the right but 
complicated change, because it requires revisiting the join process in both the 
TCP and ZooKeeper discovery protocols, with extra failover scenarios.

2) Try to get this information and do the pre-work before the discovery manager 
starts, using e.g. GridRestProcessor. This looks much simpler, but we can have 
races there, when the cluster state changes during the pre-work (deactivation, 
baseline change). In this case we should roll it back or just stop/restart the 
node to avoid cluster instability. However, these are rare scenarios in the real 
world (e.g. starting a baseline node and starting the deactivation process right 
after node recovery is finished).

For starters we can expose the baseline and cluster state in our REST endpoint 
and try to move the pre-work mentioned above out of PME.





Re: Historical rebalance

2018-11-29 Thread Seliverstov Igor
Vladimir,

Look at my example:

One active transaction (Tx1, which does opX ops) runs while another transaction
(Tx2, which does opX' ops) finishes with uc4:

Node1:       uc1--op1--op2--uc2--op1'--uc3--uc4--op3X

Node2 (Tx1): uc1--op1--uc2--op2--uc3--op3--uc4--cp1
Node2 (Tx2): uc1--uc2--uc3--op1'--uc4--cp1

state on Node2: tx1 -> op3 -> uc2
                cp1 [current=uc4, backpointer=uc2]

Here op2 was acknowledged by op3, and op3 was applied before op1' (linearized
by the WAL).

All nodes having uc4 must have op1', because uc4 cannot be acquired earlier than
the prepare stage, while the prepare stage happens after all updates, so *op1'
happens before uc4* regardless of whether Tx2 was committed or rolled back.

This means *op2 happens before uc4* (uc4 cannot be earlier than op2 on any node,
because on Node2 op2 was already finished (acknowledged by op3) when op1'
happened).

That was my idea, which is easy to prove.

You used a different approach, but yes, it has to work.

Thu, 29 Nov 2018 at 22:19, Vladimir Ozerov :

> "If more recent WAL records will contain *ALL* updates of the transaction"
> -> "More recent WAL records will contain *ALL* updates of the transaction"

Re: Thin clients all in one

2018-11-29 Thread Alexey Kosenchuk

Hi Stepan,

please check the Ignite configuration you use; see the comments in the JIRA.

Also, the example executors (including AuthTlsExample) are included 
in the NodeJS test suite in TeamCity, which runs periodically and 
passes successfully. E.g. the latest run: 
https://ci.ignite.apache.org/viewLog.html?buildId=2426645=buildResultsDiv=IgniteTests24Java8_ThinClientNodeJs


Regards,
-Alexey

On 28.11.2018 17:08, Stepan Pilschikov wrote:

Hello again

In the NodeJS sources I found that the AuthTlsExample.js example throws an
exception during execution.
The output and grid configuration are in
https://issues.apache.org/jira/browse/IGNITE-10447

Can someone have a look at it?

Sun, 25 Nov 2018 at 19:11, Stepan Pilschikov :


My bad,
you are right.

Sun, 25 Nov 2018 at 05:37, Dmitry Melnichuk <
dmitry.melnic...@nobitlost.com>:


Stepan,

AFAIK the Map type did always behave correctly on the client side, as it does
now. This is a corresponding piece of my test suite:

```
def test_put_get_map(client):

    cache = client.get_or_create_cache('test_map_cache')

    cache.put(
        'test_map',
        (
            MapObject.HASH_MAP,
            {
                (123, IntObject): 'test_data',
                456: ((1, [456, 'inner_test_string', 789]),
                      CollectionObject),
                'test_key': 32.4,
            }
        ),
        value_hint=MapObject
    )
    value = cache.get('test_map')
    assert value == (MapObject.HASH_MAP, {
        123: 'test_data',
        456: (1, [456, 'inner_test_string', 789]),
        'test_key': 32.4,
    })
```

Or is there another, more specific problem with maps?

Dmitry

On 11/25/18 3:56 AM, Stepan Pilschikov wrote:

Dmitry,

Great, I checked, and now everything works well.
I hope Igor will review this PR.

But what about Maps? Looks like a different ticket, or can it be done in the
same ticket scope?

Fri, 23 Nov 2018 at 23:58, Dmitry Melnichuk <
dmitry.melnic...@nobitlost.com>:


Stepan,

Sorry, I forgot to update from upstream prior to starting work on this
issue, and thus introduced a regression. My bad. I have just merged with the
latest master. Please check it out again.

Dmitry

On 11/24/18 1:37 AM, Stepan Pilschikov wrote:

Dmitry,

I've checked, and it actually works.
But specifically in this branch I found another bug.
Please look at my last comment:




https://issues.apache.org/jira/browse/IGNITE-10358?focusedCommentId=16697285=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16697285


Fri, 23 Nov 2018 at 01:21, Dmitry Melnichuk <
dmitry.melnic...@nobitlost.com>:


Stepan,

Thank you for your great job in evaluating the Python thin client, as well
as the other thin clients.

There was indeed a bug in the Python client regarding the handling of type
hints in the Collection type. I created a fix and did a PR under the
IGNITE-10358 task, but the same PR also fixes the problem in the
IGNITE-10230 task.

As for handling the type mapping in the gists you provided, I left comments
on both tasks.

Dmitry

On 11/21/18 6:37 PM, Stepan Pilschikov wrote:

Dmitry, Alexey

Thank you for the help; these answers helped me a lot in understanding how
the clients work.

Not long ago I met a problem which has expected behavior, but it may break
some workflows for some users in the future.

It's all about unspecified data types in collections and maps.
All the description and examples are in
https://issues.apache.org/jira/browse/IGNITE-10358

Dmitry, can you have a quick look at it and maybe somehow fix it in the
future?


Fri, 26 Oct 2018 at 19:05, Dmitry Melnichuk <
dmitry.melnic...@nobitlost.com>:


Stepan!

TL/DR: what you got with the Python client in your gist is the intended
behavior.

Explanation: As per the docs, an Object array consists of a type ID (which
defaults to -1) and an array of objects.

https://apacheignite.readme.io/docs/binary-client-protocol-data-format#section-object-array


Your gist might be fixed accordingly:

```
from pyignite import Client
from pyignite.datatypes import *

OBJECT_ARRAY_TYPE_ID = -1
OBJECT_ARRAY_CONTENTS = [1, 2]

client = Client()
client.connect('127.0.0.1', 10800)
cache = client.get_or_create_cache("PY_OBJECT_ARRAY")
cache.put(
    1,
    (OBJECT_ARRAY_TYPE_ID, OBJECT_ARRAY_CONTENTS),
    key_hint=IntObject,
    value_hint=ObjectArrayObject,
)

# Python output: print(cache.get(1))
# (-1, [1, 2])
```

The situation is similar with Map and Collection: they have types. Types
and type IDs are mostly useless in Python, but I left them for
interoperability reasons. If you think I should kick them out, just let
me know.

The usage of these 3 data types is documented and tested. You can refer
to the table in the "Data types" section:

https://apache-ignite-binary-protocol-client.readthedocs.io/en/latest/datatypes/parsers.html


The tests are here:

https://github.com/apache/ignite/blob/master/modules/platforms/python/tests/test_datatypes.py#L116-L124


On 10/26/18 11:57 PM, Stepan Pilschikov wrote:

Hi, everyone


Re: IEP-24: SQL Partition Pruning

2018-11-29 Thread Denis Magda
Vladimir, thanks for the extensive description. Everything is clear for me
from a user perspective.

Hope Sergi, Taras, and other SQL experts would share their feedback.

--
Denis

On Mon, Nov 26, 2018 at 6:33 AM Vladimir Ozerov 
wrote:

> Igniters,
>
> I prepared IEP-24 [1] for the so-called "partition pruning" optimization
> for our SQL engine, which will allow us to determine target nodes
> containing query data prior to query execution. We already use this
> optimization for very simple scenarios - only one expression, no JOINs.
>
> The goals of this IEP:
> 1) Extract partitions from complex expressions
> 2) Support common JOIN scenarios
> 3) Allow calculation of target partitions on thin client to allow more
> efficient request routing
> 4) Introduce monitoring capabilities to let the user know whether the
> optimization is applicable to a specific query or not
>
> IEP covers several complex architecture questions, which will be finalized
> during actual implementation:
> 1) Rules for partition extraction from complex AND and OR expressions, as
> well as from "IN (...)", "BETWEEN ... AND ...", and range expressions
> 2) Rules for partition extraction from JOINs
> 3) Several subquery rewrite rules which will allow applying the optimization
> to certain subqueries.
>
> Also this optimization will introduce some basic building blocks
> ("co-location tree") for further improvements of our distributed joins.
>
> Will appreciate your review and comments.
>
> Vladimir.
>
> [1]
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-24%3A+SQL+Partition+Pruning
>
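As a rough sketch of goal 1 above (extracting partitions from complex AND/OR/IN expressions), target partition sets can be computed bottom-up over the WHERE expression tree, intersecting on AND and uniting on OR. The tuple encoding, the modulo hash, and the partition count below are simplifications invented for this illustration; they are not the SQL engine's actual code.

```python
PARTS = 1024
ALL = None  # sentinel: every partition may match (no pruning possible)


def partition(key):
    # placeholder affinity function for the sketch
    return hash(key) % PARTS


def extract(expr):
    """expr is ('eq', key) | ('in', [keys]) | ('and', a, b) | ('or', a, b)."""
    op = expr[0]
    if op == 'eq':                      # affinity_key = <const>
        return {partition(expr[1])}
    if op == 'in':                      # affinity_key IN (...)
        return {partition(k) for k in expr[1]}
    if op == 'and':                     # intersect; ALL is the identity
        a, b = extract(expr[1]), extract(expr[2])
        if a is ALL:
            return b
        if b is ALL:
            return a
        return a & b
    if op == 'or':                      # union; ALL absorbs everything
        a, b = extract(expr[1]), extract(expr[2])
        if a is ALL or b is ALL:
            return ALL
        return a | b
    return ALL                          # unknown predicate: cannot prune


# WHERE affinity_key = 10 AND (affinity_key = 10 OR affinity_key = 20)
parts = extract(('and', ('eq', 10), ('or', ('eq', 10), ('eq', 20))))
assert parts == {partition(10)}
```

The key property the IEP relies on is visible here: an AND with one prunable side is already prunable, while an OR is only prunable when every branch is.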


Re: [Result][VOTE] Creation dedicated list for github notifiacations

2018-11-29 Thread Denis Magda
A request has been submitted.

--
Denis



Re: [Result][VOTE] Creation dedicated list for github notifiacations

2018-11-29 Thread Dmitriy Pavlov
Denis, could you please create a new list for Apache Ignite, e.g.
notificati...@ignite.apache.org  ?

Only the PMC Chair can create a list: https://infra.apache.org/

The 'create list' feature has been restricted to ASF members and PMC chairs only.
Thank you in advance.

Sincerely,
Dmitriy Pavlov

Thu, 29 Nov 2018 at 21:44, Eduard Shangareev :

> Igniters,
> The result is successful.
>
> No "-1".
> 11 "+1".
> 2 "0".
>
> Vote thread:
>
> http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Creation-dedicated-list-for-github-notifiacations-td38485.html
>


Re: Historical rebalance

2018-11-29 Thread Vladimir Ozerov
Igor,

Yes, I tried to draw different configurations, and it really seems to work,
despite being very hard to prove due to non-intuitive HB edges. So let me
try to spell out the algorithm once again to make sure that we are on the
same page here.

1) There are two nodes - primary (P) and backup (B)
2) There are three types of events: small transactions which possibly
increment the update counter (ucX), one long active transaction which is split
into multiple operations (opX), and checkpoints (cpX)
3) Every node always has a current update counter. When a transaction commits
it may or may not shift this counter further depending on whether there are
holes behind. But we have a strict rule that it always grows. Higher
counters synchronize with smaller ones. Possible cases:
uc1uc2uc3
uc1uc3--- // uc2 missing due to reorder, but it is ok

4) Operations within a single transaction are always applied sequentially,
and hence also have an HB edge:
op1op2op3

5) When a transaction operation happens, we save in memory the current update
counter available at this moment. I.e. we have a map from transaction ID to
the update counter which was relevant by the time the last *completed* operation
*started*. This is a very important thing - we remember the counter when
an operation starts, but update the map only when it finishes. This is needed
for the situation when the update counter is bumped in the middle of a long
operation.
uc1op1op2uc2uc3op3
|  ||
   uc1uc1  uc3

state: tx1 -> op3 -> uc3
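A minimal sketch of this bookkeeping (illustrative only — the class and method names below are hypothetical, not Ignite's actual internals):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch: per-transaction tracking of the update counter that
 * was current when the transaction's last *completed* operation *started*.
 * The checkpoint "backpointer" is the smallest such counter among active
 * transactions, or the current counter if there are none.
 */
public class TxCounterTracker {
    /** Current partition update counter. */
    private long updateCntr;

    /** txId -> counter observed at the start of the last completed operation. */
    private final Map<Long, Long> txSnapshots = new HashMap<>();

    /** Counters only grow: a higher counter synchronizes with smaller ones. */
    public void advanceCounter(long newCntr) {
        updateCntr = Math.max(updateCntr, newCntr);
    }

    /** Remember the counter when an operation starts... */
    public long onOperationStart() {
        return updateCntr;
    }

    /** ...but publish it into the map only when the operation finishes. */
    public void onOperationFinish(long txId, long cntrAtStart) {
        txSnapshots.put(txId, cntrAtStart);
    }

    /** Checkpoint backpointer: smallest snapshot of active txs, else current. */
    public long backpointer() {
        return txSnapshots.values().stream().min(Long::compare).orElse(updateCntr);
    }

    public static void main(String[] args) {
        TxCounterTracker t = new TxCounterTracker();
        t.advanceCounter(1);                 // uc1
        long snap = t.onOperationStart();    // op1 starts, observes uc1
        t.advanceCounter(2);                 // uc2 arrives mid-operation
        t.onOperationFinish(1L, snap);       // op1 finishes: tx1 -> uc1
        System.out.println(t.backpointer()); // prints 1
    }
}
```

This mirrors the example above: even though the counter is bumped to uc2 while op1 is in flight, the map keeps uc1, so a checkpoint taken now would point back before the long transaction's in-flight operation.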

6) Whenever a checkpoint occurs, we save two counters with it: "current" and
"backpointer". The latter is the smallest update counter associated with
active transactions. If there are no active transactions, the current update
counter is used.

Example 1: no active transactions.
uc1cp1
 ^  |
 

state: cp1 [current=uc1, backpointer=uc1]

Example 2: one active transaction:
 ---
 | |
uc1op1uc2op2op3uc3cp1
   ^ |
   --

state: tx1 -> op3 -> uc2
   cp1 [current=uc3, backpointer=uc2]

7) Historical rebalance:
7.1) Demander finds the latest checkpoint, gets its backpointer and sends it to
the supplier.
7.2) Supplier finds the earliest checkpoint where [supplier(current) <=
demander(backpointer)]
7.3) Supplier reads the checkpoint backpointer and finds the associated WAL
record. This is where we start.

So in terms of WAL we have: supplier[uc_backpointer <- cp(uc_current <=
demander_uc_backpointer)] <- demander[uc_backpointer <- cp(last)]
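A sketch of the start-point search in steps 7.1-7.3 (names are hypothetical, not Ignite's actual code; this reads "earliest checkpoint where supplier(current) <= demander(backpointer)" as the closest such checkpoint in the past, i.e. a scan from the newest checkpoint backwards):

```java
import java.util.List;

/** Illustrative sketch: choosing the WAL start point for historical rebalance. */
public class RebalanceStartPoint {
    /** A checkpoint record: the counter current at checkpoint time and the
     *  backpointer (smallest counter of then-active transactions). */
    public static final class Checkpoint {
        public final long current;
        public final long backpointer;

        public Checkpoint(long current, long backpointer) {
            this.current = current;
            this.backpointer = backpointer;
        }
    }

    /**
     * Scans supplier checkpoints (oldest first in the list) from newest to
     * oldest and picks the first one whose "current" counter does not exceed
     * the demander's backpointer; its own backpointer is where WAL iteration
     * starts. Returns -1 if no such checkpoint exists (full rebalance needed).
     */
    public static long findWalStart(List<Checkpoint> supplierCps, long demanderBackptr) {
        for (int i = supplierCps.size() - 1; i >= 0; i--) {
            Checkpoint cp = supplierCps.get(i);

            if (cp.current <= demanderBackptr)
                return cp.backpointer;
        }
        return -1; // No suitable checkpoint in history.
    }

    public static void main(String[] args) {
        // Supplier checkpoints, oldest first: (current, backpointer).
        List<Checkpoint> cps = List.of(
            new Checkpoint(1, 1), new Checkpoint(3, 2), new Checkpoint(5, 4));

        System.out.println(findWalStart(cps, 3)); // prints 2
    }
}
```

In the example, the demander's backpointer is 3; the newest checkpoint with current <= 3 is (current=3, backpointer=2), so WAL iteration starts from counter 2 — before any operation of a transaction that was active at that checkpoint.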

Now the most important - why it works :-)
1) Transaction operations are sequential, so at the time of a crash nodes are
*at most one operation ahead* of each other
2) Demander goes to the past and finds the update counter which was current at
the time of the last completed TX operation
3) Supplier goes to the closest checkpoint in the past where this update
counter either doesn't exist or just appeared
4) Transaction cannot be committed on supplier at this checkpoint, as it
would violate UC happens-before rule
5) Transaction may have not started yet on supplier at this point. If more
recent WAL records will contain *ALL* updates of the transaction
6) Transaction may exist on supplier at this checkpoint. Thanks to p.1 we
must skip at most one operation. A jump back through the supplier's checkpoint
backpointer is guaranteed to do this.

Igor, do we have the same understanding here?

Vladimir.

On Thu, Nov 29, 2018 at 2:47 PM Seliverstov Igor 
wrote:

> Ivan,
>
> Different transactions may be applied in a different order on backup nodes.
> That's why we need an active tx set
> and some sorting by their update times. The idea is to identify a point in
> time starting from which we may lose some updates.
> This point:
>1) is the last update on the timeline acknowledged by all backups
> (including a possible future demander);
>2) has a specific update counter (aka back-counter) which we are going to
> start iteration from.
>
> After thinking on it some more, I've identified a rule:
>
> There are two fences:
>   1) update counter (UC) - this means that all updates with a smaller UC
> than the applied one were applied on a node having this UC.
>   2) update in scope of TX - all updates are applied one by one
> sequentially; this means that the fact of an update guarantees the previous
> update (statement) was finished on all TX participants.
>
> Combining them, we can state the following:
>
> All updates that were acknowledged at the time the last update of the tx
> (the one which updated the UC) was applied are guaranteed to be present on a
> node having such a UC
>
> We can use this rule to find an iterator start pointer.
>
> Wed, 28 Nov 2018 at 20:26, Павлухин Иван :
>
> > Guys,
> >
> > Another one idea. We can introduce additional update counter which is
> > incremented by MVCC 

Re: Historical rebalance

2018-11-29 Thread Vladimir Ozerov
"If more recent WAL records will contain *ALL* updates of the transaction"
-> "More recent WAL records will contain *ALL* updates of the transaction"

On Thu, Nov 29, 2018 at 10:15 PM Vladimir Ozerov 
wrote:

> Igor,
>
> Yes, I tried to draw different configurations, and it really seems to
> work, despite of being very hard to proof due to non-inituitive HB edges.
> So let me try to spell the algorithm once again to make sure that we are on
> the same page here.
>
> 1) There are two nodes - primary (P) and backup (B)
> 2) There are three type of events: small transactions which possibly
> increments update counter (ucX), one long active transaction which is split
> into multiple operations (opX), and checkpoints (cpX)
> 3) Every node always has current update counter. When transaction commits
> it may or may not shift this counter further depending on whether there are
> holes behind. But we have a strict rule that it always grow. Higher
> coutners synchrnoizes with smaller. Possible cases:
> uc1uc2uc3
> uc1uc3--- // uc2 missing due to reorder, but is is ok
>
> 4) Operations within a single transaction is always applied sequentially,
> and hence also have HB edge:
> op1op2op3
>
> 5) When transaction operation happens, we save in memory current update
> counter available at this moment. I.e. we have a map from transaction ID to
> update counter which was relevant by the time last *completed* operation
> *started*. This is very important thing - we remember the counter when
> operation starts, but update the map only when it finishes. This is needed
> for situation when update counter is bumber in the middle of a long
> operation.
> uc1op1op2uc2uc3op3
> |  ||
>uc1uc1  uc3
>
> state: tx1 -> op3 -> uc3
>
> 6) Whenever checkpoint occurs, we save two counters with: "current" and
> "backpointer". The latter is the smallest update counter associated with
> active transactions. If there are no active transactions, current update
> counter is used.
>
> Example 1: no active transactions.
> uc1cp1
>  ^  |
>  
>
> state: cp1 [current=uc1, backpointer=uc1]
>
> Example 2: one active transaction:
>  ---
>  | |
> uc1op1uc2op2op3uc3cp1
>^ |
>--
>
> state: tx1 -> op3 -> uc2
>cp1 [current=uc3, backpointer=uc2]
>
> 7) Historical rebalance:
> 7.1) Demander finds latest checkpoint, get it's backpointer and sends it
> to supplier.
> 7.2) Supplier finds earliest checkpoint where [supplier(current) <=
> demander(backpointer)]
> 7.3) Supplier reads checkpoint backpointer and finds associated WAL
> record. This is where we start.
>
> So in terms of WAL we have: supplier[uc_backpointer <- cp(uc_current <=
> demanter_uc_backpointer)] <- demander[uc_backpointer <- cp(last)]
>
> Now the most important - why it works :-)
> 1) Transaction opeartions are sequential, so at the time of crash nodes
> are *at most one operation ahead *each other
> 2) Demander goes to the past and finds update counter which was current at
> the time of last TX completed operation
> 3) Supplier goes to the closest checkpoint in the past where this update
> counter either doesn't exist or just appeared
> 4) Transaction cannot be committed on supplier at this checkpoint, as it
> would violate UC happens-before rule
> 5) Tranasction may have not started yet on supplier at this point. If more
> recent WAL records will contain *ALL* updates of the transaction
> 6) Transaction may exist on supplier at this checkpoint. Thanks to p.1 we
> must skip at most one operation. Jump back through supplier's checkpoint
> backpointer is guaranteed to do this.
>
> Igor, do we have the same understanding here?
>
> Vladimir.
>
> On Thu, Nov 29, 2018 at 2:47 PM Seliverstov Igor 
> wrote:
>
>> Ivan,
>>
>> different transactions may be applied in different order on backup nodes.
>> That's why we need an active tx set
>> and some sorting by their update times. The idea is to identify a point in
>> time which starting from we may lost some updates.
>> This point:
>>1) is the last acknowledged by all backups (including possible further
>> demander) update on timeline;
>>2) have a specific update counter (aka back-counter) which we going to
>> start iteration from.
>>
>> After additional thinking on, I've identified a rule:
>>
>> There is two fences:
>>   1) update counter (UC) - this means that all updates, with less UC than
>> applied one, was applied on a node, having this UC.
>>   2) update in scope of TX - all updates are applied one by one
>> sequentially, this means that the fact of update guaranties the previous
>> update (statement) was finished on all TX participants.
>>
>> Сombining them, we can say the next:

[Result][VOTE] Creation dedicated list for github notifiacations

2018-11-29 Thread Eduard Shangareev
Igniters,
The result is successful.

No "-1".
11 "+1".
2 "0".

Vote thread:
http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Creation-dedicated-list-for-github-notifiacations-td38485.html


[GitHub] ignite pull request #5534: IGNITE-10484 Fixed activate/deactivate hang

2018-11-29 Thread agoncharuk
GitHub user agoncharuk opened a pull request:

https://github.com/apache/ignite/pull/5534

IGNITE-10484 Fixed activate/deactivate hang



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10484

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5534.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5534


commit 891f2cac819239546526766b2dc52fabe0a12e7c
Author: Alexey Goncharuk 
Date:   2018-11-29T18:12:47Z

IGNITE-10484 Fixed activate/deactivate hang




---


[GitHub] asfgit closed pull request #84: IGNITE-9542 new invocation history implementation

2018-11-29 Thread GitBox
asfgit closed pull request #84: IGNITE-9542 new invocation history 
implementation
URL: https://github.com/apache/ignite-teamcity-bot/pull/84
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (IGNITE-10484) Activate/deactivate cluster suite hangs sporadically

2018-11-29 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-10484:
-

 Summary: Activate/deactivate cluster suite hangs sporadically
 Key: IGNITE-10484
 URL: https://issues.apache.org/jira/browse/IGNITE-10484
 Project: Ignite
  Issue Type: Bug
Reporter: Alexey Goncharuk
Assignee: Alexey Goncharuk


I saw the following thread dump on TC (only relevant parts are kept):
{code}
"exchange-worker-#10918%cache.IgniteClusterActivateDeactivateTest0%" #13121 
prio=5 os_prio=0 tid=0x7f0720137800 nid=0xbcf runnable [0x7f0b46f66000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
- locked <0xdf6b3f88> (a java.lang.Object)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeTcpHandshake(TcpCommunicationSpi.java:3676)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3323)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2991)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2872)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2715)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2674)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1655)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1706)
at 
org.apache.ignite.internal.processors.cluster.ClusterProcessor.sendDiagnosticMessage(ClusterProcessor.java:614)
at 
org.apache.ignite.internal.processors.cluster.ClusterProcessor.requestDiagnosticInfo(ClusterProcessor.java:556)
at 
org.apache.ignite.internal.IgniteDiagnosticPrepareContext.send(IgniteDiagnosticPrepareContext.java:131)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.dumpDebugInfo(GridCachePartitionExchangeManager.java:1914)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2914)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2721)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)

...

"start-node-3" #13223 prio=5 os_prio=0 tid=0x7f08a8001800 nid=0xc30 waiting 
on condition [0x7f0a577f5000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1099)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2040)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1732)
- locked <0x959ae1d0> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:959)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:900)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:888)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:854)
at 
org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTest.lambda$testConcurrentJoinAndActivate$4(IgniteClusterActivateDeactivateTest.java:601)
at 
org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTest$$Lambda$183/97479.call(Unknown
 Source)
at 
org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:84)

...

"grid-nio-worker-tcp-comm-3-#11059%cache.IgniteClusterActivateDeactivateTest5%" 
#13297 prio=5 os_prio=0 tid=0x7f08f809f000 nid=0xc83 waiting on condition 
[0x7f0a4688d000]
   

Re: Change code style inspections to use red mark for those used at Teamcity build checks (IGNITE-10450)

2018-11-29 Thread Павлухин Иван
Hi Oleg,

Thank you for the story. Now I see the concerns better.
Thu, 29 Nov 2018 at 18:45, oignatenko :
>
> Hi Ivan,
>
>
> Павлухин Иван wrote
> > P.S. Did not really get what was the main problem related to
> > IGNITE-10399 [1]. I suppose it should go: simple problem -- fast fix.
>
> Well, speaking of IGNITE-10399 it turned out that problem is indeed simple
> and fix was fast, just as you wrote.
>
> What looked wrong is that discovery of the problem took quite long and it
> turned out to be too much effort for such a simple problem and fix.
>
> To start with, it is essentially impossible to find out when coding. You can
> see why is that at screen shot reproducing how missing import looked like in
> the file that caused this issue (attached to IGNITE-10450):
> https://issues.apache.org/jira/secure/attachment/12950020/IDEA.inspections.TC-bot.obscured.png
>
> Please notice how mark denoting troublesome inspection violation is buried
> among non-critical ones which makes it very easy to miss when coding.
>
> Another problem is, the only reliable way to tell if there is an issue is to
> run Teamcity checks. Which means one needs to push and wait for quite a
> while to just find out if there is a problem or not (note by the way how
> this way breaks if one is working offline or if Teamcity server is down /
> busy for some reason).
>
> And this is not yet the end - even when you learn that some inspection
> failed somewhere there is no easy way to just get back to your code and find
> it - because as you can see from above screen shot the problem is obscured
> too much to notice. One needs to get to Teamcity report and read details of
> the failure to find out where to look for it.
>
> Above sounds a bit too much for finding and fixing a simple missing import
> isn't it?
>
> Do we really need to push, launch TC checks, wait for completion and read
> detailed report to simply find out an issue that was already reported and
> highlighted in IDE before all that, only hard to notice.
>
> ---
>
> When I realized that I thought maybe I would prefer if IDE could highlight
> such inspections more prominently to let me find it just when I introduce
> them during coding, without cumbersome messing with Teamcity checks and
> reports and I went to Maxim and he helped me find how this could be done.
>
> And after checking how it would work I created IGNITE-10450 for us to try
> this way - because it looked so much more convenient compared to what I
> observed in IGNITE-10399.
>
> regards Oleg
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


[GitHub] ignite pull request #5411: IGNITE-8379 Add maven-surefire-plugin support for...

2018-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5411


---


[jira] [Created] (IGNITE-10483) MVCC: Enlist request deserialization failure causes grid hanging.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10483:
-

 Summary: MVCC: Enlist request deserialization failure causes grid 
hanging.
 Key: IGNITE-10483
 URL: https://issues.apache.org/jira/browse/IGNITE-10483
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Andrew Mashenkov


It looks like remote deserialization issues are not propagated back to the
near node, so the user request hangs forever.



We should add error handling for all MVCC enlist requests in
GridCacheIoManager.

 
{noformat}
[19:11:49]W: [org.apache.ignite:ignite-core] class 
org.apache.ignite.IgniteCheckedException: Failed to send response to node. 
Unsupported direct type [message=GridNearTxEnlistRequest [threadId
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processFailedMessage(GridCacheIoManager.java:1048)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:582)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:383)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:309)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:100)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:299)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1568)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1196)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1092)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
java.lang.Thread.run(Thread.java:748)
[19:11:49]W: [org.apache.ignite:ignite-core] Caused by: class 
org.apache.ignite.IgniteCheckedException: Failed to unmarshal object with 
optimized marshaller
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9997)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10049)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridInvokeValue.finishUnmarshal(GridInvokeValue.java:108)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistRequest.finishUnmarshal(GridNearTxEnlistRequest.java:359)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.unmarshall(GridCacheIoManager.java:1538)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
[19:11:49]W: [org.apache.ignite:ignite-core] ... 11 more
[19:11:49]W: [org.apache.ignite:ignite-core] Caused by: class 
org.apache.ignite.binary.BinaryObjectException: Failed to unmarshal object with 
optimized marshaller
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1789)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:101)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:81)
[19:11:49]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9991)
[19:11:49]W: 

Re: Apache Ignite 2.7. Last Mile

2018-11-29 Thread Seliverstov Igor
Ivan,

Could you provide a bit more details?

I don't see any NPE among all available logs.

I don't think the issue is caused by changes in scope of IGNITE-7953.
The test fails both before
<https://ci.ignite.apache.org/viewLog.html?buildId=2318582=buildResultsDiv=IgniteTests24Java8_ContinuousQuery4#testNameId3300126853696550025>
 and after
<https://ci.ignite.apache.org/viewLog.html?buildId=2345403=buildResultsDiv=IgniteTests24Java8_ContinuousQuery4#testNameId3300126853696550025>
the
commit was merged to master with almost the same stack trace.

Regards,
Igor

Thu, 29 Nov 2018 at 18:43, Yakov Zhdanov :

> Vladimir, can you please take a look at
> https://issues.apache.org/jira/browse/IGNITE-10376?
>
> --Yakov
>


Re: Apache Ignite 2.7. Last Mile

2018-11-29 Thread Alexey Goncharuk
Denis,

It looks like the failing test is related to existing ATOMIC caches but it
was broken by the MVCC commit, so it is a regression. Let's wait for
Vladimir Ozerov or Igor Seliverstov to comment.

Thu, 29 Nov 2018 at 19:32, Nikolay Izhikov :

> Hello, Denis.
>
> Nothing blocks now.
> I preparing vote artifacts right now.
> There are some issues with TC tasks.
> I think I resolve them in a couple of hours.
>
> Thu, 29 Nov 2018, 19:30 Denis Magda dma...@apache.org:
>
> > I think that it's not a blocker since MVCC is in the beta state and some
> of
> > the APIs might not work well with it yet.
> >
> > Apart from that, are we done with the stabilization and ready to start
> the
> > vote? What blocks us from that?
> >
> > --
> > Denis
> >
> >
> > On Thu, Nov 29, 2018 at 7:43 AM Yakov Zhdanov 
> wrote:
> >
> > > Vladimir, can you please take a look at
> > > https://issues.apache.org/jira/browse/IGNITE-10376?
> > >
> > > --Yakov
> > >
> >
>


[jira] [Created] (IGNITE-10482) Print stacktrace of the blocked thread in failure handler.

2018-11-29 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-10482:
---

 Summary: Print stacktrace of the blocked thread in failure handler.
 Key: IGNITE-10482
 URL: https://issues.apache.org/jira/browse/IGNITE-10482
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Roman Kondakov


If a blocked thread is detected by the failure handler, only the detector
thread's stack trace is printed to the log. But it is much more informative to
also print the stack trace of the blocked thread itself.

As shown below, the {{disco-event-worker}} thread has detected the blocked
{{sys-stripe-0}} thread, but the stack trace of {{sys-stripe-0}} is not
printed despite the fact that it is of particular interest.

{noformat}
[2018-11-29 
18:50:36,925][ERROR][disco-event-worker-#37%continuous.CacheContinuousQueryOrderingEventTest0%][G]
 Blocked system-critical thread has been detected. This can lead to 
cluster-wide undefined behaviour [threadName=sys-stripe-0, blockedFor=10s]
[2018-11-29 18:50:36,926][WARN 
][disco-event-worker-#37%continuous.CacheContinuousQueryOrderingEventTest0%][G] 
Thread 
[name="sys-stripe-0-#1%continuous.CacheContinuousQueryOrderingEventTest0%", 
id=13, state=WAITING, blockCnt=9, waitCnt=3704]

[2018-11-29 
18:50:36,927][ERROR][disco-event-worker-#37%continuous.CacheContinuousQueryOrderingEventTest0%][IgniteTestResources]
 Critical system error detected. Will be handled accordingly to configured 
handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
[ignoredFailureTypes=SingletonSet [SYSTEM_WORKER_BLOCKED]]], 
failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class 
o.a.i.IgniteException: GridWorker [name=sys-stripe-0, 
igniteInstanceName=continuous.CacheContinuousQueryOrderingEventTest0, 
finished=false, heartbeatTs=1543506626138]]]
class org.apache.ignite.IgniteException: GridWorker [name=sys-stripe-0, 
igniteInstanceName=continuous.CacheContinuousQueryOrderingEventTest0, 
finished=false, heartbeatTs=1543506626138]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1833)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1828)
at 
org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
at 
org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryWorker.body(GridDiscoveryManager.java:2812)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
{noformat}
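A minimal sketch of what the ticket asks for — dumping the stack of a live thread located by name (illustrative only; the class and method names are hypothetical, not the actual Ignite failure-handler code):

```java
import java.util.Map;

/** Illustrative sketch: build a dump of every live thread whose name starts
 *  with the given prefix, using the JDK's Thread.getAllStackTraces(). */
public class BlockedThreadDump {
    public static String dumpOf(String namePrefix) {
        StringBuilder sb = new StringBuilder();

        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();

            if (!t.getName().startsWith(namePrefix))
                continue;

            // Header line similar to the log format above.
            sb.append("Thread [name=").append(t.getName())
              .append(", state=").append(t.getState()).append("]\n");

            for (StackTraceElement el : e.getValue())
                sb.append("    at ").append(el).append('\n');
        }

        return sb.toString();
    }

    public static void main(String[] args) {
        // Dump the current thread as a demonstration.
        System.out.print(dumpOf(Thread.currentThread().getName()));
    }
}
```

In the failure handler, the same lookup could be done for the blocked worker's thread name (e.g. "sys-stripe-0") so that its stack trace appears next to the detector's message.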




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Apache Ignite 2.7. Last Mile

2018-11-29 Thread Nikolay Izhikov
Hello, Denis.

Nothing is blocking us now.
I am preparing the vote artifacts right now.
There are some issues with TC tasks.
I think I will resolve them in a couple of hours.

Thu, 29 Nov 2018, 19:30 Denis Magda dma...@apache.org:

> I think that it's not a blocker since MVCC is in the beta state and some of
> the APIs might not work well with it yet.
>
> Apart from that, are we done with the stabilization and ready to start the
> vote? What blocks us from that?
>
> --
> Denis
>
>
> On Thu, Nov 29, 2018 at 7:43 AM Yakov Zhdanov  wrote:
>
> > Vladimir, can you please take a look at
> > https://issues.apache.org/jira/browse/IGNITE-10376?
> >
> > --Yakov
> >
>


Re: Apache Ignite 2.7. Last Mile

2018-11-29 Thread Denis Magda
I think that it's not a blocker since MVCC is in the beta state and some of
the APIs might not work well with it yet.

Apart from that, are we done with the stabilization and ready to start the
vote? What blocks us from that?

--
Denis


On Thu, Nov 29, 2018 at 7:43 AM Yakov Zhdanov  wrote:

> Vladimir, can you please take a look at
> https://issues.apache.org/jira/browse/IGNITE-10376?
>
> --Yakov
>


[GitHub] ignite pull request #5329: IGNITE-10108: Refactored a test to avoid passing ...

2018-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5329


---


[jira] [Created] (IGNITE-10481) [ML] Examples of stacking usage

2018-11-29 Thread Yury Babak (JIRA)
Yury Babak created IGNITE-10481:
---

 Summary: [ML] Examples of stacking usage
 Key: IGNITE-10481
 URL: https://issues.apache.org/jira/browse/IGNITE-10481
 Project: Ignite
  Issue Type: Sub-task
  Components: ml
Reporter: Yury Babak






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5533: IGNITE-10289: Import models from XGBoost

2018-11-29 Thread dmitrievanthony
GitHub user dmitrievanthony opened a pull request:

https://github.com/apache/ignite/pull/5533

IGNITE-10289: Import models from XGBoost



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10289

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5533.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5533


commit 1991fea7d9f83f78f4accb62aacfc8f81406febe
Author: dmitrievanthony 
Date:   2018-11-29T13:38:18Z

IGNITE-10289: First version of XGBoost model parser.

commit e23241ee42146b98140f8071f300fd37c6ce7c01
Author: dmitrievanthony 
Date:   2018-11-29T15:51:38Z

IGNITE-10289: Add licence header into XGBoostModel.g4.

commit 4b004fc6b8400f70d7b4665c488602abf524f827
Author: dmitrievanthony 
Date:   2018-11-29T16:11:28Z

IGNITE-10289: Add example for XGBoostModelParser.




---


[jira] [Created] (IGNITE-10480) [ML] Stacking for training and inference

2018-11-29 Thread Yury Babak (JIRA)
Yury Babak created IGNITE-10480:
---

 Summary: [ML] Stacking for training and inference
 Key: IGNITE-10480
 URL: https://issues.apache.org/jira/browse/IGNITE-10480
 Project: Ignite
  Issue Type: New Feature
  Components: ml
Reporter: Yury Babak
Assignee: Artem Malykh


Stacking is an ensemble learning technique that combines multiple 
classification or regression models via a meta-classifier or a meta-regressor. 
The base level models are trained based on a complete training set, then the 
meta-model is trained on the outputs of the base level model as features.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10479) [ML] Umbrella: Ensemble training and inference

2018-11-29 Thread Yury Babak (JIRA)
Yury Babak created IGNITE-10479:
---

 Summary: [ML] Umbrella: Ensemble training and inference
 Key: IGNITE-10479
 URL: https://issues.apache.org/jira/browse/IGNITE-10479
 Project: Ignite
  Issue Type: New Feature
  Components: ml
Reporter: Yury Babak


We want to unify the API/usage of any ensembles of models.
Currently we have only boosting and bagging, and we want to implement
stacking as well.

Stacking is an ensemble learning technique that combines multiple 
classification or regression models via a meta-classifier or a meta-regressor. 
The base level models are trained based on a complete training set, then the 
meta-model is trained on the outputs of the base level model as features.





[GitHub] ignite pull request #5532: IGNITE-10353: Spring Add Update/Delete support fo...

2018-11-29 Thread JonathanGiovanny
GitHub user JonathanGiovanny opened a pull request:

https://github.com/apache/ignite/pull/5532

IGNITE-10353: Spring Add Update/Delete support for Spring Data

Added query support for Delete and Remove methods, and ```@Query``` support for other 
DML statements including ```UPDATE```, ```DELETE``` and ```MERGE```

https://issues.apache.org/jira/browse/IGNITE-10353

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JonathanGiovanny/ignite IGNITE-10353

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5532.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5532


commit 08dba0c339af450d67fb9b83991d372401c64da8
Author: Jonathan Camargo 
Date:   2018-11-28T21:27:55Z

IGNITE-10353: Spring Add Update/Delete support for Spring Data




---


Re: Change code style inspections to use red mark for those used at Teamcity build checks (IGNITE-10450)

2018-11-29 Thread oignatenko
Hi Ivan,


Павлухин Иван wrote
> P.S. Did not really get what was the main problem related to
> IGNITE-10399 [1]. I suppose it should go: simple problem -- fast fix.

Well, speaking of IGNITE-10399, it turned out that the problem is indeed simple
and the fix was fast, just as you wrote.

What looked wrong is that discovery of the problem took quite long, and it
turned out to be too much effort for such a simple problem and fix.

To start with, it is essentially impossible to find out when coding. You can
see why at the screenshot reproducing how the missing import looked in
the file that caused this issue (attached to IGNITE-10450):
https://issues.apache.org/jira/secure/attachment/12950020/IDEA.inspections.TC-bot.obscured.png

Please notice how the mark denoting the troublesome inspection violation is buried
among non-critical ones, which makes it very easy to miss when coding.

Another problem is, the only reliable way to tell if there is an issue is to
run Teamcity checks. Which means one needs to push and wait for quite a
while to just find out if there is a problem or not (note by the way how
this way breaks if one is working offline or if Teamcity server is down /
busy for some reason).

And this is not yet the end - even when you learn that some inspection
failed somewhere there is no easy way to just get back to your code and find
it - because as you can see from above screen shot the problem is obscured
too much to notice. One needs to get to Teamcity report and read details of
the failure to find out where to look for it.

The above sounds like a bit too much for finding and fixing a simple missing
import, doesn't it?

Do we really need to push, launch TC checks, wait for completion and read a
detailed report just to find an issue that was already reported and
highlighted in the IDE before all that, only too hard to notice?

---

When I realized that, I thought I would prefer the IDE to highlight
such inspections more prominently, to let me find them right when I introduce
them during coding, without cumbersome messing with Teamcity checks and
reports. So I went to Maxim and he helped me find out how this could be done.

And after checking how it would work I created IGNITE-10450 for us to try
this way - because it looked so much more convenient compared to what I
observed in IGNITE-10399.

regards Oleg



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: Apache Ignite 2.7. Last Mile

2018-11-29 Thread Yakov Zhdanov
Vladimir, can you please take a look at
https://issues.apache.org/jira/browse/IGNITE-10376?

--Yakov


[GitHub] ignite pull request #5531: IGNITE-5759 unmuted testPartitionRent test

2018-11-29 Thread akalash
GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/5531

IGNITE-5759 unmuted testPartitionRent test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5759-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5531.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5531


commit 5d4ae967cb537e284c601e05955a588049ab4f08
Author: Anton Kalashnikov 
Date:   2018-11-29T15:26:19Z

IGNITE-5759 unmuted testPartitionRent test




---


[jira] [Created] (IGNITE-10478) SQL index creation fails if the cache whose query method was called was statically configured

2018-11-29 Thread Eduard Shangareev (JIRA)
Eduard Shangareev created IGNITE-10478:
--

 Summary: SQL index creation fails if the cache whose query method was 
called was statically configured
 Key: IGNITE-10478
 URL: https://issues.apache.org/jira/browse/IGNITE-10478
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Eduard Shangareev


{code}
javax.cache.CacheException: Cache doesn't exist: cache2
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:698)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:637)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:455)
at 
org.gridgain.grid.internal.processors.cache.database.IgniteDbSnapshotSameTopologyTest.testDynamicIndexForStaticCacheRestoredWithRebuild(IgniteDbSnapshotSameTopologyTest.java:1469)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2166)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:144)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:2082)
at java.lang.Thread.run(Thread.java:745)
Caused by: class 
org.apache.ignite.internal.processors.query.IgniteSQLException: Cache doesn't 
exist: cache2
at 
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.convert(DdlStatementsProcessor.java:642)
at 
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.runDdlStatement(DdlStatementsProcessor.java:242)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFieldsNative(IgniteH2Indexing.java:1661)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1823)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2175)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2170)
at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2678)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$0(GridQueryProcessor.java:2184)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2204)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2165)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2126)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:686)
... 12 more
{code}

As I see, it is thrown from 
{code}
 if (cache == null || !F.eq(depId, cache.context().dynamicDeploymentId()))
throw new 
SchemaOperationException(SchemaOperationException.CODE_CACHE_NOT_FOUND, 
cacheName);
{code}
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor#processSchemaOperationLocal.

And the second check fails.

From my point of view, it looks strange that we throw CODE_CACHE_NOT_FOUND in 
this case.





[GitHub] ignite pull request #5526: IGNITE-10449: Fix javadoc and typos in ML module.

2018-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5526


---


Re: Change code style inspections to use red mark for those used at Teamcity build checks (IGNITE-10450)

2018-11-29 Thread Павлухин Иван
Hi Oleg,

First of all I am not against giving it a try. Let's try it.

P.S. Did not really get what was the main problem related to
IGNITE-10399 [1]. I suppose it should go: simple problem -- fast fix.

[1] https://issues.apache.org/jira/browse/IGNITE-10399
Thu, Nov 29, 2018 at 16:04, oignatenko :
>
> Hi Ivan,
>
> You are right that these things will distract people and from this
> perspective it is very well justified that vast majority of style deviations
> (currently, all of them) are marked yellow. These are non-critical and if
> developer ignores them nothing immediately bad happens.
>
> (For the sake of completeness one can argue that style deviations contribute
> to technical debt but it's tangential to current discussion and here we can
> probably consider them just harmless for simplicity.)
>
> The problem with five of these inspections that are proposed to change is,
> these became different after IGNITE-9983 and above reasoning doesn't work
> for these anymore. These five inspections are now checked at Teamcity and
> when there are new deviations it reports problems.
>
> Essentially this means that developer introducing new violations in these
> inspections is going to be distracted anyway - if they ignore at the coding
> phase they still will be chased by the warnings from Teamcity. You can check
> details of IGNITE-10399 because it shows a good example of how it goes.
>
> So what we're discussing here is essentially not about whether to distract
> developer or not (because they will be distracted anyway) but when it is
> more convenient to distract - at coding time or after Teamcity check.
> Granted, delaying this to TC checks felt okay to me before we tried it but
> observing how it really goes (in mentioned above IGNITE-10399) made me
> curious if maybe we could try another option by raising this issue at coding
> time instead.
>
> This is the whole point of this change, to let us try how it would go if we
> warn developers about inspections impacting Teamcity in the time of coding.
> As I wrote in beginning of this thread I briefly tried it myself and it
> looked quite promising.
>
> To avoid misunderstanding, I would like to make it clear that at this point
> it is not supposed to be a one way change because my testing was too brief
> to say for sure if this is the way to go. Current plan is that we give it a
> try for a while and later - depending on how folks feel - decide whether to
> keep these inspections red or revert them back to yellow.
>
> Does that make sense Ivan?
>
> regards, Oleg
>
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


[GitHub] ignite pull request #5530: Ignite ign 12530

2018-11-29 Thread aealeksandrov
GitHub user aealeksandrov opened a pull request:

https://github.com/apache/ignite/pull/5530

Ignite ign 12530

created for TC run

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-ign-12530

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5530.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5530


commit afc5fc1789d75573f71b40c5e241484c7a578197
Author: ezhuravl 
Date:   2018-04-17T15:41:56Z

IGNITE-6113 Changed mistake in version for partition demand version

(cherry picked from commit 83b9ffd)

commit 25f6a2013aa584559623410b7a96951f79fb00ff
Author: Ivan Daschinskiy 
Date:   2018-04-17T16:55:42Z

IGNITE-8021 Delete cache config files when cache is destroyed - Fixes #3697.

Signed-off-by: Alexey Goncharuk 
(cherry picked from commit 2edcb22fbb566981097733af6470ed6dde8e786b)

commit 5461dd64ee15f02be7934b33d0ca92130aa70512
Author: Ilya Kasnacheev 
Date:   2018-04-17T17:04:28Z

IGNITE-2766 Fix .net test.

commit 9f5b27fae9ac57ae5b256cb8593dfe587b4accb8
Author: oleg-ostanin 
Date:   2018-04-17T17:58:53Z

IGNITE-8274 sqlline.sh script uses JAVA_HOME now

Signed-off-by: Andrey Gura 

(cherry picked from commit c3ff274)

commit 640167f2c9384fddd69e6244b615e4974bfe2b50
Author: Maxim Muzafarov 
Date:   2018-04-18T09:20:13Z

IGNITE-8301 testReconnectCacheDestroyedAndCreated should excpect recreated 
client cache.

Cherry-picked from 56be24b9dfc14023bacaab63f40e0504b317eda3

commit 89b8426a2a113b6893a2295044d6dc0e94015a94
Author: Alexey Kuznetsov 
Date:   2018-04-18T11:49:12Z

ignite-2.4.4 Fixed default node version.

commit 048c21a3cc7d00a1c5951137f3747904e00405ea
Author: Alexey Kuznetsov 
Date:   2018-04-19T07:14:51Z

IGNITE-8298 Web Console: Fixed tables UI issues.

(cherry picked from commit f3848a2)

commit 18a3ba0f6dc07729f78a24b345dbfc1588cdb4c2
Author: Dmitriy Shabalin 
Date:   2018-04-19T08:16:18Z

IGNITE-8298 Web Console: Fixed loader under Safari.

(cherry picked from commit 0897309)

commit 0499793d49d5e48d5fdec97bbb8c2ac609e5056e
Author: Ivan Daschinskiy 
Date:   2018-04-19T12:25:23Z

IGNITE-8021 Fixed tests - Fixes #3864.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit 8fc1824)

commit 8b21e0b36d7d035ff52bcff067f002140f4b8b97
Author: Alexey Kuznetsov 
Date:   2018-03-23T10:53:15Z

IGNITE-7119 Web Agent: Implemented support for comma-separated list of node 
URIs.

(cherry picked from commit ee0f4c0)

commit a9f63143059fc489342cadc0c89d7d8fd389fdff
Author: Denis Mekhanikov 
Date:   2018-04-20T14:11:36Z

ignite-8205 Clear list of local services in 
GridServiceProcessor#onKernalStop

Signed-off-by: Andrey Gura 
(cherry picked from commit fbe24f8e3b0d9016a69670ca2bc50766865adf38)

commit 2aa4d60df18e57f28814675cf37298ba952035b7
Author: Denis Mekhanikov 
Date:   2018-04-20T15:41:06Z

IGNITE-8134 Subscribe to system cache events on nodes outside BLT

Signed-off-by: Andrey Gura 
(cherry picked from commit c82277eb4e48f95dfec8cb0206c019820a765432)

commit ef140ce1102c37295fe9c52d4fcc52b7bdd2bb09
Author: Alexey Kuznetsov 
Date:   2018-04-23T08:44:09Z

IGNITE-8298 Web Console: Fixed tables UI issues.

commit 561950f4afc37a078eefc54664f56bdff6d2dcfd
Author: Anton Kurbanov 
Date:   2018-04-21T18:23:21Z

IGNITE-8154 - Add an ability to provide ExceptionListener to JmsStreamer - 
Fixes #3828

Signed-off-by: Valentin Kulichenko 

commit 1dbd6970fd2ce611c0cbbfa9256b08a934fc8666
Author: Anton Kurbanov 
Date:   2018-04-23T09:24:50Z

Merge branch 'ignite-2.4-master' of 
https://github.com/gridgain/apache-ignite into ignite-2.4-master

commit cafbff336761c5464cb60b68b0f7193d5c998d9f
Author: Andrey V. Mashenkov 
Date:   2018-04-16T17:43:36Z

IGNITE-7972 Fixed NPE in TTL manager on unwindEvicts. - Fixes #3810.

Signed-off-by: dpavlov 

(cherry picked from commit 737933e)

commit 16fa0132be0cce8e2af2566fd7ad06a741b5fee0
Author: Andrey V. Mashenkov 
Date:   2018-02-07T15:25:25Z

IGNITE-7508: Fix contention on system property access in 
GridKernalContextImpl::isDaemon(). This closes #3468.

(cherry picked from commit d2b41a0)

commit 996e3f5b39746777eecad73bc303838fe76121c2
Author: tledkov-gridgain 
Date:   2018-04-23T15:20:21Z

IGNITE-8355 Fixed NPE on concurrent nodes start - Fixes #3899.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit 5a981c9)

commit b9bcc8d6bbcbe3d78e171d55ebedb14d21b28c26
Author: tledkov-gridgain 
Date:   2018-04-23T15:20:21Z

IGNITE-8355 Fixed NPE on concurrent nodes start - Fixes #3899.

Signed-off-by: Alexey Goncharuk 

(cherry picked from commit 5a981c9)

commit 

Re: Apache Ignite 2.7. Last Mile

2018-11-29 Thread Ivan Fedotov
Hello Igniters.

During my work on the ticket IGNITE-10376
 I found that it
started to fail after the integration of Continuous Query with MVCC.

I launched 
CacheContinuousQueryOrderingEventTest.testAtomicOnheapTwoBackupAsyncFullSync
before the corresponding commit

and it worked correctly more than 500 times. So I think this is a regression,
because in testAtomicOnheapTwoBackupAsyncFullSync the atomic cache mode
is used.

What do you think, how is this problem related to MVCC?
What could be a possible way to resolve it?

Wed, Nov 28, 2018 at 12:19, Vladimir Ozerov :

> Fixed. Thank you for noting it.
>
> On Wed, Nov 28, 2018 at 6:22 AM Alexey Kuznetsov 
> wrote:
>
> > Hi,
> >
> > We found a regression https://issues.apache.org/jira/browse/IGNITE-10432
> >
> > Please take a look.
> >
> > --
> > Alexey Kuznetsov
> >
>


-- 
Ivan Fedotov.

ivanan...@gmail.com


[GitHub] ignite pull request #5485: IGNITE-10354 Failing client node due to not recei...

2018-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5485


---


[jira] [Created] (IGNITE-10477) An empty cluster fails if the WAL segment size is small

2018-11-29 Thread ARomantsov (JIRA)
ARomantsov created IGNITE-10477:
---

 Summary: An empty cluster fails if the WAL segment size is small
 Key: IGNITE-10477
 URL: https://issues.apache.org/jira/browse/IGNITE-10477
 Project: Ignite
  Issue Type: Bug
  Components: data structures
Affects Versions: 2.7
Reporter: ARomantsov
 Fix For: 2.8


I set   and try to activate an 
empty cluster.
The cluster is stopped by the failure handler with the following error:
{code:java}
[15:45:12,723][SEVERE][db-checkpoint-thread-#99][] Critical system error 
detected. Will be handled accordingly to configured handler [hnd=class 
o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
[type=SYSTEM_WORKER_TE
RMINATION, err=java.lang.IllegalArgumentException: Record is too long 
[capacity=100, size=1802204]]]
java.lang.IllegalArgumentException: Record is too long [capacity=100, 
size=1802204]
at 
org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer.offer0(SegmentedRingByteBuffer.java:214)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer.offer(SegmentedRingByteBuffer.java:193)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileWriteHandle.addRecord(FileWriteAheadLogManager.java:2472)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileWriteHandle.access$1600(FileWriteAheadLogManager.java:2376)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.log(FileWriteAheadLogManager.java:821)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointBegin(GridCacheDatabaseSharedManager.java:3604)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:3091)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2990)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
{code}
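The failure mode is easy to reproduce in isolation: a ring buffer can never accept a record larger than its total capacity, no matter how much free space later becomes available. The sketch below is illustrative only - the class and method names are hypothetical, not Ignite's actual SegmentedRingByteBuffer implementation:

```java
public class RingBufferGuard {
    private final long capacity;

    public RingBufferGuard(long capacity) {
        this.capacity = capacity;
    }

    /**
     * Mirrors the check that rejects oversized records: a record that does
     * not fit into the buffer even when it is empty can never be written.
     */
    public boolean offer(long recordSize) {
        if (recordSize > capacity)
            throw new IllegalArgumentException(
                "Record is too long [capacity=" + capacity + ", size=" + recordSize + "]");
        return true; // a real buffer would also check the current free space
    }

    public static void main(String[] args) {
        RingBufferGuard buf = new RingBufferGuard(1_000_000);
        try {
            // A checkpoint record bigger than the whole segment buffer.
            buf.offer(1_802_204);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This suggests the checkpoint-begin record in the report simply exceeded the configured WAL segment buffer, so either the small-segment configuration should be rejected upfront or the error reported more gracefully.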






[jira] [Created] (IGNITE-10476) Merge similar tests.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10476:
-

 Summary: Merge similar tests.
 Key: IGNITE-10476
 URL: https://issues.apache.org/jira/browse/IGNITE-10476
 Project: Ignite
  Issue Type: Test
Reporter: Andrew Mashenkov


CacheNamesSelfTest and CacheNamesWithSpecialCharactersTest look similar and 
can be merged.

We already have a test suite these tests are related to, so we can merge them 
into GridCacheConfigurationValidationSelfTest.

 





Re: Change code style inspections to use red mark for those used at Teamcity build checks (IGNITE-10450)

2018-11-29 Thread oignatenko
Hi Ivan,

You are right that these things will distract people and from this
perspective it is very well justified that vast majority of style deviations
(currently, all of them) are marked yellow. These are non-critical and if
developer ignores them nothing immediately bad happens.

(For the sake of completeness one can argue that style deviations contribute
to technical debt but it's tangential to current discussion and here we can
probably consider them just harmless for simplicity.)

The problem with five of these inspections that are proposed to change is,
these became different after IGNITE-9983 and above reasoning doesn't work
for these anymore. These five inspections are now checked at Teamcity and
when there are new deviations it reports problems.

Essentially this means that developer introducing new violations in these
inspections is going to be distracted anyway - if they ignore at the coding
phase they still will be chased by the warnings from Teamcity. You can check
details of IGNITE-10399 because it shows a good example of how it goes.

So what we're discussing here is essentially not about whether to distract
developer or not (because they will be distracted anyway) but when it is
more convenient to distract - at coding time or after Teamcity check.
Granted, delaying this to TC checks felt okay to me before we tried it but
observing how it really goes (in mentioned above IGNITE-10399) made me
curious if maybe we could try another option by raising this issue at coding
time instead.

This is the whole point of this change, to let us try how it would go if we
warn developers about inspections impacting Teamcity in the time of coding.
As I wrote in beginning of this thread I briefly tried it myself and it
looked quite promising.

To avoid misunderstanding, I would like to make it clear that at this point
it is not supposed to be a one way change because my testing was too brief
to say for sure if this is the way to go. Current plan is that we give it a
try for a while and later - depending on how folks feel - decide whether to
keep these inspections red or revert them back to yellow.

Does that make sense Ivan?

regards, Oleg




--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[jira] [Created] (IGNITE-10475) Introduce IDEA async debugger annotations.

2018-11-29 Thread Ivan Bessonov (JIRA)
Ivan Bessonov created IGNITE-10475:
--

 Summary: Introduce IDEA async debugger annotations.
 Key: IGNITE-10475
 URL: https://issues.apache.org/jira/browse/IGNITE-10475
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Bessonov
Assignee: Ivan Bessonov


The "JetBrains Java Annotations" library introduced the "@Async" annotation in version 
16:
https://www.jetbrains.com/help/idea/async-stacktraces.html
Since we use this version now, we may as well integrate "@Async" into 
"IgniteFuture" and maybe other suitable classes.





[jira] [Created] (IGNITE-10474) MVCC: IgniteCacheConnectionRecovery10ConnectionsTest.testConnectionRecovery fails.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10474:
-

 Summary: MVCC: 
IgniteCacheConnectionRecovery10ConnectionsTest.testConnectionRecovery fails.
 Key: IGNITE-10474
 URL: https://issues.apache.org/jira/browse/IGNITE-10474
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Andrew Mashenkov


IgniteCacheConnectionRecovery10ConnectionsTest.testConnectionRecovery fails due 
to hanging.

We have to investigate and fix this.





[jira] [Created] (IGNITE-10473) MVCC: Add support for nested system transactions.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10473:
-

 Summary: MVCC: Add support for nested system transactions.
 Key: IGNITE-10473
 URL: https://issues.apache.org/jira/browse/IGNITE-10473
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Andrew Mashenkov


IgniteCacheSystemTransactionsSelfTest.testSystemTxInsideUserTx fails in Mvcc 
mode because an inner system tx commits the external mvcc tx unexpectedly.

We have to analyse the cases where system transactions can be used and deprecate such cases.

Then we have to either add MVCC support for system caches somehow or allow 
non-mvcc nested transactions.





[GitHub] ololo3000 opened a new pull request #85: IGNITE-10436 Add ticket and PR links on report TC Bot page

2018-11-29 Thread GitBox
ololo3000 opened a new pull request #85: IGNITE-10436 Add ticket and PR links 
on report TC Bot page
URL: https://github.com/apache/ignite-teamcity-bot/pull/85
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (IGNITE-10472) MVCC: EntryProcessor resource injection missed.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10472:
-

 Summary: MVCC: EntryProcessor resource injection missed.
 Key: IGNITE-10472
 URL: https://issues.apache.org/jira/browse/IGNITE-10472
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Andrew Mashenkov


IgniteCacheMvccTxInvokeTest.testInvokeAllAppliedOnceOnBinaryTypeRegistration 
fails with NPE, see stacktrace below.

It seems we forgot to inject resources into the EntryProcessor.

 

 
{noformat}
java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.cache.IgniteCacheInvokeAbstractTest$2.process(IgniteCacheInvokeAbstractTest.java:345)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.applyEntryProcessor(IgniteCacheOffheapManagerImpl.java:2096)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.mvccUpdate(IgniteCacheOffheapManagerImpl.java:1998)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.mvccUpdate(IgniteCacheOffheapManagerImpl.java:543)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.mvccSet(GridCacheMapEntry.java:1142)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.continueLoop(GridDhtTxAbstractEnlistFuture.java:463)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.init(GridDhtTxAbstractEnlistFuture.java:363)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.processNearTxEnlistRequest(GridDhtTransactionalCacheAdapter.java:2064)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.access$900(GridDhtTransactionalCacheAdapter.java:112)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$14.apply(GridDhtTransactionalCacheAdapter.java:229)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$14.apply(GridDhtTransactionalCacheAdapter.java:227)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1059)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:584)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:383)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:309)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:100)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:299)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1568)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1196)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1092)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
{noformat}
 





[jira] [Created] (IGNITE-10471) SQL: ALTER TABLE ADD COLUMN syntax doesn't support user types, but DROP COLUMN can drop such fields

2018-11-29 Thread Andrey Aleksandrov (JIRA)
Andrey Aleksandrov created IGNITE-10471:
---

 Summary: SQL: ALTER TABLE ADD COLUMN syntax doesn't support user 
types, but DROP COLUMN can drop such fields
 Key: IGNITE-10471
 URL: https://issues.apache.org/jira/browse/IGNITE-10471
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.6
Reporter: Andrey Aleksandrov
 Fix For: 2.8


Looks like ALTER TABLE ... ADD COLUMN commands can be used only with a subset of 
supported types (those that can be used in the CREATE TABLE syntax):

[https://apacheignite-sql.readme.io/docs/data-types]

But using query entities we can configure a field that has a user type.

Such fields can be used in:

ALTER TABLE CACHE_NAME DROP COLUMN COLUMN_NAME

But they can't be used as follows:

ALTER TABLE CACHE_NAME ADD COLUMN COLUMN_NAME 'type'

where 'type' is a user-defined one.

Possibly we should think about supporting POJOs and enums.
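The asymmetry can be illustrated with a toy validator in Java (the class name and the type list below are illustrative assumptions, not Ignite's actual parser code): ADD COLUMN must resolve the column's SQL type against a fixed list, while DROP COLUMN never needs the type at all, which is why user-typed fields can be dropped but not added.

```java
import java.util.Set;

public class AddColumnTypeCheck {
    // Illustrative subset of SQL types accepted in CREATE TABLE / ADD COLUMN;
    // not the actual list used by Ignite's parser.
    static final Set<String> SUPPORTED = Set.of(
        "VARCHAR", "INT", "BIGINT", "DOUBLE", "BOOLEAN", "TIMESTAMP", "UUID");

    /** ADD COLUMN must resolve the declared type against the SQL type list. */
    static boolean canAddColumn(String typeName) {
        return SUPPORTED.contains(typeName.toUpperCase());
    }

    /** DROP COLUMN only needs the column name, so user-typed fields pass. */
    static boolean canDropColumn(String columnName) {
        return columnName != null && !columnName.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(canAddColumn("VARCHAR"));          // true
        System.out.println(canAddColumn("com.example.Pojo")); // false - user type rejected
        System.out.println(canDropColumn("pojoField"));       // true - drop works anyway
    }
}
```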





[jira] [Created] (IGNITE-10470) MVCC: Missed error handling for mvcc transaction updates.

2018-11-29 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-10470:
---

 Summary: MVCC: Missed error handling for mvcc transaction updates.
 Key: IGNITE-10470
 URL: https://issues.apache.org/jira/browse/IGNITE-10470
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Roman Kondakov


Critical errors (page corruption, etc.) during mvcc updates/reads should be 
covered by the configured error handler. 
See {{TransactionIntegrityWithPrimaryIndexCorruptionTest}} with mvcc enabled.





[GitHub] ignite pull request #5355: Tdr 127 ag

2018-11-29 Thread agura
Github user agura closed the pull request at:

https://github.com/apache/ignite/pull/5355


---


[GitHub] ignite pull request #5372: Tdr 127 ag master

2018-11-29 Thread agura
Github user agura closed the pull request at:

https://github.com/apache/ignite/pull/5372


---


[GitHub] ignite pull request #5501: IGNITE-10412 Reproducer

2018-11-29 Thread agura
Github user agura closed the pull request at:

https://github.com/apache/ignite/pull/5501


---


[jira] [Created] (IGNITE-10469) TcpCommunicationSpi does not break tcp connection after IdleConnectionTimeout seconds of inactivity

2018-11-29 Thread Igor Kamyshnikov (JIRA)
Igor Kamyshnikov created IGNITE-10469:
-

 Summary: TcpCommunicationSpi does not break tcp connection after 
IdleConnectionTimeout seconds of inactivity
 Key: IGNITE-10469
 URL: https://issues.apache.org/jira/browse/IGNITE-10469
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.6, 2.5
Reporter: Igor Kamyshnikov
 Attachments: GridTcpCommunicationSpiIdleCommunicationTimeoutTest.java, 
ignite_idle_test.zip

TcpCommunicationSpi does not close TCP connections after they have been idle 
for longer than the amount of time configured in 
TcpCommunicationSpi#idleConnTimeout (default is 10 minutes).

There are environments where idle TCP connections become unusable: connections 
remain ESTABLISHED while the actual data to be sent piles up in Send-Q (according 
to netstat). For this reason the Ignite stack does not recognize a communication 
problem for a considerable amount of time (~ 10-15 minutes), and it does not 
begin its reconnection procedure (heartbeats use different TCP connections that 
are not idle and don't have this issue).

I've discovered that there is logic in the Ignite code to detect and close 
idle connections, but due to a problem in the code it does not work reliably.

This is a test that _sometimes_ reproduces the problem.
[^ignite_idle_test.zip] - full test project
[^GridTcpCommunicationSpiIdleCommunicationTimeoutTest.java] - just test code

What's the problem in the Ignite code?

There are two loops in the Ignite code that have a chance to close idle 
connections:
1) 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.CommunicationWorker#processIdle
 - this one is executed every *IdleConnectionTimeout* milliseconds. (It can 
close idle connections, but it typically concludes that a connection is not 
idle, thanks to the second loop.)
2) 
org.apache.ignite.internal.util.nio.GridNioServer.AbstractNioClientWorker#bodyInternal
 -> 
org.apache.ignite.internal.util.nio.GridNioServer.AbstractNioClientWorker#checkIdle
 - this loop executes:
{noformat}
filterChain.onSessionIdleTimeout(ses); <-- does not actually close an idle 
connection
// Update timestamp to avoid multiple notifications within one timeout interval.
ses.resetSendScheduleTime(); <--- resets idle timer
ses.bytesReceived(0);
{noformat}
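A minimal sketch of the intended behavior, with a hypothetical `Session` holder (an illustrative stand-in, not Ignite's actual `GridNioSession`): the sweep closes sessions that have been idle for at least the timeout and, unlike the snippet above, never resets the idle timestamp as a side effect of checking.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/** Hypothetical sketch; Session is an illustrative stand-in, not Ignite's GridNioSession. */
class IdleSweepSketch {
    static class Session {
        final String id;
        final long lastActivityMs; // would be refreshed on real send/receive, not on inspection
        boolean closed;

        Session(String id, long lastActivityMs) {
            this.id = id;
            this.lastActivityMs = lastActivityMs;
        }
    }

    /**
     * Closes (and removes) every session idle for at least idleTimeoutMs.
     * Crucially, checking a session does NOT touch its idle timestamp, so a
     * periodic sweep can actually observe sessions as idle.
     */
    static List<Session> sweepIdle(List<Session> sessions, long nowMs, long idleTimeoutMs) {
        List<Session> closed = new ArrayList<>();

        for (Iterator<Session> it = sessions.iterator(); it.hasNext(); ) {
            Session ses = it.next();

            if (nowMs - ses.lastActivityMs >= idleTimeoutMs) {
                ses.closed = true; // real code would close the underlying channel here
                closed.add(ses);
                it.remove();
            }
        }

        return closed;
    }
}
```

With the default 10-minute timeout, a session last active at t=0 gets swept at any t >= 600 000 ms, while a recently active one survives the same sweep.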

---
To wind up, maybe the whole approach should be reviewed:
 - is it OK not to track message delivery time?
 - is it OK not to do heartbeating over the same connections as are used for 
get/put/... commands?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10468) MVCC TX: Failover

2018-11-29 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-10468:
-

 Summary: MVCC TX: Failover
 Key: IGNITE-10468
 URL: https://issues.apache.org/jira/browse/IGNITE-10468
 Project: Ignite
  Issue Type: Task
Reporter: Igor Seliverstov


There are several problems in mvcc failover scenarios.

The task is an umbrella ticket for currently known issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10467) MVCC: Failure handling for vacuum threads.

2018-11-29 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-10467:
---

 Summary: MVCC: Failure handling for vacuum threads.
 Key: IGNITE-10467
 URL: https://issues.apache.org/jira/browse/IGNITE-10467
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Roman Kondakov


At the moment any critical error in the vacuum threads (vacuum-worker, 
vacuum-scheduler) is not handled in any way.

A {{FailureHandler}} implementation should be used to manage such errors. It is 
also necessary to separate valid cases (partition eviction) from others (page 
corruption, etc.)
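A hedged sketch of the proposed handling (the class and exception names below are invented for illustration; Ignite's real FailureHandler interface differs): route unexpected throwables to the configured handler, while letting valid cases such as partition eviction pass silently.

```java
/** Hypothetical sketch of routing vacuum errors; names are illustrative, not Ignite API. */
class VacuumErrorRoutingSketch {
    /** Simplified stand-in for a configured failure handler. */
    interface FailureHandler {
        void onFailure(Throwable t);
    }

    /** Marker for an expected interruption (e.g. partition eviction), not a real Ignite class. */
    static class PartitionEvictedException extends RuntimeException {}

    /** Runs one vacuum task; returns true on success, false if the task was aborted. */
    static boolean runVacuumTask(Runnable task, FailureHandler hnd) {
        try {
            task.run();
            return true;
        }
        catch (PartitionEvictedException e) {
            return false; // valid case: the partition went away, vacuum will simply retry later
        }
        catch (Throwable t) {
            hnd.onFailure(t); // critical error (page corruption, ...) -> configured handler
            return false;
        }
    }
}
```

The key design point is the ordering of catch clauses: the "valid" cases are filtered out first, so only genuinely critical errors ever reach the handler.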



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10466) Test IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse fails sporadically.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10466:
-

 Summary: Test 
IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse fails 
sporadically.
 Key: IGNITE-10466
 URL: https://issues.apache.org/jira/browse/IGNITE-10466
 Project: Ignite
  Issue Type: Bug
  Components: general
Reporter: Andrew Mashenkov


PDS 4 suite fails sporadically on TC when TC is highly loaded.

It looks like there are connectivity issues on the client node after cluster 
activation and before the failure.
But TimeoutProcessor fails on the client node for no obvious reason.

 

 
{noformat}
[22:45:44]W: [org.apache.ignite:ignite-core] class 
org.apache.ignite.IgniteException: GridWorker [name=grid-nio-worker-tcp-comm-1, 
igniteInstanceName=client0, finished=false, heartbeatTs=15434
[22:45:44]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1833)
[22:45:44]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1828)
[22:45:44]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
[22:45:44]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
[22:45:44]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$TimeoutWorker.body(GridTimeoutProcessor.java:221)
[22:45:44]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
[22:45:44]W: [org.apache.ignite:ignite-core] at 
java.lang.Thread.run(Thread.java:748)
{noformat}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Historical rebalance

2018-11-29 Thread Seliverstov Igor
Ivan,

different transactions may be applied in different orders on backup nodes.
That's why we need an active tx set
and some sorting by their update times. The idea is to identify a point in
time starting from which we may have lost some updates.
This point:
   1) is the last update on the timeline acknowledged by all backups
(including a possible further demander);
   2) has a specific update counter (aka back-counter) which we are going to
start iteration from.

After thinking on it some more, I've identified a rule.

There are two fences:
  1) update counter (UC) - this means that all updates with a smaller UC than
the applied one were applied on a node having this UC.
  2) updates in scope of a TX - all updates are applied one by one
sequentially; this means that the fact of an update guarantees that the
previous update (statement) was finished on all TX participants.

Combining them, we can state the following:

all updates that were acknowledged by the time the last update of a tx which
advanced the UC was applied are guaranteed to be present on a node having
such a UC.

We can use this rule to find an iterator start pointer.
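A toy model of this bookkeeping, under heavy simplifying assumptions (single partition, no acknowledgement tracking, illustrative names only — this is not Ignite code): remember the update counter that each active transaction's first write received; everything below the oldest such counter is already settled, so it can serve as the "back-counter" to start historical iteration from.

```java
import java.util.Map;
import java.util.TreeMap;

/** Hypothetical sketch of the "back-counter" idea; names are illustrative, not Ignite API. */
class BackCounterSketch {
    /** Partition update counter (high watermark). */
    private long updCntr;

    /** txId -> counter assigned to the tx's first update; entries live while the tx is active. */
    private final Map<Long, Long> activeTxFirstCntr = new TreeMap<>();

    /** A tx applies one update; remember the counter its first update received. */
    long onTxUpdate(long txId) {
        long cntr = ++updCntr;
        activeTxFirstCntr.putIfAbsent(txId, cntr);
        return cntr;
    }

    /** The tx committed or rolled back; it no longer pins the back-counter. */
    void onTxFinish(long txId) {
        activeTxFirstCntr.remove(txId);
    }

    /**
     * Counter to start historical iteration from: everything up to (and
     * including) this value is guaranteed applied, because it precedes the
     * first update of the oldest still-active tx.
     */
    long backCounter() {
        return activeTxFirstCntr.isEmpty()
            ? updCntr
            : activeTxFirstCntr.values().stream().min(Long::compare).get() - 1;
    }
}
```

The simplification drops the candidate list and acknowledgement timeline from the description above, keeping only the core invariant: a demander must rewind to before the first write of the oldest in-flight transaction.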

Wed, 28 Nov 2018 at 20:26, Павлухин Иван :

> Guys,
>
> One more idea. We can introduce an additional update counter which is
> incremented by MVCC transactions right after executing an operation (like
> is done for classic transactions). And we can use that counter for
> searching for the needed WAL records. Can it do the trick?
>
> P.S. Mentally I am trying to separate facilities providing
> transactions and durability. And it seems to me that those facilities
> are in different dimensions.
> Wed, 28 Nov 2018 at 16:26, Павлухин Иван :
> >
> > Sorry, if it was stated that a SINGLE transaction's updates are applied
> > in the same order on all replicas then I have no questions so far. I
> > thought about the reordering of updates coming from different transactions.
> > > I have not got why we can assume that reordering is not possible. What
> > have I missed?
> > Wed, 28 Nov 2018 at 13:26, Павлухин Иван :
> > >
> > > Hi,
> > >
> > > Regarding Vladimir's new idea.
> > > > We assume that transaction can be represented as a set of
> independent operations, which are applied in the same order on both primary
> and backup nodes.
> > > I have not got why we can assume that reordering is not possible. What
> > > have I missed?
> > > Tue, 27 Nov 2018 at 14:42, Seliverstov Igor :
> > > >
> > > > Vladimir,
> > > >
> > > > I think I got your point,
> > > >
> > > > It should work if we do the following:
> > > > introduce two structures: an active list (txs) and a candidate list
> > > > (updCntr -> txn pairs)
> > > >
> > > > Track active txs, mapping them to the actual update counter at update time.
> > > > On each next update, put the update counter associated with the previous
> > > > update into the candidate list, possibly overwriting the existing value
> > > > (checking the txn).
> > > > On tx finish, remove the tx from the active list only if the appropriate
> > > > update counter (associated with the finished tx) is applied.
> > > > On an update counter update, set the minimal update counter from the
> > > > candidate list as a back-counter, clear the candidate list and remove the
> > > > associated tx from the active list if present.
> > > > Use the back-counter instead of the actual update counter in the demand message.
> > > >
> > > > Tue, 27 Nov 2018 at 12:56, Seliverstov Igor  >:
> > > >
> > > > > Ivan,
> > > > >
> > > > > 1) The list is saved on each checkpoint, wholly (all transactions in
> > > > > active state at the moment the checkpoint begins).
> > > > > We need the whole list to get the oldest transaction because after
> > > > > the previous oldest tx finishes, we need to get the following one.
> > > > >
> > > > > 2) I guess there is a description of how the persistent storage works
> > > > > and how it restores [1]
> > > > >
> > > > > Vladimir,
> > > > >
> > > > > the whole list of what we are going to store on checkpoint (updated):
> > > > > 1) Partition counter low watermark (LWM)
> > > > > 2) WAL pointer of the earliest active transaction's write to the
> > > > > partition at the time the checkpoint has started
> > > > > 3) List of prepared txs with acquired partition counters (which were
> > > > > acquired but not applied yet)
> > > > >
> > > > > This way we don't need any additional info in the demand message. The
> > > > > start point can be easily determined using the stored WAL "back-pointer".
> > > > >
> > > > > [1]
> > > > >
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-LocalRecoveryProcess
> > > > >
> > > > >
> > > > >> Tue, 27 Nov 2018 at 11:19, Vladimir Ozerov <
> voze...@gridgain.com>:
> > > > >
> > > > >> Igor,
> > > > >>
> > > > >> Could you please elaborate - what is the whole set of information
> we are
> > > > >> going to save at checkpoint time? From what I understand this
> should be:
> > > > >> 1) List of active transactions with WAL pointers of their first
> writes
> > > > >> 2) List of prepared transactions with 

[jira] [Created] (IGNITE-10465) TTL Worker can fail on node start due to a race.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10465:
-

 Summary: TTL Worker can fail on node start due to a race.
 Key: IGNITE-10465
 URL: https://issues.apache.org/jira/browse/IGNITE-10465
 Project: Ignite
  Issue Type: Bug
  Components: cache, persistence
Reporter: Andrew Mashenkov


PDS 3 suite times out sporadically on TC if TC is under high load.
It seems there is a race and TtlWorker starts before the node has joined. 

Here is failure dump:
{noformat}
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.getGridStartTime(TcpDiscoverySpi.java:1456)
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.gridStartTime(GridDiscoveryManager.java:2245)
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.version.GridCacheVersionManager.next(GridCacheVersionManager.java:279)
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.version.GridCacheVersionManager.next(GridCacheVersionManager.java:201)
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpiredInternal(GridCacheOffheapM
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpired(GridCacheOffheapManager.j
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.expire(GridCacheOffheapManager.java:986)
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:207)
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:141
[17:32:47]W: [org.apache.ignite:ignite-core] at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120){noformat}
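One possible fix shape, sketched with hypothetical names (Ignite's actual worker wiring differs): don't let the cleanup worker run its body until the node has joined, so it never touches grid start time prematurely.

```java
import java.util.concurrent.CountDownLatch;

/** Hypothetical sketch of gating a background worker on node join; not Ignite's actual wiring. */
class JoinGatedWorkerSketch {
    private final CountDownLatch joinLatch = new CountDownLatch(1);

    /** Called once discovery completes and the node has joined the topology. */
    void onNodeJoined() {
        joinLatch.countDown();
    }

    /** Starts the worker; its body is deferred until the join latch is released. */
    Thread startTtlWorker(Runnable expireOnce) {
        Thread t = new Thread(() -> {
            try {
                joinLatch.await(); // block instead of racing getGridStartTime() before join
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            expireOnce.run();
        }, "ttl-cleanup-worker");

        t.start();
        return t;
    }
}
```

The latch only removes the startup race; the real fix may instead reorder component start so that TtlWorker is spawned after join completes.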
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10460) MVCC: Create "Cache (Failover) 3" test suite for MVCC mode.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10460:
-

 Summary: MVCC: Create "Cache (Failover) 3" test suite for MVCC 
mode.
 Key: IGNITE-10460
 URL: https://issues.apache.org/jira/browse/IGNITE-10460
 Project: Ignite
  Issue Type: Sub-task
  Components: mvcc
Reporter: Andrew Mashenkov


Create MVCC version of IgniteCacheFailoverTestSuite2 and add it to TC.

All non-relevant tests should be marked as ignored.
 Failed tests should be muted and tickets should be created for unknown failure 
reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10464) MVCC: Create "PDS 4" test suite for MVCC mode.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10464:
-

 Summary: MVCC: Create "PDS 4" test suite for MVCC mode.
 Key: IGNITE-10464
 URL: https://issues.apache.org/jira/browse/IGNITE-10464
 Project: Ignite
  Issue Type: Sub-task
  Components: mvcc
Reporter: Andrew Mashenkov


Create MVCC version of IgnitePdsTestSuite3 and add it to TC.

All non-relevant tests should be marked as ignored.
 Failed tests should be muted and tickets should be created for unknown failure 
reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10463) MVCC: Create "PDS 3" test suite for MVCC mode.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10463:
-

 Summary: MVCC: Create "PDS 3" test suite for MVCC mode.
 Key: IGNITE-10463
 URL: https://issues.apache.org/jira/browse/IGNITE-10463
 Project: Ignite
  Issue Type: Sub-task
  Components: mvcc
Reporter: Andrew Mashenkov


Create MVCC version of IgnitePdsTestSuite2 and add it to TC.

All non-relevant tests should be marked as ignored.
 Failed tests should be muted and tickets should be created for unknown failure 
reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10462) MVCC: Create "PDS 2" test suite for MVCC mode.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10462:
-

 Summary: MVCC: Create "PDS 2" test suite for MVCC mode.
 Key: IGNITE-10462
 URL: https://issues.apache.org/jira/browse/IGNITE-10462
 Project: Ignite
  Issue Type: Sub-task
  Components: mvcc
Reporter: Andrew Mashenkov


Create MVCC version of IgnitePdsTestSuite and add it to TC.

All non-relevant tests should be marked as ignored.
 Failed tests should be muted and tickets should be created for unknown failure 
reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10461) MVCC: Create "PDS 1" test suite for MVCC mode.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10461:
-

 Summary: MVCC: Create "PDS 1" test suite for MVCC mode.
 Key: IGNITE-10461
 URL: https://issues.apache.org/jira/browse/IGNITE-10461
 Project: Ignite
  Issue Type: Sub-task
  Components: mvcc
Reporter: Andrew Mashenkov


Create MVCC version of IgniteCacheFailoverTestSuite3 and add it to TC.

All non-relevant tests should be marked as ignored.
 Failed tests should be muted and tickets should be created for unknown failure 
reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10459) MVCC: Create "Cache (Failover) 2" test suite for MVCC mode.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10459:
-

 Summary: MVCC: Create "Cache (Failover) 2" test suite for MVCC 
mode.
 Key: IGNITE-10459
 URL: https://issues.apache.org/jira/browse/IGNITE-10459
 Project: Ignite
  Issue Type: Sub-task
  Components: mvcc
Reporter: Andrew Mashenkov


Create MVCC version of IgniteCacheFailoverTestSuite and add it to TC.

All non-relevant tests should be marked as ignored.
 Failed tests should be muted and tickets should be created for unknown failure 
reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10458) MVCC: Create "Cache (Failover) 1" test suite for MVCC mode.

2018-11-29 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10458:
-

 Summary: MVCC: Create "Cache (Failover) 1" test suite for MVCC 
mode.
 Key: IGNITE-10458
 URL: https://issues.apache.org/jira/browse/IGNITE-10458
 Project: Ignite
  Issue Type: Sub-task
  Components: mvcc
Reporter: Andrew Mashenkov


Create MVCC version of IgniteCacheTestSuite9 and add it to TC.

All non-relevant tests should be marked as ignored.
 Failed tests should be muted and tickets should be created for unknown failure 
reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4892: Ignite 2.5.3

2018-11-29 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/4892


---


[GitHub] ignite pull request #5521: IGNITE-10435

2018-11-29 Thread devozerov
Github user devozerov closed the pull request at:

https://github.com/apache/ignite/pull/5521


---


[jira] [Created] (IGNITE-10457) MVCC TX: GridIndexRebuildWithMvccEnabledSelfTest fails

2018-11-29 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-10457:
-

 Summary: MVCC TX: GridIndexRebuildWithMvccEnabledSelfTest fails
 Key: IGNITE-10457
 URL: https://issues.apache.org/jira/browse/IGNITE-10457
 Project: Ignite
  Issue Type: Bug
  Components: mvcc, sql
Reporter: Igor Seliverstov
 Fix For: 2.8


See the log below:
{noformat}
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
Runtime failure on bounds: [lower=MvccMaxSearchRow [], upper=MvccMinSearchRow 
[]]

at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1756)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1108)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:820)
at 
org.apache.ignite.internal.processors.query.h2.GridIndexRebuildSelfTest.putData(GridIndexRebuildSelfTest.java:191)
at 
org.apache.ignite.internal.processors.query.h2.GridIndexRebuildWithMvccEnabledSelfTest.testIndexRebuild(GridIndexRebuildWithMvccEnabledSelfTest.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$001(GridAbstractTest.java:150)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$6.evaluate(GridAbstractTest.java:2104)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$7.run(GridAbstractTest.java:2119)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: Runtime failure on 
bounds: [lower=MvccMaxSearchRow [], upper=MvccMinSearchRow []]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.visit(BPlusTree.java:1061)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.mvccUpdate(IgniteCacheOffheapManagerImpl.java:1968)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.mvccUpdate(GridCacheOffheapManager.java:2032)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.mvccUpdate(IgniteCacheOffheapManagerImpl.java:543)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.mvccSet(GridCacheMapEntry.java:1142)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.continueLoop(GridDhtTxAbstractEnlistFuture.java:463)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.init(GridDhtTxAbstractEnlistFuture.java:363)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistFuture.enlistLocal(GridNearTxEnlistFuture.java:525)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistFuture.sendBatch(GridNearTxEnlistFuture.java:420)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistFuture.sendNextBatches(GridNearTxEnlistFuture.java:167)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistFuture.map(GridNearTxEnlistFuture.java:143)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxAbstractEnlistFuture.mapOnTopology(GridNearTxAbstractEnlistFuture.java:331)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxAbstractEnlistFuture.init(GridNearTxAbstractEnlistFuture.java:246)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.updateAsync(GridNearTxLocal.java:2076)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.mvccPutAllAsync0(GridNearTxLocal.java:785)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:580)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:446)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2522)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2520)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4284)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2520)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2501)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2478)
at 

[jira] [Created] (IGNITE-10456) Lost data in cache after restart node

2018-11-29 Thread Ivan Fedotov (JIRA)
Ivan Fedotov created IGNITE-10456:
-

 Summary: Lost data in cache after restart node
 Key: IGNITE-10456
 URL: https://issues.apache.org/jira/browse/IGNITE-10456
 Project: Ignite
  Issue Type: Test
Reporter: Ivan Fedotov


An assertion error arises in 
[testGetRestartPartitioned2|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-5440530939858694701=testDetails]
 during node restarts.
It seems that some data [was 
lost|https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGetRestartTest.java#L188].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10455) MVCC: Tx timeout can cause update counters inconsistency.

2018-11-29 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-10455:
---

 Summary: MVCC: Tx timeout can cause update counters inconsistency. 
 Key: IGNITE-10455
 URL: https://issues.apache.org/jira/browse/IGNITE-10455
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Roman Kondakov


When a transaction is rolled back on a backup during the prepare step, it can 
lead to an update counter inconsistency between the primary and the backup. We 
need to fix the backup counter update.
Reproducer: {{TxWithSmallTimeoutAndContentionOneKeyTest#test}} with MVCC 
enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4480: IGNITE-8640 earlier validation of cache config

2018-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4480


---


[jira] [Created] (IGNITE-10454) [TC Bot] Create page with muted tests

2018-11-29 Thread Ryabov Dmitrii (JIRA)
Ryabov Dmitrii created IGNITE-10454:
---

 Summary: [TC Bot] Create page with muted tests
 Key: IGNITE-10454
 URL: https://issues.apache.org/jira/browse/IGNITE-10454
 Project: Ignite
  Issue Type: Task
Reporter: Ryabov Dmitrii
Assignee: Ryabov Dmitrii


We need a page with muted tests. On this page we should have the possibility to 
filter tests by fail reason (failure with a ticket link or not) and by fail rate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5463: IGNITE-10301 GridToStringBuilder is broken for cl...

2018-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5463


---